Hacker News posts about Mixtral
- Mistral CEO: China lagging in AI is a 'fairy tale' (www.msn.com)
- Mistral Vibe – Minimal CLI Coding Agent (github.com)
- Wikimedia to Partner with Amazon, Meta, Microsoft, Mistral AI, and Perplexity (enterprise.wikimedia.com)
- Show HN: I made R/place for LLMs (art.heimdal.dev)
- Show HN: Calliope AI – Free AI IDE and AI Data Lab BYOK (calliope.ai)
- Show HN: RouterLab – open-source AI API with Swiss hosting (routerlab.ch)
- Heaps do lie: debugging a memory leak in vLLM (mistral.ai)
- North Dakota law lists fake critical minerals based on coal lawyers' names (bismarcktribune.com)
- The mineral riches hiding under Greenland's ice (www.bbc.com)
- From mineral resources to oil and nuclear: the twilight of the Industrial Age (thehonestsorcerer.substack.com)
- 'Our minerals could be used to annex us': why Canada doesn't want US mining (www.theguardian.com)
- How plants create mitraphylline, a natural compound linked to anticancer effects (www.sciencedaily.com)
- Mitra 15 (French minicomputer from the 1970s) (en.wikipedia.org)
- AI in Mineral Exploration: 2025 in Review (posgeo.wordpress.com)
- Show HN: ChemistryLaTeX (chromewebstore.google.com)
- In the Path of a Raging Wildfire, a Luthier's Precious Wood (www.nytimes.com)
- Groq runs Mixtral 8x7B-32k with 500 T/s (groq.com)
- Mixtral 8x22B (mistral.ai)
- Brave Leo now uses Mixtral 8x7B as default (brave.com)
- Mistral AI launches Mixtral-Next (chat.lmsys.org)
- Mixtral 8x22B Model (twitter.com)
- Mixtral-8x22B on HuggingFace (huggingface.co)
- Mixtral 8x7B going 378 tokens per second on CPU (twitter.com)
- Mixtral 8x22B on MLX (twitter.com)
- Mixtral-8x22B-Instruct-v0.1 (huggingface.co)
- Mixtral 8x22B latest-model benchmarks (www.promptzone.com)
- The latest major open LLM releases: Mixtral, Llama 3, Phi-3, and OpenELM (magazine.sebastianraschka.com)
- Mixtral AI generated comment to US regulators about Open Models (www.regulations.gov)
- Interactive poetry breeding through Mixtral base model LLMs (www.flourish.ing)
- Zephyr-orpo-141B-A35B: Mixtral 8x22B fine-tune by HuggingFace (huggingface.co)
- Tess-2.0-Mixtral-8x22B (huggingface.co)
- Together AI adds Mixtral-8x22B (twitter.com)
- DBRX vs. Mixtral vs. GPT: create your own benchmark (www.promptfoo.dev)
- Supporting Mixtral in GPT-fast through torch.compile (thonking.substack.com)
- How Did Open Source Catch Up to OpenAI? [Mixtral-8x7B] [video] (www.youtube.com)
- Show HN: Speeding up LLM inference 2x times (possibly) (asciinema.org)