Hacker News posts about Mixtral
- Mistral Codestral 25.01 (twitter.com)
- Codestral 25.01 by Mistral.ai (mistral.ai)
- Codestral 25.01 (mistral.ai)
- Pixtral Deployment on AWS [video] (www.youtube.com)
- Groq runs Mixtral 8x7B-32k with 500 T/s (groq.com)
- Mixtral of experts (mistral.ai)
- Mistral "Mixtral" 8x7B 32k model [magnet] (twitter.com)
- Mixtral 8x22B (mistral.ai)
- Brave Leo now uses Mixtral 8x7B as default (brave.com)
- Mistral AI launches Mixtral-Next (chat.lmsys.org)
- Visualizing expert firing frequencies in Mixtral MoE (mixtral-moe-vis-d726c4a10ef5.herokuapp.com)
- Mistral's mixtral-8x7B-32kseqlen on Vercel (twitter.com)
- Mixtral 8x22B Model (twitter.com)
- Show HN: Code interpreter with mixtral-8x7B-instruct (github.com)
- Mixtral-8x22B on HuggingFace (huggingface.co)
- Mixtral 8x7B going 378 tokens per second on CPU (twitter.com)
- Mixtral 8x22B on MLX (twitter.com)
- Mixtral-8x22B-Instruct-v0.1 (huggingface.co)
- Mixtral 8x22B latest-model benchmarks (www.promptzone.com)
- Argilla released Notux 8x7B - DPO fine-tune of Mixtral 8x7B (huggingface.co)
- Show HN: I made a VS Code extension where you can use Mixtral 8x7B for free (marketplace.visualstudio.com)
- Llamafile 0.4 now with Mixtral support (github.com)
- New Mixtral HQQ Quantized 4-bit/2-bit configuration (huggingface.co)
- The latest major open LLM releases: Mixtral, Llama 3, Phi-3, and OpenELM (magazine.sebastianraschka.com)
- Mixtral AI-generated comment to US regulators about Open Models (www.regulations.gov)
- Mixtral on MLX (github.com)