Hacker News posts about SmolLM2
- SmolLM2 (simonwillison.net)
- SmolLM2 (LLM) running in browser and WebGPU (simonwillison.net)
- SmolLM2: The new, best, and open small language model (huggingface.co)
- Structured Generation with SmolLM2 running in browser and WebGPU (simonwillison.net)
- smollm2 (ollama.com)
- Using pip to install a Large Language Model that's under 100MB (simonwillison.net)
- SmolLM3: Smol, multilingual, long-context reasoner LLM (huggingface.co)
- SmolLM – Fast and Remarkably Powerful (huggingface.co)
- Instant in-browser demo of SmolLM (huggingface.co)
- Show HN: Cactus – Ollama for Smartphones (github.com)
- Full LLM training and evaluation toolkit (github.com)
- SmolVLM – small yet mighty Vision Language Model (simonwillison.net)
- SmolVLM – Realtime Vision Language Model Demo (github.com)
- Smolmodels (www.plexe.ai)
- Show HN: AsianMOM – WebGPU Vision-LLM app that roasts you like ur mom in-browser (asianmom.kuber.studio)
- Show HN: Website Content from Query Parameters (www.params.org)
- New 2B vision language model that consumes the least memory (huggingface.co)