Hacker News posts about Unsloth
- Unsloth – Train LLMs 2x faster with 70% less VRAM (github.com)
- How to Run Qwen-Image-2512 Locally in ComfyUI (unsloth.ai)
- Run Qwen3-Coder-480B-A35B Locally with Unsloth Dynamic Quants (docs.unsloth.ai)
- Train your own R1 reasoning model with Unsloth (unsloth.ai)
- Unsloth Dynamic GGUFs: DeepSeek (671B) outperforms SOTA models (docs.unsloth.ai)
- Unsloth Dynamic 2.0 GGUFs (docs.unsloth.ai)
- Unsloth Now Supports GRPO (github.com)
- Unsloth – Dynamic 4-bit Quantization (unsloth.ai)
- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co)
- Running Unsloth with all perks on DGX Spark (bartusiak.ai)
- Fine-Tuning LLMs with Nvidia DGX Spark and Unsloth (docs.unsloth.ai)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Finetune and Run Llama 4 with Unsloth (unsloth.ai)
- Unsloth: Dynamic 4-bit Quantization (2024) (unsloth.ai)
- Use GRPO policy locally with Qwen and Unsloth (github.com)
- Make LLM Fine-Tuning 2x Faster with Unsloth and HuggingFace TRL (huggingface.co)
- Unsloth: GPT-OSS (docs.unsloth.ai)
- Unsloth improvements to gguf tool calling for Qwen3 (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Fine-Tuning Ollama Models with Unsloth (medium.com)
- Continued LLM Pretraining with Unsloth (unsloth.ai)
- Show HN: Mutable.ai Codebase chat that uses a Wiki for RAG (wiki.mutable.ai)
- Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com)