Hacker News posts about Unsloth
- Unsloth Dynamic GGUFs: DeepSeek (671B) outperforms SOTA models (docs.unsloth.ai)
- Show HN: Write deep learning code on your laptop and run it instantly on GPUs (aiengineering.academy)
- Show HN: I hate paying for GPUs while developing – this is how I solved it (adithyask.medium.com)
- GPT-OSS Reinforcement Learning (docs.unsloth.ai)
- Run Qwen3-Coder-480B-A35B Locally with Unsloth Dynamic Quants (docs.unsloth.ai)
- Train your own R1 reasoning model with Unsloth (unsloth.ai)
- Unsloth: 30x faster AI training (unsloth.ai)
- Unsloth Dynamic 2.0 GGUFs (docs.unsloth.ai)
- Unsloth Now Supports GRPO (github.com)
- Unsloth – Dynamic 4-bit Quantization (unsloth.ai)
- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Finetune and Run Llama 4 with Unsloth (unsloth.ai)
- Unsloth: Dynamic 4-bit Quantization (2024) (unsloth.ai)
- Use GRPO policy locally with Qwen and Unsloth (github.com)
- Make LLM Fine-Tuning 2x Faster with Unsloth and HuggingFace TRL (huggingface.co)
- Unsloth: GPT-OSS (docs.unsloth.ai)
- Unsloth improvements to gguf tool calling for Qwen3 (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Fine-Tuning Ollama Models with Unsloth (medium.com)
- Continued LLM Pretraining with Unsloth (unsloth.ai)
- Show HN: Mutable.ai Codebase chat that uses a Wiki for RAG (wiki.mutable.ai)
- Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com)