Hacker News posts about Unsloth
- Unsloth Dynamic 2.0 GGUFs (docs.unsloth.ai)
- Unsloth: GPT-OSS (docs.unsloth.ai)
- Unsloth improvements to gguf tool calling for Qwen3 (huggingface.co)
- Qwen3-Coder: How to Run Locally (docs.unsloth.ai)
- Fine-tuning GPT-OSS-20B (colab.research.google.com)
- Show HN: GPT OSS: How to run and fine-tune (docs.unsloth.ai)
- Run Qwen3-Coder-480B-A35B Locally with Unsloth Dynamic Quants (docs.unsloth.ai)
- Train your own R1 reasoning model with Unsloth (unsloth.ai)
- Unsloth: 30x faster AI training (unsloth.ai)
- Unsloth Now Supports GRPO (github.com)
- Unsloth – Dynamic 4-bit Quantization (unsloth.ai)
- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Finetune and Run Llama 4 with Unsloth (unsloth.ai)
- Unsloth: Dynamic 4-bit Quantization (2024) (unsloth.ai)
- Use GRPO policy locally with Qwen and Unsloth (github.com)
- Make LLM Fine-Tuning 2x Faster with Unsloth and HuggingFace TRL (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Fine-Tuning Ollama Models with Unsloth (medium.com)
- Continued LLM Pretraining with Unsloth (unsloth.ai)
- Show HN: Mutable.ai Codebase chat that uses a Wiki for RAG (wiki.mutable.ai)
- Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com)
- Show HN: Finetune Llama 3.2 Vision in a Colab (colab.research.google.com)
- Show HN: I built a website where you can easily fine-tune Llama 3.1 models (www.tunellama.com)
- Show HN: I built cracked engineers – a new platform for technical job roles only (www.crackedengineers.com)