Hacker News posts about Unsloth
- DeepSeek-R1-0528 Dynamic 1-bit (unsloth.ai)
- How to Run DeepSeek-R1-0528 Locally (unsloth.ai)
- Train your own R1 reasoning model with Unsloth (unsloth.ai)
- Unsloth: 30x faster AI training (unsloth.ai)
- Unsloth Now Supports GRPO (github.com)
- Unsloth – Dynamic 4-bit Quantization (unsloth.ai)
- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co)
- Finetune and Run Llama 4 with Unsloth (unsloth.ai)
- Unsloth: Dynamic 4-bit Quantization (2024) (unsloth.ai)
- Use GRPO policy locally with Qwen and Unsloth (github.com)
- Make LLM Fine-Tuning 2x Faster with Unsloth and HuggingFace TRL (huggingface.co)
- Unsloth Dynamic v2.0 GGUFs (unsloth.ai)
- Fine-Tuning Ollama Models with Unsloth (medium.com)
- Continued LLM Pretraining with Unsloth (unsloth.ai)
- Show HN: Mutable.ai Codebase chat that uses a Wiki for RAG (wiki.mutable.ai)
- Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com)
- Show HN: Finetune Llama 3.2 Vision in a Colab (colab.research.google.com)
- Show HN: I built a website where you can easily fine-tune Llama 3.1 models (www.tunellama.com)
- Show HN: I built Cracked Engineers – a new platform for technical job roles only (www.crackedengineers.com)
- Show HN: Open-source fine-tuning in a Colab notebook (colab.research.google.com)
- Finetune language models 30x faster (unsloth.ai)