Hacker News posts about Unsloth
- Finetune and Run Llama 4 with Unsloth (unsloth.ai)
- How to Run DeepSeek-V3-0324 Locally (unsloth.ai)
- Run DeepSeek-V3-0324 locally (www.unsloth.ai)
- 2.71bit DeepSeek-V3-0324 (unsloth.ai)
- Train your own R1 reasoning model with Unsloth (unsloth.ai)
- Unsloth: 30x faster AI training (unsloth.ai)
- Unsloth Now Supports GRPO (github.com)
- Unsloth – Dynamic 4-bit Quantization (unsloth.ai)
- Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co)
- Unsloth: Dynamic 4-bit Quantization (2024) (unsloth.ai)
- Use GRPO policy locally with Qwen and Unsloth (github.com)
- Make LLM Fine-Tuning 2x Faster with Unsloth and HuggingFace TRL (huggingface.co)
- Fine-Tuning Ollama Models with Unsloth (medium.com)
- Continued LLM Pretraining with Unsloth (unsloth.ai)
- Show HN: Mutable.ai Codebase chat that uses a Wiki for RAG (wiki.mutable.ai)
- Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com)
- Show HN: Finetune Llama 3.2 Vision in a Colab (colab.research.google.com)
- Show HN: I built a website where you can easily fine-tune Llama 3.1 models (www.tunellama.com)
- Show HN: I built cracked engineers – a new platform for technical job roles only (www.crackedengineers.com)
- Show HN: Open-source fine-tuning in a Colab notebook (colab.research.google.com)
- Finetune language models 30x faster (unsloth.ai)
- Show HN: Afiyah – Snap, Understand Ingredients, Live Clean (a3l17lcdgz1ezx-7860.proxy.runpod.net)
- Show HN: Finetune, build and deploy LLMs with AIKit (sozercan.github.io)
- Run DeepSeek R1 Dynamic 1.58-bit (unsloth.ai)
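Several of the posts above (the Colab fine-tuning notebooks, the 2x-faster HuggingFace TRL post, the Llama and vision fine-tuning demos) revolve around the same basic workflow: load a 4-bit quantized base model through Unsloth's FastLanguageModel, attach LoRA adapters, and hand the result to TRL's SFTTrainer. The sketch below illustrates that pattern, assuming the unsloth, trl, transformers, and datasets packages are installed; the model checkpoint, tiny in-memory dataset, and hyperparameters are illustrative placeholders, and exact argument names can shift between library versions.

```python
# Minimal Unsloth QLoRA fine-tuning sketch (illustrative, not taken from the posts above).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load a 4-bit quantized base model; the checkpoint name is an assumed example.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset with a plain "text" column; a real run would load
# an instruction-tuning dataset and format it into this shape.
dataset = Dataset.from_dict({
    "text": ["### Instruction:\nSay hello.\n### Response:\nHello!"]
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```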