Hacker News posts about LLaMA 7B
Related:
- Show HN: Finetune Llama 3.2 Vision in a Colab (colab.research.google.com)
- Running LLaMA 7B on a 64GB M2 MacBook Pro with Llama.cpp (til.simonwillison.net); see the usage sketch after this list
- Alpaca: An Instruct Tuned LLaMA 7B – Responses on par with text-davinci-003 (crfm.stanford.edu)
- EagleX 1.7T: Soaring past LLaMA 7B 2T in both English and Multi-lang evals (substack.recursal.ai)
- LLaMA 7B (Alpaca) running on Google Pixel 7 Pro (twitter.com)
- Using LLaMA 7B LLM on Raspberry Pi 4 (twitter.com)
- Jlama (Java) outperforms llama.cpp in F32 Llama 7B Model (twitter.com)
- LLaMA 7B model running on 4GB RAM Raspberry Pi 4 (github.com)
- Puma: Secure Inference of LLaMA-7B in Five Minutes (huggingface.co)
- What’s the difference between Llama 2 7B, 13B, and 70B? (replicate.com)
- I Conducted Experiments with the Alpaca/LLaMA 7B Language Model (hackernoon.com)
- Mistral AI 7B – beats Llama 2 7B and 13B (www.mystic.ai)
- Phi-1.5 (1.3B Outperforms Llama 2 7B) (huggingface.co)
- MLX: Fine-tune Llama 7B or Mistral 7B with 32GB (github.com)
- Llama-2-7B-chat-mlx for Apple’s new MLX framework (huggingface.co)
- Devin fine-tuning a 7B Llama model (twitter.com)
- Fine-tune Llama 2 (7B) with an API (replicate.com)
- LLaMA 7B model on a 4GB RAM Raspberry Pi 4 (twitter.com)
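Several of the posts above are local-inference how-tos (most directly, the llama.cpp on a 64GB M2 MacBook Pro write-up). As a rough illustration rather than anything taken from those posts, the sketch below uses the llama-cpp-python bindings to query a locally downloaded, quantized LLaMA 7B checkpoint; the model path, prompt, and parameter values are placeholders.

```python
# Minimal sketch: prompting a local, quantized LLaMA 7B via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF checkpoint already on disk;
# the path below is a placeholder, not a file referenced by any post above.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # placeholder path to quantized weights
    n_ctx=2048,    # context window
    n_threads=8,   # tune for the host CPU (e.g. an M2 MacBook Pro)
)

result = llm(
    "Q: Name three mammals that lay eggs.\nA:",
    max_tokens=64,
    stop=["Q:", "\n\n"],  # stop before the model starts a new question
)
print(result["choices"][0]["text"].strip())
```

A 4-bit quantized 7B checkpoint is only a few gigabytes on disk, which is why the same model shows up in the list above running on a 64GB laptop, a Pixel 7 Pro, and a 4GB Raspberry Pi 4.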