Hacker News posts about Llama 3
Llama 3 is Meta's family of open-weight large language models. The posts below cover the official releases (Llama 3, 3.1, and 3.3), from-scratch reimplementations, fine-tunes, and tooling for running the models locally.
- D2F – We made dLLMs 2.5x faster than LLaMA3 (arxiv.org)
- Show HN: I integrated Ollama into Excel to run local LLMs (pythonandvba.com)
- A Python RAG tutorial with Pinecone and Ollama 3.2 with a code example (blog.yasuflores.me)
- Show HN: My Agentic Newsletter Project (iliareingold.com)
- Meta Llama 3 (llama.meta.com)
- Llama3 implemented from scratch (github.com)
- Llama 3 implemented in pure NumPy (docs.likejazz.com)
- Llama 3-V: Matching GPT4-V with a 100x smaller model and 500 dollars (aksh-garg.medium.com)
- Llama 3.1 (llama.meta.com)
- Llama-3.3-70B-Instruct (huggingface.co)
- Llama 3.1 Omni Model (github.com)
- Cost of self hosting Llama-3 8B-Instruct (blog.lytix.co)
- llama-fs: A self-organizing file system with llama 3 (github.com)
- Llama 3.1 in C (github.com)
- Run llama3 locally with 1M token context (ollama.com)
- Show HN: Llama 3.3 70B Sparse Autoencoders with API access (www.goodfire.ai)
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B (github.com)
- Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book (www.understandingai.org)
- Show HN: Tune LLaMa3.1 on Google Cloud TPUs (github.com)
- Llama 3 8B is almost as good as Wizard 2 8x22B (huggingface.co)
- Implementing LLaMA3 in 100 Lines of Pure Jax (saurabhalone.com)
- Longwriter – Increase llama3.1 output to 10k words (github.com)
- Hermes 3: The First Fine-Tuned Llama 3.1 405B Model (lambdalabs.com)
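Several of the posts above (the pure-NumPy, pure-Jax, and from-scratch implementations) center on re-implementing Llama 3's transformer internals. As a flavor of what those projects involve, here is a minimal sketch of causal scaled dot-product attention in NumPy; it is a generic single-head illustration, not code from any of the linked repos, and omits Llama-specific details such as RoPE and grouped-query attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(q, k, v):
    # q, k, v: (seq_len, d) arrays; returns a (seq_len, d) array.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (seq_len, seq_len) similarity scores
    # Causal mask: each position may attend only to itself and earlier positions.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = causal_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because of the causal mask, the first output row depends only on the first value row, which makes the masking easy to sanity-check.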