Hacker News posts about Llama.cpp
- Llama.cpp 30B runs with only 6GB of RAM now (github.com)
- Llama.cpp: Full CUDA GPU Acceleration (github.com)
- How Is LLaMa.cpp Possible? (finbarr.ca)
- Llama.cpp now has a web interface (github.com)
- Running LLaMA 7B on a 64GB M2 MacBook Pro with Llama.cpp (til.simonwillison.net)
- Show HN: Open-source load balancer for llama.cpp (github.com)
- Why MMAP in llama.cpp hides true memory usage (twitter.com)
- Performance of llama.cpp on Apple Silicon A-series (github.com)
- llama.cpp: Roadmap May 2023 (github.com)
- Running Llama.cpp on AWS Instances (github.com)
- Revert for jart’s llama.cpp MMAP miracles (github.com)
- Show HN: Llama.go – port of llama.cpp to pure Go (github.com)
- WIP Llama.cpp Vulkan Implementations (github.com)
- Show HN: Grammar Generator App for Llama.cpp (grammar.intrinsiclabs.ai)
- Gemma Is Added to Llama.cpp (github.com)
- LLaVA C++ server (based on llama.cpp) (github.com)
- Llama.cpp Now Part of the Nvidia RTX AI Toolkit (developer.nvidia.com)
- Grok-1 Support for Llama.cpp (github.com)
- Jlama (Java) outperforms llama.cpp in F32 Llama 7B Model (twitter.com)
- Llama.cpp Working on Support for Llama3 (github.com)
- llama.cpp now supports StarCoder model series (github.com)