Hacker News posts about LLMs
LLM is an acronym for Large Language Model, a class of artificial-intelligence language-processing systems that can understand and generate human-like text.
Related:
- LLMs corrupt your documents when you delegate (arxiv.org)
- Train Your Own LLM from Scratch (github.com)
- Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s (www.cocoawithlove.com)
- Show HN: How LLMs Work – Interactive visual guide based on Karpathy's lecture (ynarwal.github.io)
- Let's talk about LLMs (www.b-list.org)
- DeepSeek-V4-Flash means LLM steering is interesting again (www.seangoedecke.com)
- LLMs Are Not a Higher Level of Abstraction (www.lelanthran.com)
- AMÁLIA and the future of European Portuguese LLMs (duarteocarmo.com)
- Advanced Quantization Algorithm for LLMs (github.com)
- Wiki Builder: Skill to Build LLM Knowledge Bases (academy.dair.ai)
- Running local LLMs offline on a ten-hour flight (deploy.live)
- Making LLM Training Faster with Unsloth and NVIDIA (unsloth.ai)
- Can LLMs model real-world systems in TLA+? (www.sigops.org)
- UK sovereign LLM inference (relax.ai)
- We decreased our LLM costs with Opus (www.mendral.com)
- LLM Policy for Rust Compiler (github.com)
- Rars: a Rust RAR implementation, mostly written by LLMs (bitplane.net)
- Show HN: Reducing LLM input tokens by 70% (adola.app)