Hacker News posts about LLMs
LLM is an acronym for Large Language Model, a type of artificial-intelligence system that can understand and generate human-like text.
Related:
- iPhone 17 Pro Demonstrated Running a 400B LLM (twitter.com)
- The L in "LLM" Stands for Lying (acko.net)
- LLM Architecture Gallery (sebastianraschka.com)
- How I write software with LLMs (www.stavros.io)
- LLMs work best when the user defines their acceptance criteria first (blog.katanaquant.com)
- Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy (gitlab.redox-os.org)
- LLM Writing Tropes.md (tropes.fyi)
- BitNet: Inference framework for 1-bit LLMs (github.com)
- Ensu – Ente’s Local LLM app (ente.com)
- LLMs can be exhausting (tomjohnell.com)
- A tool that removes censorship from open-weight LLMs (github.com)
- Sarvam 105B, the first competitive Indian open source LLM (www.sarvam.ai)
- Are LLM merge rates not getting better? (entropicthoughts.com)
- From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem (news.future-shock.ai)
- LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language? (dnhkng.github.io)
- LLMs predict my coffee (dynomight.net)
- Reliable Software in the LLM Era (quint-lang.org)
- EsoLang-Bench: Evaluating Genuine Reasoning in LLMs via Esoteric Languages (esolang-bench.vercel.app)
- LLMs can unmask pseudonymous users at scale with surprising accuracy (arstechnica.com)
- I don't use LLMs for programming (neilmadden.blog)
- LLM Doesn't Write Correct Code. It Writes Plausible Code (twitter.com)