Hacker News posts about LLMs
An LLM (large language model) is a machine learning model trained on large amounts of text that generates human-like text responses.
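As a minimal sketch of that text generation in practice, the snippet below uses the Hugging Face `transformers` pipeline with the small `gpt2` model (the library, model choice, and prompt are assumptions for illustration, not anything referenced by the posts listed here):

```python
# Minimal sketch: text generation with a small open model.
# Assumes `pip install transformers torch` and that gpt2 weights can be downloaded.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])  # prints the prompt plus the model's continuation
```
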
- A small number of samples can poison LLMs of any size (www.anthropic.com)
- LLMs can get "brain rot" (llm-brain-rot.github.io)
- Neural audio codecs: how to get audio into LLMs (kyutai.org)
- My trick for getting consistent classification from LLMs (verdik.substack.com)
- Poker Tournament for LLMs (pokerbattle.ai)
- LLMs are mortally terrified of exceptions (twitter.com)
- Writing an LLM from scratch, part 22 – training our LLM (www.gilesthomas.com)
- The Smol Training Playbook: The Secrets to Building World-Class LLMs (huggingface.co)
- The end of the rip-off economy: consumers use LLMs against information asymmetry (www.economist.com)
- Responses from LLMs are not facts (stopcitingai.com)
- Our LLM-controlled office robot can't pass butter (andonlabs.com)
- AdapTive-LeArning Speculator System (ATLAS): Faster LLM inference (www.together.ai)
- Should LLMs just treat text content as an image? (www.seangoedecke.com)
- The security paradox of local LLMs (quesma.com)
- LLMs are getting better at character-level text manipulation (blog.burkert.me)
- AGI is not imminent, and LLMs are not the royal road to getting there (garymarcus.substack.com)
- Reasoning LLMs are wandering solution explorers (arxiv.org)