Hacker News posts about LLMs
LLM is an acronym for Large Language Model: an artificial-intelligence language-processing system that can understand and generate human-like text.
Related:
- A small number of samples can poison LLMs of any size (www.anthropic.com)
- LLMs can get "brain rot" (llm-brain-rot.github.io)
- Neural audio codecs: how to get audio into LLMs (kyutai.org)
- My trick for getting consistent classification from LLMs (verdik.substack.com)
- Poker Tournament for LLMs (pokerbattle.ai)
- LLMs are mortally terrified of exceptions (twitter.com)
- Writing an LLM from scratch, part 22 – training our LLM (www.gilesthomas.com)
- The end of the rip-off economy: consumers use LLMs against information asymmetry (www.economist.com)
- Responses from LLMs are not facts (stopcitingai.com)
- Which table format do LLMs understand best? (www.improvingagents.com)
- Our LLM-controlled office robot can't pass butter (andonlabs.com)
- AdapTive-LeArning Speculator System (ATLAS): Faster LLM inference (www.together.ai)
- Should LLMs just treat text content as an image? (www.seangoedecke.com)
- The security paradox of local LLMs (quesma.com)
- LLMs are getting better at character-level text manipulation (blog.burkert.me)
- AGI is not imminent, and LLMs are not the royal road to getting there (garymarcus.substack.com)
- Reasoning LLMs are wandering solution explorers (arxiv.org)