Hacker News posts about LLMs
LLM is an acronym for Large Language Model, a class of artificial intelligence systems that process language and can understand and generate human-like text.
Related:
- Show HN: Clippy – 90s UI for local LLMs (felixrieseberg.github.io)
- Human coders are still better than LLMs (antirez.com)
- Show HN: My LLM CLI tool can run tools now, from Python code or plugins (simonwillison.net)
- As an experienced LLM user, I don't use generative LLMs often (minimaxir.com)
- LLMs get lost in multi-turn conversation (arxiv.org)
- After months of coding with LLMs, I'm going back to using my brain (albertofortin.com)
- Writing an LLM from scratch, part 13 – attention heads are dumb (www.gilesthomas.com)
- Run LLMs on Apple Neural Engine (ANE) (github.com)
- LLM function calls don't scale; code orchestration is simpler, more effective (jngiam.bearblog.dev)
- Dummy's Guide to Modern LLM Sampling (rentry.co)
- Peer Programming with LLMs, for Senior+ Engineers (pmbanugo.me)
- Google Gemini has the worst LLM API (venki.dev)
- The behavior of LLMs in hiring decisions: Systemic biases in candidate selection (davidrozado.substack.com)
- Build real-time knowledge graph for documents with LLM (cocoindex.io)
- LLM codegen go brrr – Parallelization with Git worktrees and tmux (www.skeptrune.com)
- Why do LLMs have emergent properties? (www.johndcook.com)