Hacker News posts about LLMs
LLM is an acronym for Large Language Model, an artificial-intelligence system that can understand and generate human-like text.
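For readers who have never run one, here is a minimal sketch of text generation with an open model. It assumes the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration; none of the posts below names a specific library or model.

```python
# Minimal sketch (illustrative assumption: Hugging Face transformers + the small gpt2 model).
from transformers import pipeline

# Build a text-generation pipeline around a small open checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Feed a prompt and let the model continue it for up to 30 new tokens.
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```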
Related:
- OK, I can partly explain the LLM chess weirdness now (dynomight.net)
- QwQ: Alibaba's O1-like reasoning LLM (qwenlm.github.io)
- New LLM optimization technique slashes memory costs (venturebeat.com)
- Llama.cpp guide – Running LLMs locally on any hardware, from scratch (steelph0enix.github.io)
- Fast LLM Inference From Scratch (using CUDA) (andrewkchan.dev)
- LLM abstraction levels inspired by fish eye lens (wattenberger.com)
- Full LLM training and evaluation toolkit (github.com)
- Task-specific LLM evals that do and don't work (eugeneyan.com)
- Show HN: Gemini LLM corrects ASR YouTube transcripts (ldenoue.github.io)
- Taming LLMs – A Practical Guide to LLM Pitfalls with Open Source Software (www.souzatharsis.com)
- The industry structure of LLM makers (calpaterson.com)
- Meta Uses LLMs to Improve Incident Response (www.tryparity.com)
- An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability (adamkarvonen.github.io)
- Show HN: Prompt Engine – Auto pick LLMs based on your prompts (jigsawstack.com)
- Establishing an etiquette for LLM use on Libera.Chat (libera.chat)
- Automated reasoning to remove LLM hallucinations (aws.amazon.com)
- Show HN: DataFuel.dev – Turn websites into LLM-ready data (www.datafuel.dev)
- Archetypes of LLM apps (www.contraption.co)
- How We Optimize LLM Inference for AI Coding Assistant (www.augmentcode.com)