Hacker News posts about LLMs
LLM is an acronym for Large Language Model: an artificial intelligence system for language processing that can understand and generate human-like text.
Related:
- A non-anthropomorphized view of LLMs (addxorrol.blogspot.com)
- I'm dialing back my LLM usage (zed.dev)
- Smollm3: Smol, multilingual, long-context reasoner LLM (huggingface.co)
- Compiling LLMs into a MegaKernel: A path to low-latency inference (zhihaojia.medium.com)
- Everything around LLMs is still magical and wishful thinking (dmitriid.com)
- LLMs should not replace therapists (arxiv.org)
- LLM code generation may lead to an erosion of trust (jaysthoughts.com)
- SymbolicAI: A neuro-symbolic perspective on LLMs (github.com)
- LLMs pose an interesting problem for DSL designers (kirancodes.me)
- Lossless LLM 3x Throughput Increase by LMCache (github.com)
- Salesforce study finds LLM agents flunk CRM and confidentiality tests (www.theregister.com)
- Optimizing Tool Selection for LLM Workflows with Differentiable Programming (viksit.substack.com)
- How OpenElections uses LLMs (thescoop.org)
- Design Patterns for Securing LLM Agents Against Prompt Injections (simonwillison.net)
- I'm Building LLM for Satellite Data EarthGPT.app (www.earthgpt.app)
- Prompting LLMs is not engineering (dmitriid.com)
- Agentic Misalignment: How LLMs could be insider threats (www.anthropic.com)
- What LLMs Know About Their Users (www.schneier.com)
- The Emperor's New LLM (dayafter.substack.com)
- Pitfalls of premature closure with LLM assisted coding (www.shayon.dev)
- Mapping LLMs over excel saved my passion for game dev (danieltan.weblog.lol)