Hacker News posts about GPT-OSS-120B
- Show HN: AIDictation – zero data retention dictation app (aidictation.com)
- Show HN: The Port Augusta Times – "All the news that's fit to generate" (henrygabriels.github.io)
- GPT-OSS-120B runs on just 8GB VRAM & 64GB+ system RAM (old.reddit.com)
- Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs (www.baseten.co)
- GPT-OSS 120B Runs at 3000 tokens/sec on Cerebras (www.cerebras.ai)
- Cerebras now supports OpenAI GPT-OSS-120B at 3k tokens per sec (www.cerebras.ai)
- OpenAI/GPT-OSS-120B · Hugging Face (huggingface.co)
- Using Codex CLI with GPT-OSS:120B on an Nvidia DGX Spark via Tailscale (til.simonwillison.net)
- How Benchmaxxed is GPT-OSS-120B? (cmart.blog)
- A first look at GPT-OSS-120B's coding ability (blog.brokk.ai)
- GPT-OSS-120B (high): API Provider Performance Benchmarking (artificialanalysis.ai)
- GPT-OSS 120B Writes a Lisp in Go Fast (elite-ai-assisted-coding.dev)
- GPT-OSS-120B Lisp interpreter in Go (gist.github.com)
- Show HN: Distil expenses – personal finance agent (github.com)
- Self-host GPT-OSS on 2xH100s (northflank.com)