Hacker News posts about AMD MI300
- CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs (chipsandcheese.com)
- AMD's MI300X Outperforms Nvidia's H100 for LLM Inference (www.blog.tensorwave.com)
- Testing AMD's Giant MI300X (chipsandcheese.com)
- The AMD Radeon Instinct MI300A's Giant Memory Subsystem (chipsandcheese.com)
- Boosting Computational Fluid Dynamics Performance with AMD MI300X (rocm.blogs.amd.com)
- An EPYC Exclusive for Azure: AMD's MI300C – By George Cozma (chipsandcheese.com)
- Attention is NOT all you need: Qwerky-72B trained using only 8 AMD MI300X GPUs (substack.recursal.ai)
- Linux 6.9 Adding AMD MI300 Row Retirement Support for Problematic HBM Memory (www.phoronix.com)
- How the "Antares" MI300 GPU Ramp Will Save AMD's Datacenter Business (www.nextplatform.com)
- AMD MI300X vs. Nvidia H100 LLM Benchmarks (blog.runpod.io)
- Harnessing AI Compute Power Atop Open-Source Software: 8 X AMD MI300X (www.phoronix.com)
- Using AMD MI300X for High-Throughput, Low-Cost LLM Inference (www.herdora.com)
- AMD's Long and Winding Road to the Hybrid CPU-GPU Instinct MI300A (www.nextplatform.com)
- MI300X vs. H100 vs. H200 Benchmark Part 1: Training (newsletter.semianalysis.com)
- AMD MI300X performance compared with Nvidia H100 (www.tomshardware.com)
- Nvidia H100 vs. AMD MI300X (blog.runpod.io)
- Sizing Up MI300A's GPU (chipsandcheese.com)
- Take AMD MI300X for a test drive (tensorwave.com)
- Microsoft beats H200 DeepSeek inference with MI300 (techcommunity.microsoft.com)
- My First Multi-GPU Kernel: Writing All-to-All for AMD MI300X (gau-nernst.github.io)
- The Microsoft Azure HBv5 and AMD MI300C (www.servethehome.com)