Hackernews posts about MI300A
- The AMD Radeon Instinct MI300A's Giant Memory Subsystem (chipsandcheese.com)
- AMD's Long and Winding Road to the Hybrid CPU-GPU Instinct MI300A (www.nextplatform.com)
- Sizing Up MI300A's GPU (chipsandcheese.com)
- Gigabyte G383-R80-AAP1 AMD Instinct MI300A Server Review – ServeTheHome (www.servethehome.com)
- Microsoft beat H200 Deepseek inference with MI300 (techcommunity.microsoft.com)
- AMD's MI300X Outperforms Nvidia's H100 for LLM Inference (www.blog.tensorwave.com)
- Testing AMD's Giant MI300X (chipsandcheese.com)
- Boosting Computational Fluid Dynamics Performance with AMD MI300X (rocm.blogs.amd.com)
- An EPYC Exclusive for Azure: AMD's MI300C – By George Cozma (chipsandcheese.com)
- Attention is NOT all you need: Qwerky-72B trained using only 8 AMD MI300X GPUs (substack.recursal.ai)
- Linux 6.9 Adding AMD MI300 Row Retirement Support for Problematic HBM Memory (www.phoronix.com)
- How the "Antares" MI300 GPU Ramp Will Save AMD's Datacenter Business (www.nextplatform.com)
- AMD MI300X vs. Nvidia H100 LLM Benchmarks (blog.runpod.io)
- Harnessing AI Compute Power Atop Open-Source Software: 8 X AMD MI300X (www.phoronix.com)
- Using AMD MI300X for High-Throughput, Low-Cost LLM Inference (www.herdora.com)
- MI300X vs. H100 vs. H200 Benchmark Part 1: Training (newsletter.semianalysis.com)