Hacker News posts about the AMD MI300X
The AMD Instinct MI300X is a high-performance data-center GPU accelerator designed for generative AI workloads, in particular large language model (LLM) training and inference, as well as high-performance computing (HPC).
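Many of the inference posts below use the MI300X through the ROCm software stack with standard frameworks. As a minimal, illustrative sketch (not taken from any of the linked posts), a ROCm build of PyTorch exposes AMD GPUs through the familiar torch.cuda API, so a quick sanity check looks like this:

```python
# Minimal sketch: verify that a ROCm build of PyTorch sees the MI300X and
# run a small matmul on it. On ROCm, PyTorch reuses the torch.cuda namespace
# for AMD GPUs, so no AMD-specific API calls are needed here.
import torch

if torch.cuda.is_available():
    # On an MI300X system this typically reports "AMD Instinct MI300X".
    print(torch.cuda.get_device_name(0))

    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    c = a @ b  # executed on the accelerator via ROCm/HIP
    print(c.shape)
else:
    print("No ROCm/CUDA device visible to PyTorch")
```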
- AMD's MI300X Outperforms Nvidia's H100 for LLM Inference (www.blog.tensorwave.com)
- Testing AMD's Giant MI300X (chipsandcheese.com)
- Boosting Computational Fluid Dynamics Performance with AMD MI300X (rocm.blogs.amd.com)
- Attention is NOT all you need: Qwerky-72B trained using only 8 AMD MI300X GPUs (substack.recursal.ai)
- AMD MI300X vs. Nvidia H100 LLM Benchmarks (blog.runpod.io)
- Harnessing AI Compute Power Atop Open-Source Software: 8 X AMD MI300X (www.phoronix.com)
- Using AMD MI300X for High-Throughput, Low-Cost LLM Inference (www.herdora.com)
- MI300X vs. H100 vs. H200 Benchmark Part 1: Training (newsletter.semianalysis.com)
- AMD MI300X performance compared with Nvidia H100 (www.tomshardware.com)
- Nvidia H100 vs. AMD MI300X (blog.runpod.io)
- Take AMD MI300X for a test drive (tensorwave.com)
- My First Multi-GPU Kernel: Writing All-to-All for AMD MI300X (gau-nernst.github.io)
- Unveiling MLPerf Results on AMD Instinct MI300X Accelerators (community.amd.com)
- Show HN: Chisel – Profile AMD MI300X kernels locally (github.com)
- Benchmarking MI300X Memcpy (scalarlm.ghost.io)
- Supercharge DeepSeek-R1 Inference on AMD Instinct MI300X (rocm.blogs.amd.com)
- Unlock DeepSeek-R1 Inference Performance on AMD Instinct MI300X GPU (rocm.blogs.amd.com)
- Vultr Advances Global AI Cloud Inference with AMD Instinct MI300X (www.hpcwire.com)
- Azure – ND AMD MI300X v5-series (learn.microsoft.com)
- AMD Data Center GPUs Explained: MI250X, MI300X, MI350X and Beyond (www.bentoml.com)
- AMD MI300X for LLM Serving Disaggregating Prefill and Decode with SGLang (rocm.blogs.amd.com)
- AMD MI300X Memcpy Peer Deep Dive (www.scalarlm.com)