Hacker News posts about H100
- Nvidia B200 vs. H100 performance compared with GPT-OSS (www.clarifai.com)
- Nvidia H100 Price Guide 2025: Detailed Costs, Comparisons and Expert Insights (docs.jarvislabs.ai)
- HB100 Doppler Radar Module Teardown (www.allaboutcircuits.com)
- $2 H100s: How the GPU Rental Bubble Burst (www.latent.space)
- So you want to rent an NVIDIA H100 cluster? 2024 Consumer Guide (www.photoroom.com)
- AMD's MI300X Outperforms Nvidia's H100 for LLM Inference (www.blog.tensorwave.com)
- Google TPU v5p beats Nvidia H100 (www.techradar.com)
- AMD MI300X delivers 30% higher performance than Nvidia H100, even with an optimized stack (www.tomshardware.com)
- Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance (www.tomshardware.com)
- AMD MI300 performance – Faster than H100, but how much? (www.semianalysis.com)
- Tesla turns on 10k-GPU Nvidia H100 cluster (www.techradar.com)
- IBM analog AI chip could give the Nvidia H100 a run for its money (www.techradar.com)
- AMD's Response to Nvidia regarding H100 vs. MI300X on real-world workloads (community.amd.com)
- NVIDIA introduces TensorRT-LLM for accelerating LLM inference on H100/A100 GPUs (developer.nvidia.com)
- U.S. Bans Sales of Nvidia's H100, A100 GPUs to Middle East (www.tomshardware.com)
- Nvidia L40S is an Nvidia H100 AI alternative (www.servethehome.com)
- Colossus AI Supercluster with over 100k Nvidia H100 GPUs (twitter.com)
- How to train a model on 10k H100 GPUs? (soumith.ch)
- Bay Bridge: the cheapest H100 training clusters (sfcompute.com)
- Dual RTX 5090 Beats $25,000 H100 in Real-World LLM Performance (www.hardware-corner.net)
- Nvidia H100 GPU Shipments by Customer (www.threads.net)
- AI Company Plans to Run Clusters of 10k Nvidia H100 GPUs in International Waters (www.extremetech.com)
- Everything about the new beast H100 (musingsonai.substack.com)
- xAI's Memphis Supercluster has gone live, with up to 100,000 Nvidia H100 GPUs (www.datacenterdynamics.com)
- Musk confirms 12K H100s ordered for Tesla were instead prioritized for xAI (www.theregister.com)
- Meta buys 600k H100s to train Llama 3 (twitter.com)
- Sohu AI chip claimed to run models 20x faster and cheaper than Nvidia H100 GPUs (www.tomshardware.com)