Hacker News posts about H100s
- Nvidia B200 vs. H100 performance compared with GPT-OSS (www.clarifai.com)
- Nvidia H100 Price Guide 2025: Detailed Costs, Comparisons and Expert Insights (docs.jarvislabs.ai)
- Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection (www.realitydefender.com)
- Show HN: BaaS to build agents as data, not code (github.com)
- $2 H100s: How the GPU Rental Bubble Burst (www.latent.space)
- Musk confirms 12K H100s ordered for Tesla were instead prioritized for xAI (www.theregister.com)
- Meta buys 600k H100s to train Llama 3 (twitter.com)
- Show HN: TensorDock – GPU Cloud Marketplace, H100s from $2.49/hr (dashboard.tensordock.com)
- Cerebras launches processor for AI theoretically equivalent to 62 Nvidia H100s (www.tomshardware.com)
- Show HN: Prime Intellect GPU Cloud. H100s starting at $1.65/HR on demand (app.primeintellect.ai)
- Show HN: GPU price-per-hour tracker for A100/H100s (computeindex.michaelgiba.com)
- Free H100s for All .EDU User Emails (app.hyperbolic.xyz)
- Who's Hoarding Nvidia H100s? (sherwood.news)
- Meta will have more than 350,000 Nvidia H100s this year (www.instagram.com)
- Flux Fast: Making Flux Go Brrr on H100s (pytorch.org)
- Show HN: gpudeploy.com – "Airbnb" for GPUs (www.gpudeploy.com)
- Show HN: H100cloud.com – Dutch Auction Cloud Exchange (h100cloud.com)
- Show HN: Voltage Park – H100 GPU Orderbook (auction.voltagepark.com)
- Show HN: I built a website where you can easily fine-tune Llama 3.1 models (www.tunellama.com)
- Show HN: Airbnb for GPUs (www.gpudeploy.com)
- Nvidia to Reportedly Triple Output of Compute GPUs in 2024 (www.tomshardware.com)
- So you want to rent an NVIDIA H100 cluster? 2024 Consumer Guide (www.photoroom.com)
- AMD's MI300X Outperforms Nvidia's H100 for LLM Inference (www.blog.tensorwave.com)
- Google TPU v5p beats Nvidia H100 (www.techradar.com)