Hacker News posts about FP16
- Show HN: OpenGraviton – Run 500B+ parameter models on a consumer Mac Mini (opengraviton.github.io)
- Show HN: I made Qwen3.5-4B 13% smarter by compressing it to 4-bit (huggingface.co)
- AMD Hints at Big FP64 Increases in MI430X GPU as Ozaki Underwhelms (www.hpcwire.com)
- Felix86 26.03: AVX, AVX2, BMI1 and F16C Support (felix86.com)
- New Hetzner prices: +30% starting on 1 April (p169.p3.n0.cdn.zight.com)
- ONNX Runtime and CoreML May Silently Convert Your Model to FP16 (ym2132.github.io)
- Running the Deepseek-R1 671B Model at FP16 Fidelity on AMD EPYC CPUs (www.servethehome.com)
- 90T/s on my iPhone llama3.2-1B-fp16 (www.reddit.com)
- PyTorch 2.6 Delivers FP16 Support for x86 CPUs, Better Intel GPU Experience (www.phoronix.com)
- Show HN: Speeding up LLM inference 2x times (possibly) (asciinema.org)
- Show HN: Ghost Engine – generate weights on the fly (github.com)
- Intel's AMX-BF16: Over 4x the Performance at 69% the Power (www.phoronix.com)
- 15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern (nicolasdickenmann.com)
- Neuroplasticity in F16 fighter jet pilots (pmc.ncbi.nlm.nih.gov)