Hacker News posts about DeepSeek-R1 671B
- Running the Deepseek-R1 671B Model at FP16 Fidelity on AMD EPYC CPUs (www.servethehome.com)
- How to Run DeepSeek R1 671B Locally on a $2000 EPYC Server (digitalspaceport.com)
- DeepSeek-R1-671B-Q4_K_M with 1 or 2 Arc A770 on Xeon (github.com)
- A step-by-step guide on deploying DeepSeek-R1 671B locally (snowkylin.github.io)
- DeepSeek R1 671B over 2 tok/s without GPU on local gaming rig (old.reddit.com)
- Mac Studio M3 Ultra can run Deepseek R1 671B in memory using <200W (www.techradar.com)
- Lambda Chat – Hosted DeepSeek R1 671B (lambda.chat)
- DeepSeek R1 671B Running and Testing on a $2000 Local AI Server [video] (www.youtube.com)
- DeepSeek R1 671B running locally [video] (www.youtube.com)
- Quanxing Technology helps 671B DeepSeek R1 training costs drop by another 95% (www.quanxingtech.cn)
- Run the Full DeepSeek R1 Locally – With Only 32GB RAM (www.gulla.net)
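Several of these posts hinge on how much memory the 671B weights occupy at different precisions (FP16 on EPYC, Q4_K_M GGUF, in-memory on an M3 Ultra, 32GB RAM). The sketch below is a rough back-of-envelope estimate only: it counts weight storage, ignores KV cache, activations, and runtime overhead, and the bytes-per-parameter figures for quantized formats are approximations, not exact values from any of the linked setups.

```python
# Rough weight-memory estimate for a 671B-parameter model at the
# precisions mentioned in the posts above. Ignores KV cache,
# activations, and runtime overhead; quantized bytes/param are
# approximate (Q4_K_M is roughly 4.5 bits per parameter).
PARAMS = 671e9  # DeepSeek-R1 parameter count

precisions = {
    "FP16": 2.0,      # 2 bytes per parameter
    "FP8": 1.0,       # 1 byte per parameter
    "Q4_K_M": 0.5625, # ~4.5 bits per parameter (approximate)
}

for name, bytes_per_param in precisions.items():
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{name:>7}: ~{gib:,.0f} GiB of weights")
```

Even the 4-bit quantization lands in the hundreds of gigabytes, which is why most of these setups lean on large system RAM or unified memory rather than GPU VRAM; the 32GB-RAM result presumably relies on streaming weights from disk rather than holding them all resident.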