Hacker News posts about Gonzo
- The Codebase Is Decadent and Depraved (gonzo.engineer)
- NeurIPS 2025 Best Papers in Comics: From Artificial Hivemind to 1000-Layer RL (gonzoml.substack.com)
- Visualizing Research: How I Use Gemini 3.0 to Turn Papers into Comics (gonzoml.substack.com)
- I'm Done with FlutterFlow [video] (www.youtube.com)
- Blame the governor! Oklahoma’s “board meeting porn” scandal goes gonzo (arstechnica.com)
- Tversky Neural Networks (gonzoml.substack.com)
- Diffusion models are evolutionary algorithms (gonzoml.substack.com)
- StreetComplete OSM Contribution App Begins iOS Port (gonzoknows.com)
- Thermodynamic AI is getting hotter (gonzoml.substack.com)
- Big Post About Big Context (gonzoml.substack.com)
- Muon Optimizer Accelerates Grokking (gonzoml.substack.com)
- BLT: Byte Latent Transformer (gonzoml.substack.com)
- Are Deeper LLMs Smarter, or Just Longer? (gonzoml.substack.com)
- Chain of Continuous Thought (Coconut) (gonzoml.substack.com)
- TextGrad: Automatic "Differentiation" via Text (gonzoml.substack.com)
- Tiny Recursive Model (TRM) vs. Hierarchical Reasoning Model (HRM) (gonzoml.substack.com)
- Stochastic Activations (gonzoml.substack.com)
- V-JEPA 2: Scaling V-JEPA (gonzoml.substack.com)
- ThoughtTerminator (gonzoml.substack.com)
- A Single 'Super Weight' Can Break Your Billion-Parameter Model (gonzoml.substack.com)
- Make Softmax Great Again (gonzoml.substack.com)
- Decoder-decoder architecture is coming (gonzoml.substack.com)
- Chronos: Using Pretrained LLMs for Probabilistic Time Series Forecasting (gonzoml.substack.com)
- Project CETI (gonzoml.substack.com)
- Paper FOMO and ICML 2025 Outstanding Papers (gonzoml.substack.com)
- Darwin Gödel Machine (gonzoml.substack.com)
- Intuitive Physics Emergence in V-JEPA (gonzoml.substack.com)
- Jax Things to Watch for in 2025 (gonzoml.substack.com)
- Deep Learning Frameworks: The Fourth Pillar of Deep Learning Revolution (gonzoml.substack.com)
- Superconducting Supercomputers (gonzoml.substack.com)
- Neural Network Diffusion (gonzoml.substack.com)
- Training LLMs with AMD GPUs on Frontier Supercomputer (gonzoml.substack.com)
- Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws (gonzoml.substack.com)