Hackernews posts about MLX
- Fastest LLM decode engine on Apple Silicon. 658 tok/s on M4 Max, beats MLX by 19% (www.runanywhere.ai)
- MLX: Basics (ml-explore.github.io)
- Jax Metal vs. MLX (ndalton12.github.io)
- MLX: CUDA (ml-explore.github.io)
- Show HN: AI Toys that don't need the internet (github.com)
- Show HN: Familiar – Open-source local AI agent for macOS (and iOS) (thoughts.jock.pl)
- Show HN: 100% local speech dictation app with wakeword detection (mohdali7.gumroad.com)
- Nvidia PersonaPlex 7B on Apple Silicon: Full-Duplex Speech-to-Speech in Swift (blog.ivan.digital)
- MacBook Pro with M5 Pro and M5 Max (www.apple.com)
- Get free Claude max 20x for open-source maintainers (claude.com)
- AirPods Max 2 (www.apple.com)