Hacker News posts about GPT-3
GPT-3 is a large language model developed by OpenAI that generates human-like text from the input it receives.
- Three Years from GPT-3 to Gemini 3 (www.oneusefulthing.org)
- The upcoming GPT-3 moment for RL (www.mechanize.work)
- Gemma 2B: Scores better than GPT 3.5 Turbo (twitter.com)
- Gemma-2 2B beats GPT3.5 on Chatbot Arena (huggingface.co)
- Signs of consciousness in AI: Can GPT-3 tell how smart it is? (www.nature.com)
- Pleading with OpenAI Developers to not retire GPT-3.5-turbo-0613 on June 13th (community.openai.com)
- Why Google failed to make GPT-3 – with David Luan of Adept [video] (www.youtube.com)
- Giving GPT-3 a Turing Test (2020) (lacker.io)
- GPT-3.5 and the Latest Models (omarabid.com)
- GPT-4o-mini vs. GPT-3.5-turbo for RAG: Wordier, but better? (blog.pamelafox.org)
- Scaling Laws for LLMs: From GPT-3 to o3 (cameronrwolfe.substack.com)
- Fuzzy API composition: querying NBA stats with GPT-3 and Statmuse and LangChain (www.geoffreylitt.com)
- Agents keep thanking each other when using GPT-3.5-turbo (microsoft.github.io)
- GPT-3 can run code (ish) (mayt.substack.com)
- GPT-4 missing and GPT-3.5 responses instead (community.openai.com)