OpenAI’s Sam Altman predicts AGI by 2025
OpenAI CEO Sam Altman says he remains confident the company can achieve artificial general intelligence (AGI) in 2025, even as advancements in the foundational elements of GPT and large language models (LLMs) have slowed in recent months.
Altman’s optimism comes amid mixed reports about the slow pace of LLM development and scaling challenges across the AI industry.
Artificial general intelligence, commonly known as AGI, refers to a form of artificial intelligence that matches or exceeds human intelligence in nearly all respects. It would have the capacity to learn, reason, adapt, and perform any intellectual task a human can.
In an interview with Y Combinator President and CEO Garry Tan, Altman described the path to AGI as “basically clear,” noting that it now primarily requires engineering work rather than scientific breakthroughs. Altman and his team at OpenAI have led significant advances in machine learning and generative AI, with their latest LLMs capable of advanced reasoning. And, according to Altman, this progress is just beginning.
In his latest essay, Altman predicted that artificial superintelligence (ASI) could be achieved in just a few thousand days (a thousand days is roughly 2.7 years, so that timeline would place ASI sometime in the 2030s). Discussing how OpenAI reached this point, Altman joined Garry Tan for an episode of “How To Build The Future,” where they explored OpenAI’s journey, its upcoming goals, and Altman’s advice for founders navigating this transformative platform shift.
Meanwhile, a recent report suggests that OpenAI’s rumored “Orion” model shows less improvement over GPT-4 than previous model updates did, especially on coding tasks. OpenAI is said to have formed a new “Foundations Team” to address key obstacles, including a shortage of high-quality training data.
Researchers Noam Brown and Clive Chan support Altman’s outlook on AGI, suggesting that OpenAI’s new o1 reasoning model offers enhanced scaling potential.
According to an OpenAI employee who spoke with The Information, “Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks. Orion performs better at language tasks but may not outperform previous models in coding. This could pose a challenge since Orion may cost OpenAI more to operate in its data centers than recent models.”
Why this matters: Altman’s projection signals a significant step forward in OpenAI’s AGI roadmap; the company currently places itself at level 2 (“Reasoners”) on its five-level AGI scale. Altman’s unwavering confidence stands out, and OpenAI’s recent emphasis on the o1 model suggests its reasoning approach may offer a fresh path to scaling.
Looking ahead, OpenAI’s latest model reportedly saw a smaller quality gain than prior flagship releases, consistent with an industry-wide shift toward refining models after their initial training. To address the challenge of limited training data, OpenAI’s new Foundations Team is dedicated to finding ways to sustain model improvement. Check out the video of the interview below.