The Pace of Generative AI Breakthroughs Is Slowing, Raising Concerns
Generative artificial intelligence has made tremendous progress over the past two years, but in recent weeks the pace of advancement appears to have slowed. Silicon Valley is increasingly concerned that the rate of improvement has plateaued. One early indication is how little separates the newest models from their predecessors at the biggest players in the space: OpenAI’s next model, GPT-5, is expected to deliver a significantly smaller jump in quality than earlier releases, while Anthropic has delayed the release of its most powerful model, Opus.
Even Google’s upcoming version of Gemini is not meeting internal expectations. The lack of progress raises questions about the core assumption behind scaling laws: that adding more computing power and training data will keep producing better models indefinitely. At the same time, experts believe AI companies are running out of fresh data for training and are turning to synthetic data, which many regard as a band-aid solution.
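To make that assumption concrete, scaling laws are typically expressed as empirical power laws. One widely cited form, from DeepMind's Chinchilla analysis (Hoffmann et al., 2022), relates a model's loss to its parameter count and training data:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here N is the number of model parameters, D is the number of training tokens, E is an irreducible loss floor, and A, B, α, and β are constants fitted to experimental runs. Because each term shrinks as a power law, every doubling of parameters or data buys a smaller reduction in loss, and running short of high-quality data removes one of the two levers entirely, which is why the assumption of indefinite improvement is now under strain.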
The industry is divided on the issue, with some leaders pushing back on the idea that the rate of improvement is hitting a wall. Nvidia CEO Jensen Huang believes that foundation model pre-training scaling is intact and continuing, while OpenAI CEO Sam Altman claims there is no wall. Others, such as Scale AI founder Alexandr Wang, argue that the industry is facing a “data wall” and that synthetic data is not a viable solution.
If the rate of improvement is indeed slowing, the next phase of the race will be the search for use cases: consumer applications that can be built on top of existing technology without further model improvements. The development and deployment of AI agents are widely expected to be a game-changer, with Meta CEO Mark Zuckerberg predicting that there will eventually be hundreds of millions, if not billions, of AI agents.