Artificial intelligence chipmaker’s financial results will be closely watched amid fears of a slowdown in the technology’s development
“AI giants vow not to help wipe out the human race”, ran a May headline in this newspaper.
The world’s top artificial intelligence developers would not, they pledged, build technology that posed an extreme risk to humanity, such as helping to create weapons of mass destruction.
The latest AI safety summit kicks off in San Francisco this week, and will look at how those vows will be put into practice. At the same time the first meeting of the International Network of AI Safety Institutes gets under way.
And this is the side of AI we often think of: a dangerous financial and technological arms race, with powerful companies straining the boundaries of the technology and of our ability to contain it.
We are a mere decade from artificial general intelligence that can think, learn and solve problems like a human, some experts hype.
Yet the big debate in the AI world at the moment is quite the reverse: whether the meteoric development of the technology is being slowed down, as unexpected delays and challenges hit large language model (LLM) training. Titans of the industry are saying so in public.
Time was, a new model release from one of the big AI houses would make us all gasp in admiration. The difference between using ChatGPT 3.5 (the first one) and 4o was staggering. But last week Reuters published a report quoting Ilya Sutskever, the co-founder of OpenAI, ruminating that results from training have plateaued.
He may not be at the $157 billion company any more, but his words carry weight.
The article also found researchers at large AI labs “running into delays and disappointing outcomes” in the race to release a model that outperforms OpenAI’s GPT-4, almost two years old.
A Bloomberg report said Google and Anthropic were seeing similar reductions in gains. The founders of Andreessen Horowitz, the Silicon Valley investment powerhouse, agree they’ve noticed a drop-off in the improvement in capabilities of AI models. On a podcast, Marc Andreessen said “they’re sort of hitting the same ceiling on capabilities”, while Ben Horowitz asserted there was an infrastructure issue at stake: “Once they get chips we’re not going to have enough power, and once we have the power we’re not going to have enough cooling.”
This all spells a big change.
The narrative to date has been that the capabilities of LLMs would keep improving if you increased the size of the model, along with the amount of data and computer power it was fed. Now it seems these AI “scaling laws” may not be, er, laws at all. Data and tech limits are holding back growth.
On Monday, the Information reported that Nvidia, whose hardware is powering the AI revolution, has customers worried that its new Blackwell GPUs overheat when connected together in racks. The chips are clustered like this to speed up AI training.
A spokeswoman for Nvidia said “the engineering iterations are normal and expected”.
These may be teething problems, but any sign of AI slowing is closely watched. Billions of dollars are at stake, poured into the start-ups behind it and into the companies that look to profit from it.
Nvidia’s financial results, for example, have become so important that some people are gathering in bars to watch the announcement late on Wednesday. The stock has risen 190 per cent so far this year. “Merry Nvidia $NVDA earnings week to all that celebrate”, as one X user wrote.
AI labs are said to be working on new methods of training models to get round these issues. In his typical style the OpenAI boss Sam Altman posted on X, seemingly to dismiss the idea of any kind of AI slowdown: “There is no wall.”
Eric Schmidt, the former Google boss, also dismissed it, saying there will be “two or three more turns of the crank of these large models” over the next five years. There may well be ways around any wall, but the prospect that an “AI winter” is descending is certainly turning the crank of the rumour mill and making investors nervous.