
Really? Is the AI Revolution Losing Steam?

Author(s): Kelvin Lu

Originally published on Towards AI.


Over the past few months, several significant developments have occurred: Nvidia unveiled Blackwell, Microsoft launched Phi-3, and Meta introduced Llama 3. We also saw the debut of Sora, GPT-4o, and Gemini. Amid these exciting announcements, there was a less positive piece of news: Stability AI’s CEO stepped down.

“Stability AI’s Emad Mostaque was out following an investor mutiny and staff exodus that left the one-time tech darling in turmoil,” analysts said.

People didn’t dwell much on this unfortunate news, given the numerous groundbreaking developments that kept optimism high about the longevity of the AI revolution. However, doubts have recently surfaced, with some beginning to speculate that the AI revolution might be drawing to a close sooner than anticipated.

At first glance, this seems counterintuitive. With excellent products being launched weekly and new models continuously expanding our understanding of AI, how could all this momentum suddenly vanish? Wall Street Journal business columnist Christopher Mims has a powerful analysis in “The AI Revolution Is Already Losing Steam.”

He found that:

  • AI development is slowing down as companies have nearly exhausted the internet data available for training models. Future advancements might be more incremental than transformative. “All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.”
  • AI may become more commoditized, as performance differences between models narrow, potentially challenging startups to differentiate themselves from larger tech companies. “The CEO of Stability AI, which built the popular image-generation AI tool Stable Diffusion, left abruptly in March. Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.”
  • The cost of running AI models is significant, with expenditures on chips often surpassing revenue. This poses questions about sustainable profitability. “An oft-cited figure in arguments that we’re in an AI bubble is a calculation by Silicon Valley venture-capital firm Sequoia that the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue. That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs.”
  • Despite the enthusiasm around AI, its adoption in the workplace has been slower than anticipated. Many workers interact with AI, but only a small group depends on and financially supports it. “there is a massive gulf between the number of workers who are just playing with AI, and the subset who rely on it and pay for it. … Changing mindsets and habits will be key hurdles in rapidly adopting AI, a trend that is consistent across the introduction of all new technologies.”

In conclusion, the author suggested that people may be over-optimistic about the development and adoption of AI.

I appreciate the new ideas presented. It has been just 500 days since ChatGPT was released, and any prediction about where AI will be in the coming years remains highly uncertain. Nonetheless, we should not ignore the issues the author highlights. I believe his arguments deserve more consideration.

From my point of view, the recent groundbreaking AI movement rests on the convergence of three factors:

  • The scaling law theory
  • Massive computation capability
  • Internet-scale data

These three factors combined have propelled the rapid development and deployment of AI technologies. The scaling law theory provided a foundational insight: as models grow larger, their performance improves smoothly and predictably (test loss falls as a power law of model size), which drove the race to build ever-larger models. Massive computation capability, enabled by advances in hardware, allowed these large models to be trained in practical time. Finally, the availability of vast amounts of Internet-scale data provided the material needed to train these models on a wide range of tasks, making them versatile and capable. Together, these elements have formed the backbone of the AI revolution, pushing the boundaries of what machines can learn and accomplish.

The problem, however, is that all three factors are losing steam.

The scaling law has proven inefficient. Training ever-larger models is brute force: too slow, too costly, and poorly suited to small datasets. It is also sensitive to quality issues and bias in the training data. And as models become larger, the improvements in performance diminish: each doubling of model size yields smaller incremental benefits, making further scaling less efficient and more resource-intensive.
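To put rough numbers on those diminishing returns, here is a minimal sketch based on the empirical scaling law of Kaplan et al. (2020), which fits test loss L as a power law in parameter count N; the exponent below is their reported fit, quoted here purely as an illustration:

    L(N) ≈ (N_c / N)^α,  with α ≈ 0.076 for language models

Because the exponent is so small, each doubling of N multiplies the loss by only about 2^(-0.076) ≈ 0.95, a roughly 5% relative improvement, while the training compute bill roughly doubles. The relative gain per doubling stays flat as the absolute cost explodes.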

Computation capacity is also approaching a ceiling, as semiconductor manufacturing nears its physical limits. Moore’s Law, which predicted that the number of transistors on a microchip doubles about every two years while the cost of computing halves, has been a guiding principle for the semiconductor industry. However, as transistors approach the size of a few atoms, the physical and economic challenges of continued scaling have become formidable. We are reaching a point where it is increasingly difficult and expensive to make transistors smaller, faster, and cheaper.

There is also a practical limitation: Llama 3, for instance, was trained on 24,000 of Nvidia’s flagship H100 chips. That is 24,000 x $30,000 (estimated) = $720 million in GPU hardware alone! How much further can we push this, given the power law? Can we drain a whole country’s wealth to train a new LLM?
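As a back-of-the-envelope check of that figure (both inputs are the estimates quoted above, not official pricing):

    # Rough GPU hardware cost for a Llama-3-scale training run.
    # Both inputs are this article's estimates, not official figures.
    num_gpus = 24_000        # H100 chips reportedly used to train Llama 3
    price_per_gpu = 30_000   # estimated price per H100, in USD
    total_cost = num_gpus * price_per_gpu
    print(f"GPU hardware alone: ${total_cost:,}")  # GPU hardware alone: $720,000,000

And that is hardware acquisition only, before electricity, networking, data-center construction, and the staff needed to run a training cluster of that size.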

There is no more fresh training data. Where can we find another Internet-scale dataset? To make things worse, we can expect a significant and growing share of Internet content to be AI-generated, which means next-generation models will increasingly be trained on the output of their predecessors. This feedback loop will make the bias problem a growing pain.

Last but not least, a sustainable business model remains unclear. Nikkei Asia found that, “Of the 17 new unicorns, 12 raised funds shortly after starting up. Many do not yet have a clear path toward profitability, and major tech companies are seen to largely be betting on their potential in hopes of finding the next big thing.” (A unicorn is an unlisted company valued at more than $1 billion.) [Nikkei Asia]

Enterprise AI adoption is still quite low. Since the generative AI boom began, the number of LinkedIn profiles listing AI skills has increased by 140%, while the number of companies investing in AI capabilities has increased by only 40%. Reliable commercial enterprise AI solutions remain rare.

If we picture the AI industry as a pyramid, it currently lacks a healthy middle layer and a solid client base.

Anshu Sharma, chief executive of data and AI-privacy startup Skyflow, and a former vice president at business-software giant Salesforce, thinks that the future for AI startups — like OpenAI and Anthropic — could be dim.

So, is the AI revolution losing steam? I think that may not be an accurate description. We have been lucky enough to witness the big bang of AI, but once it has happened, it belongs to history. We cannot expect AI development to keep growing at the speed of the last two years, and the near future may hold less excitement.

On the other hand, we may watch the development quietly merge into a different lane.

OpenAI is an example: the leading companies have been laser-focused on building cutting-edge models, doing their best not to look at the bill and paying little attention to the market. This is the first thing that must change in the next generation of AI companies. Next-generation AI models may not aim for the top comprehensive performance, but they will be leaner and cheaper, and easier to adapt to small datasets. In this direction, small language models like Phi-3 may deserve more attention.

On the application side, we will see extensive research into, and adoption of, domain-specific solutions that help businesses solve their problems. This is the most interesting development: in the near future, AI will be driven not only by a few top companies but also by a large number of AI service providers, consultancies, and developers. The next generation of AI will be driven by business requirements, making for a healthy and sustainable ecosystem.

In the future, people will be more interested in making AI cheap, fast, reliable, repeatable, auditable, and profitable. All of this can be summarised in two words: AI engineering.

In conclusion, I believe AI engineering is the key to leading us into the next stage of AI.
