State of AI 2025
Last Updated on December 29, 2025 by Editorial Team
Author(s): Igor Novikov
Originally published on Towards AI.

The year is almost over, and it’s time to review the State of AI for this year and look at forecasts for the next. This overview is based on a wide range of global sources, including MIT, PwC, OpenAI, OpenRouter, and others.
“High Adoption, Low Transformation” Paradox
There is a huge disconnect in the enterprise AI landscape: organizations are widely utilizing AI tools, yet very few are getting measurable financial returns. In fact, 95% are getting zilch despite the huge amounts of money buried in the ground. Oh yes, you can finally breathe out — it is not only you who is a failure; that’s pretty much everybody. Despite a quadrillion tech conferences where companies brag about AI miracles, anonymous surveys show it is still pretty much the same situation as with teenage sex — lots of them talking about it but very few doing it.
The MIT report calls this the GenAI Divide.
The data strongly shows both sides of the paradox:

Now, why would that be the case?
Well, as with any new technology, this is a rather expected turn of events. The same happened with the internet, and with electricity before that. In fact, AI is being adopted at record speed: after only 2 years we see massive investment and deployments, whereas the internet took around 10 years and electricity around 30.
So this is partially expected: the parts of the system evolve at different speeds, and this is where the system breaks.
Let’s look at concrete reasons:
1. Organizational and Cultural Inertia
The main barrier to successful scaling is often cultural and organizational, not technical.
Failure to Redesign Workflows: Most organizations approach AI as a tool for augmentation (doing the same task slightly faster) rather than a catalyst for transformation (redesigning the entire business process).
The Pilot-to-Production Chasm: Despite pilots running in 80% of organizations, converting these experiments into scalable, workflow-integrated systems remains rare, often failing due to brittle workflows or a lack of contextual learning. According to reports, only about 5% of custom enterprise AI tools actually reach production. I reckon this number is highly skewed by the higher success rate in tech; in other industries it is likely even lower.
The Shadow AI: A significant portion of adoption happens outside official channels; over 90% of surveyed workers use personal AI tools (like ChatGPT or Claude) for work tasks, even though only 40% of companies officially purchased a subscription. We all know people do that, right?
It’s like trying to forbid a school kid from using AI to do his homework.
This “shadow AI” proves the tools’ utility, but it also exposes a critical organizational failure to securely and reliably embed them at scale.
Psychological Safety: Employee fear and reluctance measurably impede adoption: 83% of business leaders reported that psychological safety directly impacts the success of AI initiatives. Furthermore, 22% of leaders admitted they hesitated to lead AI projects due to fear of failure or potential criticism. With those sorts of adoption success rates, I don’t blame them at all — it is career suicide.
2. Technical and Strategic Misalignment
This very smart-sounding title hides a very simple idea. It is easy to blame everything on the companies and people, but let’s be honest — the technology itself is to blame too. Nvidia, OpenAI, and other pundits have been touting and overpromising things quite aggressively for 3 years now; no wonder people believed that the new era of AGI is upon us right about NOW. Which is a total lie.
It will come, but definitely not tomorrow, and not next year. So people (and by people I mean C-level people, although some say they are not people at all but bonus-money-operated reptiloids) had exorbitantly high expectations of the technology.
The reality is that the technology is not there yet for complex, company-wide use cases. The tools are fundamentally limited in the context of complex, core business processes. Generative AI tools frequently fail in critical workflows because they are static — they do not retain feedback, adapt to context, or improve over time.
Users report being highly skeptical of custom or vendor-pitched tools, demanding persistent memory and contextual awareness for high-stakes work. People, on the other hand, do learn, albeit not that fast, but at least they kinda do it themselves in the process of work; in some sense this is true unsupervised learning.
In more complex (e.g. agentic) workflows that retain context for longer, an error made at any step compounds with workflow length, making them not very reliable either. In human workflows, where each step is performed by an individual employee, there is a high chance that the error will be caught at the next step and the system will retroactively self-correct, although that may take a lot of time.
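The compounding effect is easy to quantify: if each step succeeds independently with probability p, an n-step workflow succeeds with probability p^n. A minimal sketch of this back-of-the-envelope model (the 95% per-step figure is an illustrative assumption, not a number from the sources):

```python
# End-to-end reliability of a multi-step workflow, assuming each step
# succeeds independently with probability p_step (illustrative model).
def workflow_reliability(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

# With 95% reliability per step, a 10-step workflow succeeds only ~60%
# of the time, and a 20-step workflow only ~36% of the time.
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps -> {workflow_reliability(0.95, n):.1%}")
```

This is exactly why the human pattern of catching an error at the next step matters: a review step that restores reliability breaks the exponential decay.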
3. Investment Bias
Organizations exhibit an investment bias toward front-office functions like sales and marketing, allocating substantial budgets there because the results are highly visible and easier to measure (e.g., personalized content and smart lead scoring), and therefore much easier for managers to brag about and get a promotion for. Back-office automation in areas like finance and manufacturing typically receives less attention and resources.
This is largely because the immediate financial impact of these internal functions is hard to quantify in executive conversations or investor updates, although this is where the financial impact could be much more substantial.
4. The Agentic Governance Lag
The current rapid deployment of Agentic AI — autonomous systems that can act independently — exacerbates the transformation problem. This shift introduces entirely new operational and compliance risks that traditional governance models cannot handle. This governance lag is cited as a leading adoption challenge by 40% of AI decision-makers, directly slowing scaled deployment.
5. Difficult to measure impact
It is very difficult to determine the exact impact in many areas, especially given the short time that has passed. For structural changes or small process improvements, it will take several years of collecting data to understand the difference they make, assuming those numbers are collected at all, which often is not the case.
The ones who succeed
Roughly 5% of companies succeed. What are they doing differently?
1. Strategic direction and top-down discipline
Successful organizations treat AI not as a peripheral tool but as a strategic priority managed from the highest levels of leadership. Senior leadership defines a top-down program, selecting a few focused areas for investment rather than spreading efforts thin across sporadic, crowdsourced initiatives, and assigns top talent to those key focus areas: top talent with the resources and mandate to introduce significant changes.
2. Operational transformation and workflow redesign
The highest-performing organizations overhaul their internal processes to maximize the inherent speed and power of AI, rather than simply applying AI to old tasks.
In our experience this is the main hurdle as well — it is very difficult to automate an already bad process. For example, if data is stored in systems where search and extraction take minutes per query, that is going to make any AI (or non-AI) assistant unusable. Any AI transformation will require rethinking the workflow entirely, sometimes turning a multi-step process into a single step rather than just automating small portions.
The aim is deep system integration: converting internal knowledge into machine-readable formats, building APIs for key data pipelines, and turning on connectors to give AI secure access to company data inside core tools for context-aware responses. That is much more expensive than just building a RAG system on top of the existing data infrastructure.
3. Cultural readiness and psychological safety
Success depends heavily on building an organizational culture that mitigates resistance and fear related to new technology and resulting job changes.
They explicitly communicate how AI will and won’t impact jobs to foster trust and mitigate employee resistance, and they treat AI as a means to increase productivity and reallocate tasks rather than immediately reducing headcount.
Along with that, they track non-financial indicators like employee sentiment, usage rates, and perceived productivity in the initial stages and adjust accordingly.
4. Focused Implementation and External Partnerships
Instead of relying on slow, internal IT projects, successful firms leverage external specialization and empower operational staff to lead the change.
Strategic partnerships reached deployment roughly twice as often (66%) as internal development efforts (33%).
Successful vendors deliver systems that learn from feedback, retain context, and adapt to specific workflows, addressing the “learning gap” that stalls generic tools.
5. Measuring Financial Returns
Successful adopters focus on converting efficiencies into tangible financial gains, especially by displacing existing external costs. They establish concrete outcomes and “hard” performance metrics (like P&L impact, market differentiation, or customer satisfaction scores) instead of accepting abstract reports or vague process improvements.
The financial effect is seen not only as optimization of costs but as growth in revenue due to product innovation, personalization, and improved margins.
One key source of quantifiable value in the back office comes from replacing external costs, such as eliminating BPO contracts and cutting agency fees for content creation, rather than reducing internal employee headcount.
Industries with highest adoption
Data on generated tokens from ChatGPT Enterprise customers, OpenRouter stats, and Anthropic shows that the Tech, Sales & Marketing, Media & Entertainment, and e-commerce sectors have the highest adoption.
Other notable sectors are professional services, finance, healthcare & wellbeing, and telecommunications (customer support).
OpenRouter statistics show that token usage is dominated by two primary categories:
• Programming: Queries related to coding assistance have become the most consistently expanding category, growing from roughly 11% of total token volume in early 2025 to over 50% in recent weeks across all models on the platform. Programming workloads are the dominant driver of prompt token growth, with prompts frequently exceeding 20,000 input tokens.
• Roleplay: This category, which includes creative dialogue (e.g. sexting), storytelling, character roleplay, and gaming scenarios, accounts for immense usage, nearly rivaling programming volume. Among open-source models specifically, Roleplay accounts for more than half (approximately 52%) of all usage. In this segment, users treat LLMs as structured roleplaying or character engines, often for interactive fiction and scenario generation. For DeepSeek, it constitutes 80% of usage on OpenRouter.


This is not at all surprising — the sectors where hallucinations are not critical or manageable (programming) or where it is not a problem but a feature (roleplaying and marketing) — have the highest adoption.
It is also notable that Anthropic dominates the coding category, according to both OpenRouter and Anthropic data:

Workload Segmentation by Cost and Value
The OpenRouter analysis maps LLM use cases based on aggregate usage volume versus unit cost, revealing distinct market segments.

The rapid increase in the use of reasoning models, longer sequence lengths, and tool-calling behavior suggests LLM usage is moving away from single-turn requests toward agentic inference. This shift is reflected in the fact that the share of total tokens routed through reasoning-optimized models now exceeds 50 percent. This change is fundamentally reshaping the market, as models are increasingly used for complex, multi-step operations like calling external APIs (tool calling) and structured problem-solving (reasoning), often concentrated in coding workflows.
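Mechanically, agentic inference means the model runs in a loop: it either answers or requests a tool call, and tool results are appended back into the context. A minimal sketch of that loop — everything here (the fake model, the toy calculator tool) is a hypothetical stand-in, not any vendor’s actual API:

```python
# Minimal sketch of an agentic inference loop. All names are hypothetical
# placeholders; real agent frameworks and model APIs differ.

def calculator(expression: str) -> str:
    """A toy 'external API' the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(messages):
    """Stand-in for an LLM: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": "6 * 7"}
    return {"answer": messages[-1]["content"]}

def agent_loop(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = fake_model(messages)
        if "answer" in step:                        # model is done
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])  # execute the tool call
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(agent_loop("What is 6 * 7?"))
```

Each loop iteration consumes the full accumulated context, which is why agentic workloads drive both token volume and sequence length.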
Geographic segmentation
Enterprise adoption
In a reversal of traditional tech-adoption cycles, emerging economies are outpacing the West in business deployment. India (59%), the UAE (58%), and Russia (71% of large companies) lead in implementation rates, significantly higher than the United States (33%) and the United Kingdom (37%).
This surge is largely driven by aggressive national mandates, such as the UAE’s goal to become the first fully AI-native government by 2027 and India’s massive investments in AI public infrastructure, but also by the fact that in absolute numbers there are significantly more large companies in the US, many of which are not public and don’t report on AI usage.
Geography also significantly influences how AI is used:
• According to Anthropic data, in high-adoption countries (e.g., Singapore, Canada) usage is highly diverse, spanning education, science, and complex business operations. Singapore has one of the highest per-capita usage rates, at 4.6x its expected population share, and Canada is at 2.9x. In contrast, emerging economies, including Indonesia at 0.36x, India at 0.27x, and Nigeria at 0.2x, use Claude less.
• Emerging Markets (e.g., India, Vietnam): Usage is heavily concentrated in coding and software development. In India, coding accounts for over 50% of all AI usage, compared to roughly one-third globally.
• United States: Usage is deeply integrated into household management, job searches, and medical guidance. Within the U.S., Washington, D.C. and Utah lead in per-capita usage, outpacing California.
The Infrastructure
A massive disparity exists in the physical “backbone” of AI:
• High-income countries host 77% of the world’s data center capacity as of mid-2025.
• Low-income countries hold less than 0.1% of this capacity.
Language usage
According to the 100 trillion token study conducted by OpenRouter, the distribution of languages used in AI interactions is heavily concentrated in a few key areas, with English being the overwhelmingly dominant language.
The specific breakdown of token volume by language is as follows:
• English: 82.87%
• Chinese (Simplified): 4.95%
• Russian: 2.47%
• Spanish: 1.43%
• Thai: 1.03%
• Other (Combined): 7.25%
The sources suggest that English’s extreme dominance is not merely a preference but a reflection of two structural factors: the prevalence of English-centric models and the developer-heavy skew of the OpenRouter user base. Since programming is the fastest-growing and most context-heavy category on the platform (representing over 50% of recent volume), the use of English as the primary language for code and documentation naturally inflates its share.
Main trends of 2025
1. The Transition from Assistive Tools to Agentic Autonomy
The most significant technical shift across the sources is the rise of agentic AI, moving from single-pass text generation to autonomous systems capable of reasoning, planning, and executing multi-step tasks with minimal human intervention. This evolution is marked by a 320x year-over-year increase in reasoning token consumption as enterprises move away from casual querying toward integrated, repeatable processes. These “agents” are being deployed in high-stakes environments, such as autonomous logistics negotiation and financial decision-making, which introduces a new era of operational risk and a demand for modernized governance.
2. The Vibe-Coding Surge
AI-native development tools have sparked the era of “vibe coding,” where individuals build entire applications from natural language prompts with minimal technical oversight. This trend has already produced breakout successes, such as the startup Lovable becoming a unicorn just eight months after launch with roughly 95% of its code written by AI. However, this rapid creation introduces new vulnerabilities; for instance, malicious actors have already begun hijacking AI IDE extensions to steal credentials and mine cryptocurrency on developer machines.
3. Geographic division and the Sovereign AI
Global AI adoption is characterized by a shift in leadership; while the U.S. remains the investment leader, emerging economies like India and the UAE now lead the world in operational deployment rates. This has triggered a global push for “Sovereign AI,” where nations and firms prioritize keeping sensitive data, models, and compute resources within their own national borders to comply with regional regulations and ensure technological independence. This trend is supported by massive state-directed investments, such as India’s ₹10,300+ crore AI Mission, the UAE’s goal to become a fully AI-native government by 2027, and China’s push to become hardware-independent and produce its own GPUs.
4. The “Shadow AI” usage
While official enterprise-grade systems are often stalled — with only 5% of custom enterprise AI tools reaching production — employees are crossing the value divide individually. Our sources identify a thriving “Shadow AI” economy where 90% of employees report using personal AI tools for work, even though only 40% of their companies have purchased official subscriptions.
5. The Surge of Chinese Models
The open-source ecosystem has shifted from a near-monopoly dominated by a few Western or early Chinese models to a highly competitive environment.
Chinese-developed models (such as DeepSeek, Qwen, and Kimi) have grown from a negligible base to accounting for approximately 13% to 30% of total weekly token volume in some periods.
On Hugging Face, Alibaba’s Qwen has overtaken Meta’s Llama as the primary choice for developers, accounting for over 40% of new monthly model derivatives, while Llama’s share dropped from roughly 50% to 15%.
To align with “America-first AI” interests and counter international competition, OpenAI released its first open models since GPT-2, launching the gpt-oss family (120b and 20b variants) in August 2025.
6. Video Generation: From Clips to World Models
Video generation has evolved from “clip models” that produce fixed sequences to “world models” capable of predicting future frames based on state and user actions. The industry is moving toward Diffusion Transformers (DiT), which replace traditional convolutional U-Nets to better model dependencies across frames and pixels.
This new generation of models (Sora 2, Veo 3) introduces synchronized dialogue and sound, stronger physics, and the ability to insert “cameos” of real people with their actual voice and appearance. Chinese labs have matured rapidly, with models like Kling 2.1 and Vidu 2.0 focusing on speed and realism, while Tencent’s HunyuanVideo has seeded an open-weights ecosystem that reportedly outperforms some Western proprietary models.
Google DeepMind’s Genie 3 can generate explorable 3D environments from text prompts at 24 fps, supporting promptable events like weather changes with persistent objects.
Real revenue has finally arrived, with Synthesia crossing $100M in Annual Recurring Revenue (ARR) and generating over 30 million avatar video minutes for 70% of the Fortune 100. AMC Networks has formally embraced Runway AI for television production.
The sources project that by 2026, real-time generative video games will become some of the most-watched titles on platforms like Twitch. Additionally, AI-produced short films are expected to win major audience praise while simultaneously sparking significant industry backlash.
Predictions
1. A Significant Market Correction
Experts predict an AI market correction in 2026, driven by a widening gap between inflated vendor promises and the actual value delivered to enterprises. Investors are expected to shift from “growth at all costs” to a strict demand for tangible returns, leading to a rotation of capital out of speculative tech stocks.
2. Massive Expansion in Global AI Spending
Despite corrections, worldwide spending on AI is forecast to reach nearly $1.5 trillion in 2025 and top $2 trillion in 2026. This growth will be led by the integration of AI into consumer products like smartphones and PCs, alongside continued massive investment in data center infrastructure.
3. A $15.7 Trillion Economic Contribution
By 2030, artificial intelligence is projected to contribute a staggering $15.7 trillion to the global economy. In specific regions like India, AI is expected to add $1.7 trillion to the national economy by 2035.
4. End-to-End Autonomous Scientific Discovery
One of the boldest research predictions is that open-ended AI agents will make a meaningful scientific discovery end-to-end, handling everything from the initial hypothesis and experimentation to the final iteration and published paper.
5. As mentioned above, by 2026 a real-time generative video game — one where the environment and narrative are created on the fly by AI — is predicted to become the most-watched title on the streaming platform Twitch.
6. Global Debates Over Agentic Attacks
The sources predict that a deepfake or AI-agent-driven cyber attack will eventually trigger the first-ever UN emergency debate specifically focused on AI security. This follows observations that cyber-offensive AI capabilities are doubling in strength every five months.
7. Offensive AI capabilities for cyberattacks are currently doubling in strength every five months, significantly outpacing defensive measures. This rapid acceleration has led to the rise of “vibe hacking,” where criminals use AI to orchestrate multi-stage fraud operations, including the infiltration of Fortune 500 companies by North Korean operatives using AI to pass technical interviews.
8. Agentic Commerce and a $5 Billion Ad Market
The sources forecast that AI agents will soon handle consumer transactions independently, with agentic checkout accounting for more than 5% of all online sales. Parallel to this, spend on advertising directly to AI agents is predicted to hit $5 billion.
9. Voice-First Empowerment for 490 Million Workers
By 2035, India envisions a future where voice-first AI interfaces remove all literacy and language barriers for the nation’s 490 million informal workers.
10. Medium Models Becoming the Industry Standard
The open-source market is bifurcating, with small models (under 15B parameters) losing favor and medium models (15B to 70B parameters) becoming the sweet spot for model-market fit. These models are increasingly preferred because they provide a balance of high capability and operational efficiency.
11. Power and Electricity as the Primary Scaling Bottlenecks
By 2028, the primary constraint on AI scaling will shift from chip availability to electrical grid capacity. Leading supercomputers are projected to require 9 GW of power (equivalent to nine nuclear reactors) by 2030, making power availability — rather than capital — the main constraint on development.
12. Selective Rather Than Mass Job Displacement
While AI is expected to replace 85 million jobs by 2025, it is simultaneously projected to create 97 million new roles. Displacement will likely remain concentrated in non-core business activities such as customer support, administrative processing, and standardized development, rather than resulting in broad-based layoffs across the entire economy.
13. Sovereign AI as a National Mandate
Sovereign AI will become a top strategic priority for both governments and enterprises to ensure data, models, and compute remain under local jurisdictional control. This is driven by the need to comply with regional data localization laws and reduce dependence on a few foreign hyperscale providers.
Sources:
https://openrouter.ai/state-of-ai#model-and-token-variants
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
https://hai.stanford.edu/ai-index/2025-ai-index-report
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
Have fun!