From Knowledge to Power: How AI Is Reshaping the World
Author(s): Michele Mostarda
Originally published on Towards AI.

Introduction
The journey you are about to embark on is a map of the near and distant future of artificial intelligence. Not an abstract list of possibilities, but a trajectory that begins with what is already before our eyes — search engines becoming assistants, apps dissolving into personal agents — and extends to the most radical scenarios, where AI could even assist governments and political systems in decision-making.
The chapters organize the content as a temporal countdown. Throughout this journey, you’ll find a common thread: how software is becoming increasingly autonomous, intelligent, and capable of shaping experiences, markets, and institutions.
A key clarification: this article isn’t about robotics. We won’t be covering mechanical arms, automated assembly lines, self-driving cars, or drone swarms. That’s a parallel chapter in the technological revolution — worthy of its own analysis, and perhaps a dedicated guide in the future.
The focus of this work is instead on the invisible heart of transformation: software. Not the bodies of machines, but their minds. The digital agents that live in our phones, in company servers, and in the services we use every day. Systems capable of collecting data, analyzing it, making decisions, and even generating content.
This intangible yet powerful space will host the most important game: the one concerning access to information, financial markets, creativity, health, and politics. This unseen terrain, lacking physical form, can profoundly alter our daily lives, institutions, and even the values upon which society is based.
1. From Research to Answer: How AI Agents Are Changing Access to Information
We’re witnessing a sea change: the traditional list of search results could soon be a thing of the past. We’ll no longer have to open dozens of links and manually compare information; AI agents will do it for us. These systems will perform searches, retrieve relevant content, analyze it, and return a clear, targeted summary. It’s the shift from “searching for pages” to “searching for answers,” with a radical impact on the user experience.
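In engineering terms, the loop behind such an agent is easy to outline: search, retrieve, synthesize. The Python sketch below is a minimal illustration of that pipeline under stated assumptions; all three helper functions are invented stand-ins for a real search API, a page fetcher, and a language model.

```python
# Minimal sketch of an "answer-first" search loop. The three helpers
# are toy stand-ins: a real agent would call a search API, a crawler,
# and an LLM at these points.

def web_search(query, k=3):
    # Stand-in for a search API call returning the top-k result URLs.
    return [f"https://example.org/result/{i}" for i in range(k)]

def fetch_page(url):
    # Stand-in for downloading and cleaning the page text.
    return f"(cleaned text of {url})"

def synthesize(question, sources):
    # Stand-in for an LLM prompt that asks for a cited summary.
    citations = ", ".join(url for url, _ in sources)
    return f"Synthesized answer to '{question}' (sources: {citations})"

def answer_first(question):
    urls = web_search(question)
    sources = [(url, fetch_page(url)) for url in urls]
    # The user receives one synthesized, cited answer instead of a list of links.
    return synthesize(question, sources)

print(answer_first("climate impact of meat production"))
```

The point of the sketch is the shape of the interaction: the ranked list of links disappears behind the final function call, which is exactly the shift this section describes.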

A concrete scenario
Imagine Sonia, a university student. She has to write an essay on the climate impact of meat production. A traditional search would inundate her with hundreds of scientific articles, blogs, institutional reports, and conflicting opinions. The hardest part wouldn’t be reading, but sorting through them: figuring out what’s reliable, what’s up-to-date, and what’s contradictory. With her AI-powered search engine, Sonia no longer starts with an endless list of links. Instead, she receives a streamlined overview: the latest emissions data, academic sources ranked by reliability, and concise infographics ready to be dropped straight into her essay. The agent also shows her the sources’ provenance and the margins of uncertainty, allowing her to decide how much to trust. The most surprising feature, however, is something else: the AI also offers counterarguments. Alongside data highlighting the environmental impact, it points out studies showing how innovative techniques — such as lab-grown meat or water-efficient supply chains — can mitigate part of the problem. Sonia doesn’t just summarize; she constructs a critical and balanced essay, closer to true academic reasoning than a simple collage of sources. For her, research is no longer a scattered process, but an ongoing dialogue with an assistant who filters, organizes, and stimulates critical thinking.
Socio-economic consequences
This new paradigm challenges the advertising model that has supported search engines for decades: if there are no longer links to click, how will the visibility of content and the online information market change? Publishers and content creators face a crucial challenge: being “read” and valued by AI agents. Those who adapt will connect with audiences in new, direct ways — while those clinging to old models risk fading into the background of machine-generated summaries.
Examples
The transformation is already underway. Google has introduced Gemini and AI Overviews in its search results; Microsoft has integrated Bing Chat, and new engines like Perplexity and You.com have emerged. All these products adopt a conversational approach in which queries no longer return a list of links, but a summary response with cited sources. They are still hybrid systems, maintaining the old logic alongside the generative one, but clearly indicating the direction of change.
Time horizon
0–2 years — Multimodal searches (text, images, voice) will become standard in the main engines. Already today, over 20% of Google users report having interacted at least once with Gemini/AI Overviews (Statista, 2024). The main limitation remains accuracy: hallucinations, delays in updating datasets, and copyright risks prevent a fully “blind” adoption. Indicators: the share of queries with generative answers compared to traditional SERPs and the number of publishers who choose to be indexed directly by agents.
3–5 years — AI agents will become capable of conducting complex and continuous research, such as long-term monitoring or cross-sectional comparative analyses. Within this window, 30–40% of global searches could be managed in an “answer-first” manner (McKinsey, 2024). Risks: inference costs are still high for long and multimodal queries, and regulatory resistance related to the transparency of sources and the economic impacts on publishers.
5–10 years — The traditional list of links could disappear entirely, replaced by organic and personalized results with adaptive interfaces. Assistants will become permanent and proactive: they will monitor interests, anticipate information needs, and provide real-time alerts. Risks: concentration of power (a few global providers controlling access to knowledge) and loss of information pluralism. Regulators, especially in Europe, could impose visibility quotas or extensive citation requirements to preserve a balanced ecosystem.
2. No more product and service comparisons, but tailor-made advice
The search for and selection of products and services are changing. Until now, we’ve relied on specialized sites that allow us to compare items based on technical specifications and price: phones, cars, and electronic components. These tools work well in some sectors, but they don’t cover the entire range of available goods. If today we wanted a detailed comparison of dietary supplements, cosmetics, or artisanal products, we’d be hard-pressed to find dedicated platforms.
With agentic AI, this limitation could disappear. Thanks to the combination of search, extraction, and analysis, intelligent agents will create personalized comparison tables in real time, even for categories that currently lack dedicated portals. In just a few seconds, what would take an expert hours of work can be summarized into clear, dynamic tables, tailored to the needs of each user.
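As an illustration of the final step, here is a toy Python sketch: once an agent has extracted comparable attributes, personalization reduces to a weighted score over normalized fields. The products and the user's weights are invented for the example, not drawn from any real catalog.

```python
# Toy ranking step for a personalized comparison table. The records
# would come from an extraction pipeline; here they are hard-coded.

products = [
    {"name": "VitaD 1000", "dose_iu": 1000, "price_eur": 9.5, "rating": 4.3},
    {"name": "SunBoost",   "dose_iu": 2000, "price_eur": 14.0, "rating": 4.6},
    {"name": "D3 Forte",   "dose_iu": 4000, "price_eur": 12.0, "rating": 4.1},
]

# The user's profile expressed as weights: price matters more than
# maximum dosage for this hypothetical buyer.
weights = {"dose_iu": 0.2, "price_eur": -0.5, "rating": 0.3}

def score(product):
    # Normalize each attribute against the maximum across products so
    # the weighted sum is scale-free.
    return sum(w * product[key] / max(p[key] for p in products)
               for key, w in weights.items())

for p in sorted(products, key=score, reverse=True):
    print(f"{p['name']:<12} {p['dose_iu']:>5} IU  €{p['price_eur']:>5.2f}  rating {p['rating']}")
```

Changing the weights changes the table: the same extracted data yields a different ranking for a different user, which is the whole promise of tailor-made comparison.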

A concrete scenario
Marco needs to buy a dietary supplement, but he’s inexperienced and gets lost among dozens of labels and conflicting reviews. Until now, he would have consulted unreliable forums or blogs, or e-commerce sites with promotional descriptions. With his AI product comparison tool, however, he only needs to ask one question: “Which vitamin D supplement is best for a 40-year-old man who spends little time in the sun and does light exercise?” In just a few seconds, he gets a clear comparison: a personalized table with active ingredients, dosages, average prices, verified reviews, and even alerts on potential side effects. What’s more, the agent identifies the safest products based on clinical data and directs him to the most reliable retailers. Marco no longer has to navigate through ten product pages: he finds the right answer, ready to use.
Socio-economic consequences
For e-commerce, this is a radical change. It will no longer be the user who has to navigate dozens of storefronts, but the agent who will bring only the most suitable solutions to the user’s attention. Brands will therefore be under increasing pressure to ensure data quality and transparency. Incomplete descriptions, unreliable reviews, or inaccurate specifications could result in AI being excluded from the selection process. The competition will therefore shift from flashy marketing to robust and verifiable information.
Examples
The current landscape is dominated by vertical comparators such as Versus, Kimovil, and PCPartPicker, which offer very detailed comparisons but remain confined to their respective categories. There are also AI demos such as GravityWrite, capable of generating product comparisons, but they are experimental tools, designed for a professional audience rather than for consumer use. In other words, there is not yet a universal smart comparator that covers all product categories transversally: this is precisely where agentic AI promises true disruption.
Time horizon
0–2 years — Traditional vertical comparators will continue to dominate, supported by a global market estimated at over 20 billion dollars (Allied Market Research, 2023). Consumer AI for product comparison will remain mostly experimental demos or internal tools within e-commerce platforms. Limitations: difficulty in ensuring the accuracy of the data collected, poor API integration in the most fragmented sectors (e.g., cosmetics, nutraceuticals). Risks: lack of shared standards and possible legal disputes over liability for comparisons.
3–5 years — The first intelligent comparators aimed at the general public will begin to emerge, capable of collecting data from heterogeneous sources and generating personalized dynamic sheets in real time. Users will be able to ask “which cream is best for my skin and my budget?” and receive not only a table, but also contextual recommendations. Indicators: growing share of product searches (20–30%) filtered by generative systems integrated into engines such as Google and Amazon. Limitations: risk of hallucinations (non-existent products or incorrect combinations), slow data collection from sources without APIs. Risks: resistance from companies to making prices and complete technical data sheets transparent, which can limit the database.
5–8 years — Classic comparators risk becoming marginal: autonomous personal agents will not only prepare the comparisons, but also make purchases directly on behalf of the user, choosing based on explicit preferences and behavioral history. Indicators: over 50% of online purchasing decisions influenced by personal AI agents (McKinsey, 2030 scenario). Limitations: complexity in managing trust — the user will have to understand whether the agent is acting in his or her best interests or in those of the provider. Risks: antitrust regulation and algorithmic transparency (the EU and the US could impose stringent constraints on the opacity of recommendation systems).
3. Beyond apps: the arrival of multimodal personal assistants
For over a decade, apps have mediated our relationship with technology: icons on phones or PCs, each with a specific function. But this paradigm is set to change. Multimodal always-on personal assistants promise to become the new primary interface: agents that live simultaneously on smartphones, PCs, smartwatches, headphones, or AR glasses, capable of understanding context (voice, location, screen viewed, even physiological state) and completing complex tasks from start to finish. No more juggling ten different apps to book a flight, fill out a form, or request a refund: a single assistant orchestrates everything, communicating with the systems on our behalf.
The difference compared to the voice assistants of the past (Siri, Alexa, Google Assistant) is radical. They could only execute simple commands. Imagine: “Book me a flight to Milan the day after tomorrow morning, the cheapest one that fits my schedule.” The agent not only searches and compares, but also cross-references commitments, calculates travel times, fills in payment details, sends the receipt to accounting, and adds the reservation to the calendar.
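Technically, this is tool orchestration: the assistant decomposes one natural-language request into calls to search, calendar, and booking services, then applies the user's constraints. The toy sketch below shows that decomposition for the flight example; all three "tools" are stubs standing in for real airline, calendar, and payment APIs, not actual services.

```python
# Toy orchestration of the flight-booking request above. Each "tool"
# is a stub; a real assistant would wire them to live services.

from datetime import date, timedelta

def search_flights(destination, day):
    # Stub: pretend a flight API returned two candidates.
    return [{"id": "AZ123", "departure": "07:10", "price_eur": 89},
            {"id": "FR456", "departure": "09:40", "price_eur": 65}]

def calendar_is_free(day, departure):
    # Stub: an early meeting rules out departures before 08:00.
    return departure >= "08:00"

def book_and_file(flight, day):
    print(f"Booked {flight['id']} on {day} at {flight['departure']} "
          f"for €{flight['price_eur']}; receipt sent, calendar updated.")

def handle_request():
    day = date.today() + timedelta(days=2)                # "the day after tomorrow"
    options = search_flights("Milan", day)
    viable = [f for f in options if calendar_is_free(day, f["departure"])]
    cheapest = min(viable, key=lambda f: f["price_eur"])  # "the cheapest one"
    book_and_file(cheapest, day)

handle_request()
```

What distinguishes this from a Siri-era command is the cross-referencing step: the calendar constraint filters the options before price is even considered.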

A concrete scenario
Lucia, a manager at a multinational company, wakes up at 6:30 a.m. She doesn’t pick up her smartphone: the personal assistant integrated into her smartwatch has already prepared her day. It has monitored her sleep quality, synced her schedule with her calendar, calculated traffic, and suggested moving a meeting by half an hour to avoid delays. During breakfast, Lucia simply says, “Find the cheapest flight to London on Friday that works for my schedule.” The assistant not only books the ticket, but also automatically adjusts reminders, adds the reservation to the shared calendar, sets the alarm earlier that day, and suggests a taxi ride at the optimal time. Later, while she’s driving, the AR headset reads a document to her and, at her nod, transforms it into a bullet-point summary to send to colleagues. In the evening, the assistant detects that Lucia is tired and suggests rescheduling a non-urgent call for the next day. In this daily routine, Lucia no longer interacts with individual apps but with a single agent that orchestrates devices, services, and decisions, becoming a true digital extension of her mind.
Socio-economic consequences
The arrival of always-on personal assistants will radically change our relationship with digital services. For consumers, it will mean greater convenience, less wasted time, and seamless access to information without navigating a thousand interfaces. For companies, it will be a shock: competition will no longer be between apps, but to be “chosen” by the agent. This will require transparency in pricing and terms, reduce customer lock-in, and push toward API-based models and interoperability. From an ethical perspective, significant risks arise: if a single agent filters all our digital decisions, who can guarantee it’s acting in our best interests and not those of the provider that develops it? And what happens to privacy if an entity collects and connects every fragment of our daily behavior?
Examples
Some signs are already visible. OpenAI’s GPT-4o and ChatGPT with memory represent the first multimodal agents that combine voice, text, and images and can be integrated into mobile devices. Samsung Galaxy AI and Google Gemini Nano bring similar functions directly to smartphones with on-device models. Devices such as the Humane AI Pin and the Rabbit R1 try to embody the idea of an always-on assistant, while remaining immature in terms of usability and diffusion. Even wearables like the Apple Watch are beginning to incorporate predictive health features and AI-powered contextual assistance.
Time horizon
0–2 years — Multimodal assistants will begin handling simple end-to-end tasks — booking travel, managing calendars, filling out basic forms — and will be integrated into consumer devices like smartphones and smart speakers. Indicators: diffusion of multimodal functions in models such as Gemini Nano and Apple Intelligence, already pre-installed on millions of devices by 2025–26. Limitations: high latency for complex processes, need for continuous connectivity, and limited contextual memory capacity. Risks: resistance related to the privacy of personal data and difficulties in regulating informed consent.
3–5 years — The daily use of multimodal assistants will become mainstream. Integrated into major operating systems and wearable devices, they will orchestrate activities across work, private life, and interactions with public and private services. Indicators: over 40% of voice searches and device-to-service interactions mediated by AI agents (Gartner estimate, 2027 scenario). Limitations: risk of contextual errors (wrong flight choices, unresolved scheduling conflicts), dependence on closed ecosystems (Google, Apple, Microsoft). Risks: possibility that dominant providers will limit interoperability, creating lock-in and holding back truly universal adoption.
5–8 years — The very concept of the “app” could dissolve, replaced by a model centered on intelligent agents that know us, track us, and act for us. Users will interact with services through personal agents that mediate access, creating a new economy based on “competition for the agent” more than for the app. Indicators: over 60% of personal digital transactions managed by AI assistants (McKinsey, 2030 scenario). Limitations: complexity in ensuring trust and transparency in choice algorithms, risk of personalized biases that reinforce habits that are not advantageous for the user. Risks: EU/US regulations on the opacity of decision-making systems and the concentration of power in the hands of a few providers. Some analysts (e.g., Shoshana Zuboff) warn that the model risks strengthening an even more pervasive surveillance capitalism, making the agent more loyal to the platform than to the user.
4. The dream of the web of data comes back to life
In 2001, Tim Berners-Lee, inventor of the World Wide Web, proposed the idea of the semantic web: an internet whose content would be described in structured, machine-readable form, so that software could extract unambiguous information from it. The goal was to transform the web into a vast “web of data,” where products, services, and knowledge could be aggregated and combined to generate new insights. The project never fully took off: there was a lack of financial incentives, and without a critical mass for adoption, the idea remained unfinished.
But today, AI agents can finally make that dream come true. Intelligent agents can extract structured data from text content and transform it into queryable information. This means that different sources can be collected and cross-referenced in real time: from booking a food and wine tour, combining winery, museum, and hotel opening hours, to more complex systems that integrate heterogeneous sources with no dedicated APIs.
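A minimal example makes the mechanism tangible. The sketch below turns two sentences of unstructured prose into queryable records with a regular expression; the text and extraction rules are invented, and a production agent would instead ask an LLM to fill a declared output schema, but the payoff is the same: once structured, the data can be filtered and joined like any table.

```python
# Turning unstructured prose into queryable records, the core move
# behind the "web of data" revival. Text and rules are invented.

import re

page = """
GreenWood srl sells FSC-certified larch panels, delivery in 10 days.
EcoBrick spa offers recycled clay blocks, delivery in 21 days.
"""

pattern = re.compile(r"(?P<supplier>\w[\w ]*?) (?:sells|offers) "
                     r"(?P<product>[^,]+), delivery in (?P<days>\d+) days")

records = [m.groupdict() for m in pattern.finditer(page)]
# Once structured, the records behave like rows in a database.
fast_suppliers = [r for r in records if int(r["days"]) <= 14]
print(fast_suppliers)
# [{'supplier': 'GreenWood srl', 'product': 'FSC-certified larch panels', 'days': '10'}]
```

No API, no open-data format: the structure is recovered from plain text, which is precisely what makes the agentic version of the semantic web economically viable.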

A concrete scenario
Marta is an architect designing a small, zero-impact building. She needs to know which suppliers in her region offer certified materials, what the average delivery times are, and what public incentives are available in her municipality. Today, she would have to spend days combing through ministerial PDFs, regional websites, and local company web pages, each with tables, regulations, and brochures written in different formats. With the new AI agent, however, Marta asks just one question: “What eco-certified materials can I purchase near me, quickly, with available incentives?” The agent visits company websites, reads PDF regulations, interprets ministerial press releases, and even municipal information posts, transforming everything into a single, clear, and verifiable table. No APIs are required, and the data doesn’t need to be published in an open format: the agent extracts, normalizes, and connects it instantly. In just a few minutes, Marta has a purchasing plan that integrates availability, prices, certifications, and tax contributions. A task that previously required weeks of manual research becomes an immediate and personalized process.
Socio-economic consequences
The difference compared to the past lies in the incentives. The use of intelligent agents reduces operating costs, simplifies access to services, and opens new monetization channels for providers. This creates a virtuous circle that could finally lead to the emergence of a true shared data ecosystem. For companies, it will mean greater efficiency and new business opportunities; for citizens, more transparent and immediate access to reliable information. But challenges will also emerge: data quality, preventing biases introduced by agents, and the risk of power being concentrated in the hands of those who control the aggregation tools.
Examples
Some projects already recall the spirit of the semantic web. Google Data Commons integrates large amounts of public data into a single queryable graph; DBpedia extracts structured information from Wikipedia and links it as Linked Data; Semantic MediaWiki allows content to be enriched with metadata; in specific sectors, such as climate and energy, semantic knowledge graphs are being created for research and policy. At the same time, we will see the first experiments of an agentic web where AI agents transform unstructured data into coherent, navigable information.
Time horizon
0–2 years — These applications will remain vertical and niche: experimental dashboards, mashups limited to structured datasets, and tools used by SMEs or research centers. The main bottlenecks will be the quality of available data and interoperability: many sources don’t offer common APIs or standards, forcing manual scraping or conversions. Indicators: number of open-access datasets made available under machine-readable licenses and diffusion of AI-driven data visualization tools in SMEs.
3–5 years — Intelligent agents will automate data mashup on a larger scale, combining heterogeneous sources (finance, health, environment, consumption). Within this window, the OECD Digital Outlook 2024 expects 30% of European SMEs to adopt AI tools for data analysis and visualization. Risks: lack of shared standards across platforms, risk of bias in incomplete or manipulated datasets, and still high costs for complex queries and real-time updates.
5–10 years — We could witness the birth of real consumer services, capable of generating dynamic and personalized visualizations upon user request. Digital assistants will become capable of connecting public and private sources in real time, building interactive knowledge tailored to individual preferences. Risks: market concentration in the hands of a few global providers, increasing reliance on proprietary datasets, and regulatory challenges to ensure transparency in data sources.
5. Education: from universal tutor to learning companion
Education is one of the fields where AI can have the most transformative impact. Intelligent tutors are already emerging as tools capable of guiding students step by step, generating targeted exercises, explaining the same concept in different styles, and adapting to their individual pace. Unlike traditional e-learning platforms, these new learning agents are interactive, multimodal, and capable of building a truly personalized learning path. They don’t just check whether an answer is right or wrong, but offer targeted feedback, study options, and ongoing assessments that inform customized plans.
This transformation overturns the paradigm of standardized education: schools today must be organized into homogeneous classes, with the same time and content for everyone, but tomorrow, learning could become deeply individualized. An AI tutor could explain algebra with football examples to sports enthusiasts or with musical metaphors to instrument players, ensuring a deeper and more motivating understanding.
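Reduced to code, the personalization step is a mapping from a concept and a learner profile to an explanation. The sketch below hard-codes that mapping for one algebra concept; the templates are invented examples, and a real tutor would generate the variants with a language model and track mastery over time.

```python
# Toy version of interest-based explanation: the same equation, three
# framings. Templates are invented; a real tutor would generate them.

EXPLANATIONS = {
    "soccer": ("Your team scored x goals per match across 4 matches, "
               "12 goals in total: 4x = 12, so x = 3."),
    "music": ("A bar holds x beats and 4 bars hold 12 beats: "
              "4x = 12, so x = 3."),
    "default": "Solve 4x = 12 by dividing both sides by 4: x = 3.",
}

def explain(interest: str) -> str:
    # Fall back to the neutral framing when no profile matches.
    return EXPLANATIONS.get(interest, EXPLANATIONS["default"])

print(explain("soccer"))   # the framing a sports enthusiast would see
print(explain("music"))    # the framing an instrument player would see
```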

A concrete scenario
Luca, 14, struggles with math but loves soccer. When he returns home from school, he opens his AI tutor. Instead of a standard assignment, the agent presents him with an algebra problem modeled after calculating the scores of a Champions League match. When Luca gets a step wrong, the tutor doesn’t just point out the error: it guides him step by step, offering alternative explanations, interactive graphics, and even short simulations. His sister Giulia, meanwhile, is learning English. The AI tutor presents her with an interactive dialogue set in a London restaurant, adapting the level of the sentences to her pronunciation and the speed with which she responds. When she encounters a difficulty, the system provides suggestions in real time, just like a private tutor would. For parents, the tutor generates a weekly report: it shows progress, gaps, and targeted advice, helping families understand how to support their children. For schools, the same data becomes a support for teachers, who can personalize classroom lessons rather than following a rigid, uniform curriculum.
Socio-economic consequences
The widespread adoption of AI tutors brings enormous opportunities but also significant risks. On the positive side, it could democratize access to quality education: students in countries or communities with few qualified teachers could receive ongoing, personalized support. It could also reduce the gap between those with access to private tutoring and those without. For teachers, AI will not replace their role, but will relieve them of some of the repetitive burden (testing, marking), allowing them to focus on empathy, motivation, and the development of critical thinking.
Risks include reliance on private platforms, with privacy concerns regarding student data, bias in generated content, and the possibility of systems becoming tools for standardization rather than personalization. Economically, the growth of educational AI will fuel a new global edtech market, based on subscriptions, institutional licensing, and integration into school systems.
Examples
Concrete signs are already visible. Khan Academy launched Khanmigo, an AI tutor that guides the student with Socratic questions and personalizes learning. Duolingo Max integrates GPT for dynamic language exercises and tailored explanations. Socratic, a Google app, explains school problems step by step. In China, Squirrel AI is pioneering the use of adaptive AI to build truly personalized educational plans.
Time horizon
0–2 years — We will see serious pilot projects, especially in extracurricular activities (online tutoring, language courses, professional training). AI tutors will be used as complementary support, never as a replacement, with strong human supervision. Indicators: share of schools launching official trials (<10% in the EU according to OECD Education 2024), number of platforms obtaining educational certifications. Limits: inconsistent accuracy, risk of hallucinations, need for moderation by human teachers/tutors.
3–5 years — Large-scale AI tutoring platforms will become a reality, especially in global extracurricular ecosystems (MOOCs, edtech, corporate courses). Within this window, HolonIQ (2024) estimates that over 40% of students worldwide will regularly use an AI tutor. Risks: trade union and cultural resistance, inequalities in access (gap between rich and poor schools), and infrastructure costs in emerging countries.
5–10 years — Full integration into official school systems could transform teaching: from rigid and standardized programs to dynamic and personalized paths for each student. AI assistants will become an integral part of the ministerial platforms, ensuring continuous monitoring and adaptive curricula. Risks: excessive dependence on a few global platform providers, loss of pedagogical autonomy for teachers, and the need for strong regulations on privacy and sensitive data of minors.
6. When software will write itself
Today, agents already exist that can build simple, quick-to-implement, and surprisingly effective web or mobile applications. But this is just the beginning: the level of complexity these systems will be able to handle is destined to grow rapidly. Eventually, we will be able to entrust the entire lifecycle of a software project to AI: from requirements gathering to design, from front-end and back-end development to testing and production deployment.
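That lifecycle can be pictured as a pipeline with a human checkpoint before release. The Python sketch below is only a shape: each stage is a stub standing in for an AI agent, and the stage names and sign-off step are illustrative assumptions, not any real platform's API.

```python
# The AI-driven software lifecycle as a toy pipeline. Every stage is a
# stub standing in for an agent; the hand-off structure is the point.

def gather_requirements(brief):
    return {"brief": brief, "features": ["itineraries", "booking", "payments"]}

def design(spec):
    return {"spec": spec, "stack": "web front end + REST API + payments"}

def implement(design_doc):
    return {"design": design_doc, "code": "<generated sources>"}

def run_tests(build):
    return {"build": build, "tests_passed": True}

def product_owner_signs_off(result):
    # The human checkpoint: requirements and content are reviewed here.
    return result["tests_passed"]

def deploy(result):
    print("Deployed:", result["build"]["design"]["stack"])

result = run_tests(implement(design(gather_requirements("tour booking portal"))))
if product_owner_signs_off(result):
    deploy(result)
```

Note where the human sits: not inside any stage, but at the gate between the pipeline and production, which is the supervisory role the rest of this section describes.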

A concrete scenario
Agnese works for a small cooperative that specializes in local tourism. Until now, whenever a client requested a personalized booking platform, they had to turn to an external agency, which was prohibitively expensive. One day, she decides to try an AI-powered development tool: she describes in words what she wants — “a portal where tourists can view itineraries, book guided tours, and pay online” — and within hours, she has a working prototype. The tool generates the code, the interface, and even the payment processing. Agnese doesn’t have to write a single line of code: her job is to supervise, ensuring the itineraries are clear and the prices are accurate. Within a week, the site is online, and the cooperative can finally offer a digital service without having to spend huge sums. The experience also changes her role: from “a client commissioning software” to a “product owner” capable of shaping and directing projects. Her task is no longer technical, but strategic: defining what is needed, who needs it, and why. AI does the rest.
Socio-economic consequences
The human role will not disappear, but it will change profoundly. The developer will change from a “code worker” to a strategic product owner, focused on goals, priorities, and the overall vision. People will be freed from repetitive tasks to focus on innovation and product value. Certain technical skills will lose their centrality in the labor market, while demand will grow for professionals capable of managing complex projects, translating business needs into clear requirements, and supervising the work of AI agents. Overall, software professions will be less hands-on and more focused on strategy, communication, and the ability to drive intelligent systems.
Examples
Some tools already show the potential of this evolution. Replit, Lovable, and Bolt allow users to describe an app in natural language and create a working prototype in just a few minutes, complete with front-end, back-end, testing, and deployment. These solutions are still limited to simple projects, but they demonstrate how automatic software generation is moving from theory to practice.
Time horizon
0–2 years — Automatic software generation tools will continue to grow in code quality (bug reduction, assisted refactoring) and usability. Already today, over 30% of professional developers use GitHub Copilot or an equivalent (Stack Overflow Survey 2024). Immediate risks: inference costs remain high for large projects, fragmentation among non-interoperable tools, and licensing/copyright concerns about generated code. Indicators: share of companies integrating AI coding assistants into official development processes, growth of the “AI in Software Development” market estimated at about $20 billion by 2026 (Markets&Markets).
3–5 years — The platforms will be able to manage more complex full-stack projects, including mobile apps and systems integrated with APIs and databases. During this period, up to 40–50% of code in new company projects could be produced by AI (McKinsey, 2024). The human role will shift towards supervision, architectural design, and strategy, with the main risks related to safety, quality of training datasets, and the difficulty of auditing the generated code. Indicators: diffusion in SMBs and enterprise IT departments, increasing number of “AI-first” platforms used for MVPs and prototypes.
5–8 years — Automatic software generation will become common practice, especially for startups and SMEs that will be able to launch complete products with small teams. Almost entirely AI-driven pipelines will manage development, testing, and deployment, leaving humans with a key role as supervisor, strategist, and validator. Opportunity: drastic cost reduction and accelerated innovation cycles. Risks: market concentration in a few global providers, inconsistent security and compliance standards, and the potential loss of technical know-how among new generations of developers.
7. Cybersecurity and the invisible war between AI
Artificial intelligence isn’t just a driver of positive innovation: it can also become a weapon in the hands of attackers. Already today, we can glimpse agents capable of analyzing complex systems and quickly identifying vulnerabilities to exploit, and in the future, these attacks will become increasingly sophisticated, to the point of rendering traditional defenses insufficient. The only effective response will be to rely on AI-based defenses, agents capable of constantly monitoring systems, detecting anomalies in real time, and responding with immediate corrections.
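One defensive building block is easy to demonstrate: statistical anomaly detection over access logs. The sketch below uses scikit-learn's IsolationForest on invented login features; real systems combine many such detectors with automated response playbooks, so this is an illustration of the principle, not a working defense.

```python
# Flagging an anomalous access event with an Isolation Forest.
# Features and data are simulated for the example (requires scikit-learn).

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated normal logins: business hours, modest transfer volumes.
normal_logins = np.column_stack([
    rng.normal(11, 2, 500),    # hour of day
    rng.normal(5, 1.5, 500),   # MB transferred per session
])
# One suspicious event: 3 a.m. access with a large, exfiltration-like transfer.
suspect = np.array([[3.0, 40.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)
print(detector.predict(suspect))   # [-1] means flagged as anomalous
```

The model never needs a signature of the attack: it flags the event simply because it sits far outside the learned pattern of normal behavior, which is what lets AI defenses react to threats no one has seen before.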

A concrete scenario
Enrico is the IT manager of a medium-sized manufacturing company. One morning, he receives an alert from the security system: suspicious access to the company servers from a foreign IP address. Before his team can even open the logs, the AI defense agent has already identified the anomalous behavior, isolated the compromised machine, and redirected traffic to a secure environment. Meanwhile, a concise report appears on Enrico’s screen: “Intrusion attempt using stolen credentials. Mitigation complete. No data exfiltrated.” The attack is never noticed by his employees, who continue working uninterrupted. What Enrico doesn’t see is the “invisible” battle taking place behind the scenes: another agent, this time hostile, was testing vulnerabilities in the system. The defensive AI reacted faster than a human could ever have, updating its protection algorithms in real time. Enrico, instead of spending hours putting out fires, can focus on strengthening company procedures and training staff. His role shifts from reacting to attacks to strategic prevention.
Socio-economic consequences
The transformation of cybersecurity into an invisible war between AI has implications that go beyond technology. At the geopolitical level, the spread of offensive agents developed by states, criminal groups, or terrorists could trigger a true digital arms race, with escalation risks that are difficult to control. It is therefore urgent for governments and institutions to define international rules and agreements that balance innovation and security. Businesses and citizens will also be affected: hyper-realistic phishing attacks, customized malware, and deepfakes will become accessible to everyone, requiring a combination of technological defenses and cultural training. On the positive side, AI can make systems more resilient, reduce response times from days to seconds, and automate patch releases, but the question of trust remains: to what extent can we delegate digital defense to autonomous entities?
Examples
The market is already showing the first concrete signs. Darktrace, with its Antigena platform, uses AI to detect and neutralize threats in real time; Microsoft Security Copilot integrates language models to translate complex logs into defensive actions. Startups like Reco monitor the anomalous use of SaaS applications, while emerging players, such as Vastav.AI, are developing countermeasures against deepfakes. Academia is also contributing: projects like CYGENT and HuntGPT are experimenting with models capable of transforming huge volumes of logs into clear, prioritized alerts, reducing the burden on human operators.
Time horizon
0–2 years — AI-driven cyber defense solutions will become more widespread and refined, especially for automatic triage (filtering real events from false positives) and immediate response to known threats. Already today, over 35% of global companies use AI systems for cybersecurity (Capgemini, 2023). Current risks: high number of false alerts, integration costs, and lack of qualified personnel to supervise agents. Indicators: share of IT budget allocated to AI-driven solutions (today approximately 15–20%, Gartner 2024), number of attacks detected and neutralized without human intervention.
3–5 years — AI agents will be permanently integrated into corporate and government infrastructures, capable of defending themselves even from never-before-seen attacks through few-shot and continual learning techniques. Within this window, it is expected that over 60% of large enterprises will adopt native AI in cybersecurity (BCG, 2024). Opportunity: almost zero reaction times, predictive ability on attack patterns. Risks: possible vulnerabilities of the AI models themselves (data poisoning, adversarial attacks), dependence on centralized cloud providers, and difficulty in auditing and explaining decisions.
5–8 years — The systems will reach a level of proactive autonomy, capable of recognizing unprecedented patterns and adapting defense strategies in real time. This could lead to “autonomous digital warfare” scenarios, with agents directly engaging each other in cyberspace without human intervention. The challenge will not only be technical, but also ethical and political: who is responsible for an automated counteroffensive? How can we govern a digital conflict waged by increasingly autonomous machines? Indicators: first international regulatory policies for the use of autonomous AI in cyberwarfare, percentage of incidents mitigated without direct supervision, and documented cases of escalations avoided or exacerbated by AI.
8. No more one-size-fits-all interfaces, but tailor-made experiences
For years, the web was a “uniform” environment: sites had a fixed layout, standard graphics, and a precise way of representing data. Whether it was statistics on the cost of living, sports scores, or company balance sheets, users were forced to adapt to the rendering chosen by the content producer. With the arrival of AI agents, this paradigm is fading. Agents can extract raw data from any content and regenerate it into dynamic, personalized representations. The same set of information can take on different forms depending on the context, preferences, and even the cognitive abilities of the individual user.
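The decoupling is easiest to see in code: the data stays fixed while a rendering function, parameterized by the viewer's profile, decides the form. The profiles and figures below are invented for illustration.

```python
# One raw dataset, two viewer-specific renderings.

revenue_keur = {"Q1": 120, "Q2": 135, "Q3": 128, "Q4": 150}

def render(data, profile):
    if profile == "strategic":        # trends and totals for an owner
        growth = (data["Q4"] - data["Q1"]) / data["Q1"]
        return f"FY total {sum(data.values())} k€, Q1 to Q4 growth {growth:.0%}"
    if profile == "operational":      # quarter-by-quarter bars for a PM
        return "\n".join(f"{q} {'#' * (v // 10)} {v}" for q, v in data.items())
    return str(data)                  # raw fallback

print(render(revenue_keur, "strategic"))
print(render(revenue_keur, "operational"))
```

The numbers never change between the two calls; only the representation does, which is exactly the separation of data from rendering that this section describes.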

A concrete scenario
A small consulting firm manages its performance through an AI system that collects raw data on clients, revenue, and project schedules. The same numbers are rendered differently depending on who consults them, because the AI builds each view tailored to each user’s preferences and habits. The owner, Giorgio, opens the dashboard and finds a panel modeled after his decision-making style: cash flow projections, margins on individual projects, insolvency risks, and industry benchmarks. This information is selected and presented in his preferred format — predictive graphs and comparative tables — to guide his strategic decisions. Marta, the project manager, accesses a visualization built around her criteria: an operational dashboard with progress bars, hours allocated to teams, and visual alerts on delays. The view reflects her need for immediacy and practical control, without getting lost in financial details. The same system thus becomes two different tools: strategic for Giorgio, operational for Marta. The AI doesn’t change the data, but interprets and reorganizes it based on the viewer, transforming the same numerical reality into personalized and relevant experiences.
Socio-economic consequences
The ability to completely decouple data from its visual representation opens up a new ecosystem of services. Data will increasingly become an “API” service, distributed in raw form and made accessible through user-tailored visualizations. This brings enormous advantages in terms of accessibility, transparency, and inclusiveness: anyone will be able to read data in the format best suited to their needs. But new challenges also emerge: if every user sees a different representation, who guarantees that the substance has not been altered or distorted? For companies, the impact is significant: competition will no longer be just over data ownership, but over the ability to reliably interpret it. On the social side, the information experience risks fragmentation: “shared truth” could give way to subjective and potentially manipulable representations.
Examples
Some signs of this transition are already visible. Fitness apps and educational platforms are adjusting their content and layout based on individual behavior. Tools like Google Stitch generate interfaces from text prompts or images, while solutions like Polymer AI Dashboard Generator create custom data visualizations. On the academic front, prototypes such as Drillboards and SituationAdapt showcase adaptive dashboards and mixed reality interfaces that can transform based on the user’s context and skills.
Time horizon
0–2 years — The solutions will remain limited to adaptive layout and simple basic preferences (e.g., light/dark mode, text resizing, dashboard customization). Indicators: share of consumer apps with adaptive UIs over 30% (data already observed in mobile banking and e-learning, Statista 2024). Risks: lack of standards, with fragmented experiences across platforms; perception of “cosmetic novelty” that limits actual adoption.
3–5 years — The first dynamic generative interfaces will emerge, capable of changing in real time based on the user’s actions. Indicators: penetration of the concept of “adaptive dashboards” in B2B SaaS (over 25% of enterprise tools by 2028, Gartner); first ISO guidelines for generative design. Risks: latency and computational costs in live adaptation; fears of loss of control by users (“the interface decides for me”).
5–8 years — UIs will become truly proactive, learning from environmental context and habits until they anticipate needs without explicit intervention. Indicators: diffusion of devices with adaptive UIs (projected over 100M global users by 2030, McKinsey); increased spending on generative UX. Risks: privacy violations if personal context is tracked opaquely; dependence on a few global providers that control generative UI libraries.
8–10 years — Interfaces will become elastic and pervasive, transforming not only based on personal profile, but also on emotional factors (mood, stress, fatigue) or environmental factors (light, noise, social context). Indicators: first clinical studies on the use of emotional UIs in healthcare and edtech; penetration into government and institutional systems. Risks: “over-adaptation,” which reduces pluralism and comparison (everyone sees only their own version of digital reality).
9. The video game that never repeats itself
Artificial intelligence is revolutionizing gaming, introducing engines and agents capable of dynamically generating scenarios, characters, and missions. The underlying idea is powerful: no two games will ever be the same. Real-time environments, natural-interaction NPCs, and missions that adapt to the player’s profile mark the transition from a scripted and deterministic model to self-generated and unique experiences.
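In miniature, the mechanism looks like this: a shared seed guarantees a common starting point, while the player's profile conditions what gets generated. The quest table below is invented for the example; a real engine would hand the seed and profile to a generative model rather than a lookup table.

```python
# Profile-conditioned quest generation from a shared seed.

import random

QUESTS = {
    "narrative": ["decode the hermit's riddle", "earn the tree spirit's story"],
    "tactical":  ["clear the flooded dungeon", "ambush the raiders at dusk"],
}

def generate_quest(seed: int, play_style: str) -> str:
    rng = random.Random(seed)              # same seed: shared world state...
    return rng.choice(QUESTS[play_style])  # ...but style-specific content

print(generate_quest(42, "narrative"))  # one player's branch
print(generate_quest(42, "tactical"))   # another player's branch
```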

A concrete scenario
There are two players, Aria and Zeno. They begin the same generative game on an autumn day. They both start in a mysterious village on the edge of a forest, but thanks to the AI agent, their experiences diverge completely. Aria, a lover of storytelling, finds a path filled with poetic dialogue, characters with moral depth, and quests that push her to explore the human side of the forest: among sages who speak in riddles and tree spirits who share ancient stories. Zeno, on the other hand, prefers strategic action: his itinerary is dominated by warlike trials, dungeons with aggressive creatures, and combat that requires timing and tactical decisions. The AI shapes the game world not with fixed scenarios, but by adapting settings, tone, and conflicts to its style. At the end of the session, Aria and Zeno compare notes: they have played games with the same title, but Aria has discovered legends and secrets, and Zeno has overcome challenges and clashes. Both have the same starting point and shared data, but views and paths tailored specifically for them.
Socio-economic consequences
For the video game industry, the “no two games are the same” paradigm presents both opportunities and challenges. On the one hand, the longevity of titles could increase dramatically, with games capable of entertaining for years without becoming repetitive, enabling new business models based on subscriptions, personalization, and tailored experiences. On the other hand, the risk of losing creative control emerges: a content generation that is too autonomous could sacrifice narrative coherence, balance, and overall quality. Studios will have to reinvent their pipelines and roles: less manual work on assets, more strategic oversight and direction. For players, this means more engaging and personalized experiences, but also the risk of cultural fragmentation: the “shared game” that builds community could give way to individual and unique experiences.
Examples
The sector is already an active laboratory. Inworld AI offers NPCs with memory and objectives that communicate naturally (the Inworld Origins demo). Artificial Agency works on goal-driven behavioral engines for non-scripted characters; modl.ai develops agents for QA and level balancing, simulating “virtual players.” Scenario and Kaedim accelerate the creation of 2D/3D assets, while Latent Technology generates reactive animations. On the narrative front, Charisma.ai and Hidden Door experiment with multi-branch storytelling, while UGC worlds like DreamWorld integrate generative construction. Among playable products, AI Dungeon was the first example of real-time generative storytelling; even mainstream titles like Candy Crush use AI to adapt, albeit within limits, to level generation.
Time horizon
0–2 years — Early prototypes and indie titles with semi-generative narrative and environments. The games offer alternative missions and more fluid dialogue thanks to LLMs, but narrative coherence remains fragile. Indicators: number of indie games using LLM-driven conversational NPCs; beta testing adoption on platforms like Steam Early Access. Limits: weak narrative coherence, high inference costs, and a lack of integrated authoring tools for developers. Risks: inflated expectations compared to actual capabilities; risk of inconsistent or inappropriate content.
3–5 years — Birth of real hybrid game engines, where level design, missions, and dialogue are dynamically generated based on the player’s profile. Open-world environments that adapt to preferences (exploration vs. combat). Indicators: adoption in mid-tier/AAA titles; integration of generative plugins into major engines (Unity, Unreal); growth of dynamically generated assets (Scenario, Kaedim, Inworld). Limits: latency in real-time content generation; lack of standards for testing and balancing generative gameplay; difficulty ensuring balanced experiences. Risks: excessive variability that undermines online competitiveness; unbalanced generated content (missions that are too easy or too difficult).
5–8 years — Diffusion of proactive engines: the game not only responds to actions, but anticipates the player’s style, offering tailored narratives. Stories are no longer branched but truly open, constructed in real time. Indicators: over 30% of AAA games integrate generative systems for questing, storytelling, and world-building; growing active communities developing mods based on AI-driven engines. Limits: training sets still expensive; difficulty maintaining cross-session consistency (remembering 100+ hours of play). Risks: loss of creative control by developers; risk of increased addiction (experiences that are too “tailor-made”).
8–10 years — Arrival of fully generative game engines: universes that are built from scratch each session, with narrative coherence, adaptive rules, and persistent worlds that evolve alongside the players. Every game is unique. Indicators: generative engines as standard in AAA titles; first fully generative UGC (user-generated content) platforms powered by AI; widespread use in VR/AR. Limits: massive cloud infrastructure required; high energy costs; need for new metrics to balance generative gameplay. Risks: concentration in a few global providers (generative engine monopolies); ethical risks related to uncontrollable narratives (bias, toxic or manipulative content).
10. Machines that design machines
For centuries, design has been the exclusive domain of human ingenuity. Engineers, architects, and designers have always been tasked with analyzing constraints, developing solutions, and designing complex systems. But what will happen when this ability passes — at least in part — to machines?
AI will not be limited to writing software or generating personalized experiences. In the near future, it will be able to conceive complete engineering systems: infrastructures, systems, electronic circuits, even mechanical components, and urban architecture. We’re not talking about simple CAD-assisted designs, but real digital co-designers, capable of evaluating scenarios, simulating performance, optimizing materials, and suggesting innovative solutions that a single human team would find difficult to explore.
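At its core, generative design is a generate-evaluate-filter loop under constraints, and it can be sketched in a few lines. The toy example below anticipates the drone-cooling scenario that follows; the "physics" is a stand-in formula invented for illustration, where real tools would run full simulations at the evaluation step.

```python
# Generate-evaluate-filter, the loop behind generative design, shrunk
# to a toy heat-sink problem. The formulas are illustrative stand-ins.

import itertools

fin_counts = range(10, 41, 5)
materials = {"aluminium": {"density": 2.7, "conductivity": 237},
             "copper":    {"density": 8.9, "conductivity": 401}}

def evaluate(fins, material):
    props = materials[material]
    weight_g = fins * 0.6 * props["density"]         # toy mass model
    cooling = fins * props["conductivity"] / 100     # toy cooling score
    return {"fins": fins, "material": material,
            "weight_g": round(weight_g, 1), "cooling": cooling}

designs = [evaluate(f, m) for f, m in itertools.product(fin_counts, materials)]
feasible = [d for d in designs if d["weight_g"] <= 200]   # weight constraint
best = max(feasible, key=lambda d: d["cooling"])          # maximize cooling
print(best)
```

The human contribution is concentrated in two places: stating the constraints and choosing among the validated survivors, which is the "strategic selection" role the scenario below illustrates.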

A concrete scenario
In a small mechanical design company, Livia, the chief engineer, and Carlo, a young project manager, must develop a new cooling system for a line of industrial drones. Traditionally, this would have required weeks of analysis, CAD drawings, and iterative simulations. This time, however, they activate a specialized AI agent: Livia enters only the functional requirements (“cooling up to 40°C in high-dust environments, maximum weight 200 grams”), while Carlo specifies the financial and supply constraints. In a few hours, the AI generates dozens of design variants complete with technical diagrams, fluid dynamic simulations, and cost estimates. The system doesn’t work blindly: it shows Livia the engineering results with graphs and stress tests, while Carlo receives financial dashboards, material comparisons, and production time estimates. Everyone receives a personalized view, built on their priorities and skills. Ultimately, the team no longer discusses preliminary designs to be refined, but chooses from already simulated and validated solutions. In practice, design becomes a strategic selection process, with AI acting as an invisible driver of innovation.
Socio-economic consequences
This development could dramatically reduce design times and costs, lowering barriers to entry in capital-intensive sectors such as automotive, construction, or advanced manufacturing. Small companies and startups could access design capabilities previously reserved for large industrial groups. On the other hand, concentration risks will emerge: whoever controls the most powerful design models and datasets will have a huge, potentially unbridgeable competitive advantage. Furthermore, the role of engineers will change: fewer designers and more supervisors, called upon to ensure safety, ethics, and regulatory compliance.
Examples
We are already seeing the first signs today. Systems like Autodesk Generative Design or Siemens NX with integrated AI allow users to explore thousands of design variants optimized for weight, strength, or cost. In the semiconductor industry, tools such as Synopsys DSO.ai design chips with reduced power consumption and improved performance. In construction, experiments with generative urban design create entire virtual neighborhoods by evaluating traffic, energy consumption, and environmental impact. These are still support tools, but they represent a preview of what could become a nearly autonomous design cycle.
Time horizon
0–3 years — Diffusion of vertical generative design tools, already growing today: AI software for chip design (e.g., Synopsys DSO.ai), modeling of mechanical components (AI-driven generative design), and parametric architecture (Autodesk Forma). Indicators: share of manufacturing companies that adopt generative design tools (currently estimated at 18% globally, McKinsey 2024), number of AI-assisted patents in the engineering sector. Current limitations: limited capacity in narrow domains, high reliance on proprietary training datasets.
3–5 years — Appearance of systems capable of orchestrating the entire design cycle in specific sectors: not only designing concepts, but also integrating physics simulations, choice of materials, and cost analysis. Indicators: first implementations in automotive and modular construction, with estimated reductions in project times of up to 30–40% (BCG, 2025). Risks: difficulty in validating structural safety, computational costs of multi-variable simulations, regulatory resistance for critical applications (infrastructure, aerospace).
5–10 years — Birth of real “independent design laboratories”: AI ecosystems in which models generate concepts, validate them through complex simulations, and propose solutions ready for production. Humans remain supervisors and decision makers, but much of the pipeline — from engineering creativity to verification — is automated. Indicators: share of complex projects (bridges, chips, turbines) largely generated by AI, reduction of industrial R&D costs estimated at up to 50% (OECD, 2026). Risks: concentration of power in the companies that control the design platforms, lack of transparency in algorithmic decisions, and possible biases embedded in simulation models.
11. A doctor in every pocket
Anyone who has tried ChatGPT or similar systems has found themselves, at least once, asking for advice on symptoms or treatments. Today, these responses should be treated with caution, but in the future, AI could become institutionalized interlocutors within healthcare systems. Medicine already uses machine learning models to analyze X-rays, MRIs, or blood tests, identifying anomalies that the human eye might miss. Increasingly advanced generative models add the ability to communicate with patients, gather information, cross-reference it with large knowledge bases, and propose diagnostic or therapeutic hypotheses.
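A deliberately simplified sketch shows the shape of automated triage: streamed vitals scored against thresholds to produce a risk tier. Every threshold below is invented for illustration; validated clinical scores such as NEWS2 are regulated instruments built very differently, and nothing here is medical advice.

```python
# Toy triage scoring over three vital signs. Thresholds are invented.

def triage(heart_rate_bpm, spo2_pct, systolic_bp_mmhg):
    score = 0
    score += 2 if heart_rate_bpm > 120 else 1 if heart_rate_bpm > 100 else 0
    score += 2 if spo2_pct < 92 else 1 if spo2_pct < 95 else 0
    score += 2 if systolic_bp_mmhg < 90 else 0
    if score >= 3:
        return "urgent: route to emergency care and alert the on-call doctor"
    if score >= 1:
        return "elevated: suggest a teleconsultation within hours"
    return "low: continue monitoring"

print(triage(heart_rate_bpm=118, spo2_pct=93, systolic_bp_mmhg=105))
```

What the scenario below adds on top of this skeleton is context: the same vitals are interpreted against the patient's history and millions of comparable cases, not against fixed cutoffs.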

A concrete scenario
Sofia lives in a small mountain village where her GP is only available twice a week. One evening, she feels persistent chest pain and, unsure whether to go to the emergency room immediately, opens her healthcare app, connected to the biometric bracelet she’s been wearing for months. The AI agent collects her vital data in real time (heart rate, oxygen saturation, blood pressure), comparing them with her medical history and millions of similar cases in a certified database. After a few seconds, the platform provides her with a risk assessment: not just a generic warning, but a detailed triage with percentages, possible causes, and a clear recommendation: “Go to the nearest emergency room immediately. We’ve notified the on-call doctor, who will receive your updated data in real time.” When Sofia arrives at the hospital, the cardiologist doesn’t have to start from scratch: on his tablet, he finds an AI-generated report already available, with graphs of parameter trends, a summary of her medical history, and possible diagnoses ranked by probability. This allows him to intervene immediately, saving precious time. For Sofia, AI was a vital filter and mediator between her and the healthcare system. It didn’t replace a doctor, but it acted as a bridge between her symptoms and specialized care, transforming a concern into a lifesaving action.
Socio-economic consequences
The impact of this transformation would be enormous. AI can reduce healthcare costs through faster diagnoses, optimize resource use, expand access to care in countries with a shortage of doctors, and enable mass preventive medicine based on continuous monitoring. At the same time, significant risks emerge: healthcare data management requires high standards of privacy, and legal liability for diagnostic errors remains an unresolved issue. The human role does not disappear: empathy, clinical judgment, and responsibility remain essential, but the doctor of the future will work side by side with AI, in a more efficient and inclusive system.
Examples
Concrete applications already exist today. Symptoma, Babylon Health, and Ada guide users through initial triage; Aidoc, PathAI, and Zebra Medical Vision apply AI to medical image analysis, identifying anomalies invisible to the human eye. Microsoft Dragon Copilot helps doctors transcribe visits and summarize clinical data, reducing bureaucratic burden. Companies like DxGPT and Cera demonstrate how AI can support GPT-based diagnoses or predict risks for elderly patients.
Time horizon
0–3 years — Widespread adoption of virtual assistants for initial triage, booking management, and patient–hospital interactions. At the same time, the use of AI in medical image analysis (X-rays, MRIs, CT scans) keeps growing under constant supervision by doctors. Indicators: over 30% of hospitals in Europe already report using AI for diagnostic support (OECD Health Data, 2024); the global AI-in-healthcare market is estimated at $28 billion in 2025 (Statista). Limits: risk of false positives/negatives, lack of diversified datasets, and high costs of integration into hospital systems.
3–5 years — AI integrates deeper into clinical processes: dissemination of predictive systems for chronic diseases (diabetes, heart failure), continuous monitoring through intelligent wearables, and consolidation of institutionalized digital assistants to support doctors. Indicators: growth of the market for medical wearables with integrated AI, expected to surpass $60 billion by 2027 (Markets&Markets); first clinical guidelines on AI adopted by regulatory bodies (e.g., FDA, EMA). Risks: regulatory resistance, fears about health data privacy, and poor interoperability between hospital systems.
5–8 years — AI could take on a recognized institutional role, with certified agents contributing directly to the diagnostic and therapeutic process. Advanced telemedicine and fully personalized care will be governed by clear regulatory frameworks on liability and privacy protection. Indicators: percentage of diagnoses co-signed by AI in national health systems; share of digital health records managed with AI predictive modules (expected to exceed 50% by 2030, McKinsey Health Report). Risks: inequalities of access (a gap between high- and low-income countries), blind reliance on systems that are not always transparent, and potential resistance from professional groups.
12. Stories that are written as you read them
Reading has always been a linear experience: an author writes, a reader reads. With the arrival of AI, this paradigm could radically change: texts will no longer be the same for everyone, but dynamic, personalized books that no one else will ever read the same way. This idea has its roots in the gamebooks of the 1980s and 1990s, in which readers could choose different narrative paths: innovative but limited experiences, because the twists and endings were always predetermined by the author. With AI, however, the possibilities become virtually infinite: stories that write themselves as they are read, shaped by each reader’s choices, preferences, and even reading style.

A concrete scenario
Imagine Giulia and Lorenzo downloading the same AI book from the platform. The opening lines are identical: a young journalist moves to a new city, where a sudden blackout throws everything into chaos. For Giulia, a fan of mystery and action, the story immediately takes on the pace of a thriller: clandestine investigations, hackers at work, and a criminal network exploiting the blackouts to cover up illegal trafficking. The journalist becomes the protagonist of a race against time, filled with chases, suspicions, and political conspiracies. For Lorenzo, however, the plot transforms into a romantic drama: the blackout becomes the backdrop for an encounter with an unknown neighbor. The tension of the locked-down city brings out emotions, intimacy, and unexpected connections, leading the protagonist to experience a love entanglement she never imagined. In the end, Giulia and Lorenzo discover they have read two radically different stories, yet both coherent and engaging: the author had established the characters and context, while the AI had modulated the narrative genre and development according to each reader’s preferences.
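A sketch of the loop behind such a book might look like the following. The `ReaderProfile` fields, the prompt format, and the `llm_generate` stub are hypothetical stand-ins for whatever generative backend a platform would actually use; the point is only that the same premise plus a different profile yields a different continuation.

```python
from dataclasses import dataclass, field

def llm_generate(prompt: str) -> str:
    # Placeholder for a real text-generation model call.
    return f"[passage conditioned on: {prompt[:70]}...]"

@dataclass
class ReaderProfile:
    preferred_genre: str   # e.g., "thriller" or "romantic drama"
    pacing: str            # e.g., "fast" or "descriptive"

@dataclass
class StoryState:
    premise: str
    passages: list = field(default_factory=list)

def next_passage(state: StoryState, profile: ReaderProfile) -> str:
    """Condition the next passage on the shared premise, the recent
    story, and this reader's preferences, then store the result."""
    prompt = (
        f"Premise: {state.premise} "
        f"So far: {' '.join(state.passages[-3:])} "
        f"Continue as a {profile.preferred_genre} with {profile.pacing} pacing."
    )
    passage = llm_generate(prompt)
    state.passages.append(passage)
    return passage

premise = "A journalist arrives in a new city as a blackout spreads."
giulia = next_passage(StoryState(premise), ReaderProfile("thriller", "fast"))
lorenzo = next_passage(StoryState(premise), ReaderProfile("romantic drama", "descriptive"))
```

The author’s fixed contribution lives in the premise and character constraints; the reader-specific divergence comes entirely from the profile fed into each generation step.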
Socio-economic consequences
Such a leap would profoundly transform the publishing market and the cultural experience. On the one hand, reading would become more engaging and accessible, especially for new generations accustomed to the interactivity of video games. On the other hand, the collective dimension of the book as a shared experience risks dissolving: we will no longer read the same novel, but unique and unrepeatable versions. For publishing, it will mean redefining business models: from the sale of identical copies to the “pay-per-experience” model, personalized digital libraries, and interactive subscriptions. The role of the writer, however, remains central: no longer the author of every detail, but a narrative architect who establishes the universe, tone, and coherence, leaving the dynamic execution to AI.
Examples
Ongoing experiments show that publishing is moving in this direction, even if we’re still far from “infinite” storytelling. Startups like Inkitt and StoryFit use AI to predict book success, generate voices for audiobooks, or suggest personalized readings. Tools like AI Interactive Books enrich the texts with multimedia elements or quizzes, while experiences such as Inanimate Alice offer hybrid narratives with interactive minigames. These are interesting experiences, but they remain tied to limited and predetermined paths: true dynamic narrative generation is yet to come.
Time horizon
0–3 years — Growing diffusion of books enriched with light interactivity: multimedia elements (quizzes, dynamic images, audio links) and surface customizations based on reading preferences or style (e.g., a faster or more descriptive pace). Endings and structure, however, remain fixed and predetermined. Indicators: share of e-books with advanced interactive functions, today estimated at 5–7% of the global market (PwC, 2024); publishers increasingly experimenting with “enhanced e-books”. Limits: lack of interoperable standards, risk of fragmented experiences across different platforms.
4–6 years — Truly dynamic narratives, with plots and endings shaped by the reader’s choices and the data collected from their interactions. No more rigid branching paths, but stories generated in real time. Indicators: growth in dedicated AI-driven publishing platforms (currently fewer than 50 startups listed globally); first partnerships between traditional publishers and generative model providers. Risks: limited narrative coherence, licensing costs of the models, and authors’ skepticism about “creative delegitimization”.
6–8 years — AI will reach a maturity that ensures stylistic and narrative coherence in dynamically generated texts. Publishers will be able to distribute unrepeatable books at scale, never read the same way twice. Indicators: percentage of editorial catalogues with dynamic generation modules (expected to exceed 20% by 2032, Deloitte Media Report); growth in subscriptions to personalized reading platforms. Risks: loss of shared collective experiences (everyone reads different versions), difficulty in validating content (historical plausibility, scientific accuracy), and new ethical questions about the role of the author.
13. Your personal director
If books can become dynamic and personalized, why not films and TV series as well? It’s not far-fetched to imagine a future where every viewer can have their own version of an audiovisual work, tailor-made. AI is already capable of producing credible, coherent clips a few dozen seconds long; tomorrow, these technologies could extend to entire episodes or full-length feature films. Traditionally, cinema has been a passive medium, but this paradigm could also change: branching movies and series, with scenes, dialogues, and endings that adapt to the viewer’s choices, would transform viewing into an interactive experience.

A concrete scenario
Marta and Karim access the same AI cinema platform and choose a film with the same premise: a group of strangers are stranded at an airport due to a sudden storm. For Marta, a suspense enthusiast, the film unfolds like a psychological thriller: the passengers suspect someone is manipulating the entire event, the dialogue becomes tense, and the storm seems like the prelude to an orchestrated plot. For Karim, however, the same story transforms into a dystopian sci-fi: the airport becomes a government control hub, the storm is the result of failed climate experiments, and the passengers discover they are part of a large-scale social test. Surveillance and collective rebellion dominate the plot, culminating in a denouement that calls individual freedom into question.
Socio-economic consequences
The impacts would be enormous. For viewers, it would mean constantly changing content, personalized to their tastes and even their decisions. For the industry, it would mean the opportunity to drastically reduce production costs, accelerate creative cycles, and experiment with new business models, from subscriptions to dynamic pay-per-view experiences. From a cultural perspective, however, significant risks emerge: the collective knowledge of cinema, made up of shared works and iconic scenes discussed by all, could give way to individual and unrepeatable narratives, eroding the social function of cinema as a common language.
Examples
Some concrete signs are already visible. Tools like Runway and HeyGen allow the generation of video clips or realistic digital avatars, while Meta, in collaboration with Blumhouse, has presented a model capable of producing video sequences complete with coherent audio. Startups like Odyssey are experimenting with interactive 3D streaming environments, where viewers can move and influence the scene. Projects like Evertrail generate characters, dialogue, and settings in real-time based on audience interactions. Consumer tools like Canva, with its AI video generator, also offer a first glimpse of this revolution.
Time horizon
0–3 years — Generative technologies will find applications in short clips, teasers, advertisements, and all stages of editing and post-production (e.g., scene regeneration, automatic dubbing, and the insertion of digital avatars). Indicators: today, less than 10% of professional video content uses generative AI (PwC Media Outlook, 2024), but the growth trend is estimated to exceed 40% per year. Limits: inconsistent visual quality, difficulty maintaining narrative coherence, and high costs of generating long-form video.
3–5 years — The first interactive short episodes or films will appear, with alternative scenes and endings adapted to the viewer. Streaming platforms could offer personalized narratives in which choices (explicit or implicit, e.g., viewing patterns) influence the development of the story. Indicators: emergence of experimental AI-driven catalogs; first original productions on mainstream platforms. Risks: resistance from the creative industry due to fears of losing authorial control, and unresolved copyright questions around actors’ likenesses and screenplays.
5–8 years — Audiovisual works on demand will become reality, featuring films and series with branching plots, alternative endings, and reactive characters that evolve in real time. Cinema will go from a passive medium to an interactive and unique experience, closer to gaming than to traditional television. Indicators: estimated share of interactive content in total new productions (between 15–25% by 2032, McKinsey Media). Risks: loss of the collective dimension of the cinematic experience, concentration of platforms in a few global players, and possible inequalities of access linked to computing infrastructure costs.
14. Blockchain at the service of collective intelligence
Blockchain was created to certify processes, making data immutable, transparent, and verifiable without intermediaries. But its logic remains rigid: smart contracts, although called “intelligent,” only execute simple, deterministic instructions. Artificial intelligence represents the other side of the coin: flexible, predictive, and adaptive, capable of managing complex decisions, but opaque like a black box and vulnerable to bias and manipulation. The integration of the two technologies thus paves the way for a new paradigm: autonomous, trustless, and permissionless systems that are both intelligent and verifiable. In this model, blockchain certifies data and decisions, while AI provides the missing adaptive capability.

A concrete scenario
Imagine AuroraDAO (a fictional name), a global community that manages renewable energy projects. Its members are spread across the globe: a young engineer in Brazil, an environmental researcher in Kenya, and a small solar company in Germany. They all vote and participate in decisions via the blockchain, which certifies every proposal, vote, and transaction immutably and transparently. At the center, an AI agent acts as the “brain”: it analyzes data on climate, energy demand, and material prices, proposes concrete scenarios (e.g., “building three micro-wind farms in sub-Saharan Africa is now 20% more efficient than a solar farm in India”), and summarizes the pros and cons for the community. When members approve, the smart contract executes automatically: funds are moved, suppliers are selected, and milestones are tracked. The AI continues to oversee progress, adapting plans if unforeseen circumstances arise (a storm, a supply crisis, a regulatory change). In this model, governance is neither fully human nor fully algorithmic: it is an auditable fusion. AI brings the ability to analyze and predict, and blockchain ensures that no one can manipulate the rules or corrupt the processes. The result is a community capable of managing complex projects on a global scale with an efficiency and transparency impossible to achieve with traditional models.
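The division of labor in this scenario (AI proposes, humans vote, the chain certifies) can be shown with a toy example. The hash-chained `Ledger` below is a stand-in for a real blockchain, and the proposal text is invented; what matters is that every proposal, vote, and execution leaves a tamper-evident record, regardless of how the AI arrived at its recommendation.

```python
import hashlib
import json
import time

class Ledger:
    """Append-only, hash-chained record standing in for a blockchain."""
    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"payload": payload, "prev": prev, "ts": time.time()}
        h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**body, "hash": h})
        return h

def execute_if_approved(ledger: Ledger, proposal: dict, votes: list) -> bool:
    """Record the AI-generated proposal and every vote, then trigger
    execution only when a simple majority approves."""
    ledger.append({"type": "proposal", "text": proposal["text"]})
    for v in votes:
        ledger.append({"type": "vote", "approve": v})
    approved = sum(votes) > len(votes) / 2
    ledger.append({"type": "execution", "done": approved})
    return approved

ledger = Ledger()
proposal = {"text": "Fund three micro-wind farms (AI-ranked most efficient option)"}
print(execute_if_approved(ledger, proposal, [True, True, False]))
```

Altering any past block would change its hash and break every link after it, which is the property that makes the governance trail auditable even when the AI’s internal reasoning is not.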
Socio-economic consequences
The potential impacts are profound. On the one hand, the combination of AI and blockchain could drastically reduce the need for intermediaries, lower transaction costs, and foster more transparent, resilient, and accessible global markets. On the other hand, new challenges emerge: who will control the AI models powering such powerful systems? How can biases be prevented from becoming structural and immutable once recorded on-chain? And what role will governments and regulatory institutions have if economic or political decisions are made by autonomous and decentralized entities? This opens up fertile, but also risky, ground for redefining governance, trust, and the distribution of power.
Examples
Some prototypes are already emerging. In DeFi, AI systems dynamically adjust interest rates and liquidity while maintaining the transparency of on-chain transactions, as shown by AgileRate and the experiments discussed in Cointelegraph on AI-driven DeFi. Intelligent DAOs are experimenting with decision-making processes in which AI processes complex scenarios, while blockchain guarantees uncorrupted and verifiable executions — examples include the multi-agent approach of ISEK and the intent-based strategies of SuperIntent. In data marketplaces, blockchain certifies provenance and ownership, while AI extracts value through insights and predictions, as explored in AIArena and frameworks like opML. These are still early experiments, but they clearly point the way: infrastructures that combine adaptive intelligence and verifiable transparency.
Time horizon
0–3 years — We will see the first prototypes, especially in DeFi (dynamic rates, AI-regulated liquidity) and the supply chain (certified traceability + risk prediction). Indicators: an increase in the number of pilot projects integrating AI on-chain; first partnerships between startups and large logistics/financial companies. Limits: high on-chain computation costs, lack of shared standards, and risks of unverifiable bias in the models.
3–5 years — Hybrid platforms with autonomous governance could emerge: intelligent DAOs where AI processes complex scenarios and blockchain certifies decisions and votes. Indicators: first DAOs with deliberative AI; regulators starting to define dedicated policies. Risks: institutional resistance driven by fear of losing control, and vulnerability to attacks on models or input data.
5–8 years — The AI + blockchain convergence could turn into a new economic and institutional infrastructure: markets and organizations capable of making complex decisions without central hierarchies. The impact would extend to finance, energy, urban governance, and even local politics. Indicators: adoption of on-chain verifiable AI systems in at least 10–15% of large institutions (BCG estimate, 2032). Risks: concentration of power in model providers, opaque governance of those models, and the difficulty of technical audits at scale.
15. A tireless financial advisor
Finance has always been fertile ground for technological innovation, and AI is already changing the way we invest. Today, robo-advisors that build balanced portfolios and algorithmic trading systems move billions in the markets, but they remain the prerogative of banks and hedge funds. With the arrival of increasingly sophisticated agents, this barrier is breaking down: AI will be able to create personalized portfolios, adapt them in real time, and integrate unconventional signals such as social media or consumer trends. Previously exclusive tools will become accessible to everyone: a true democratization of investing.
If AI makes investing smarter, blockchain makes it transparent, verifiable, and intermediary-free. We can imagine funds managed by AI agents through public smart contracts, with immutable on-chain decisions, or DeFi protocols that dynamically regulate rates and liquidity. In this scenario, even small savers will be able to access complex logic without relying on banks or centralized funds.

A concrete scenario
Amanda, a freelance consultant, doesn’t have access to corporate pension plans and has always put off building a private pension, fearful of the sector’s complexity. So she decides to rely on an AI agent. The agent collects data on her variable income, recurring expenses, her savings habits, and the level of risk she’s willing to take. Based on this, it builds a personalized portfolio, diversified across long-term bonds, global ETFs, and a marginal portion of more dynamic assets. Each month, based on Amanda’s actual income, the agent decides how much to allocate to the fund, adapting her contributions without forcing her into rigid commitments. In times of market volatility, it automatically reduces exposure to risky assets, protecting the stability of her capital; when markets are more favorable, it gradually increases dynamic assets to maximize growth. As the years pass, Amanda doesn’t have to worry about studying complex charts or comparing dozens of financial products: the AI agent accompanies her every step of the way, sending her simple and clear updates and showing her a projection of her future pension. Thus, a problem that seemed unsolvable becomes a fluid, transparent process tailored to her professional life.
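A stripped-down version of the monthly decision the agent makes for Amanda could look like this. The savings rate, the base risky share, and the volatility scaling are arbitrary illustrative choices, not financial advice; a real advisor would estimate them from her income history and risk profile.

```python
def monthly_allocation(income: float, expenses: float,
                       market_volatility: float,
                       base_risky_share: float = 0.3) -> dict:
    """Split this month's surplus between safe and dynamic assets.
    The risky share shrinks as observed volatility (0-1) rises.
    All parameters are illustrative placeholders."""
    surplus = max(income - expenses, 0.0)
    contribution = surplus * 0.5  # save half of whatever is left over
    risky_share = base_risky_share * max(1 - market_volatility, 0.0)
    return {
        "contribution": round(contribution, 2),
        "bonds_and_etfs": round(contribution * (1 - risky_share), 2),
        "dynamic_assets": round(contribution * risky_share, 2),
    }

# The same income profile in a calm month versus a turbulent one
print(monthly_allocation(4200, 2900, market_volatility=0.1))
print(monthly_allocation(4200, 2900, market_volatility=0.7))
```

Because the contribution is derived from the actual surplus, irregular freelance income translates into irregular but sustainable contributions, which is the flexibility the scenario highlights.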
Socio-economic consequences
The convergence of AI and finance, whether centralized or decentralized, brings both opportunities and risks. On the one hand, it can democratize access to advanced tools, reduce information asymmetries, and increase transparency. On the other hand, critical issues emerge: the opaqueness of AI models (unexplainable decisions), volatility amplified by automatic reactions, data manipulation (false signals that mislead algorithms), the lack of protections for small investors, and the regulatory vacuum, with governments forced to regulate decentralized and difficult-to-control systems.
Examples
Several experiments are already underway. In the world of purely AI-driven financial advisors, Jump has raised $20M to empower financial advisors with intelligent workflows; PortfolioPilot helps investors monitor portfolios, optimize taxes, and receive personalized recommendations; More Wealth offers a robo-advisor that also tracks users’ psychological behavior; FP Alpha leverages AI to read complex financial documents and generate planning insights; Origin proposes a “personal AI financial advisor” integrated with budgeting and investments; Rebellion Research applies advanced quantitative models for investment recommendations; while Moneyfarm is an established European robo-advisor that builds diversified portfolios for small and mid-sized investors.
At the same time, hybrid AI + blockchain solutions are emerging: Sahara AI is developing a decentralized advisory platform that rewards users, data providers, and trainers; Roobee aims to democratize access to tokenized investments; SingularityNET is creating a decentralized marketplace for AI services, including wealth management and predictive analytics; startups like Allium combine intelligent queries with on-chain certification to analyze large volumes of data with applications in security and traceability; research projects like AgileRate propose dynamic interest rates in DeFi lending markets; while ISEK and SuperIntent are experimenting with multi-agent models and intent-based strategies for decentralized decision-making.
Time horizon
0–3 years — AI financial advisors will remain complementary tools: advanced robo-advisors, smart budgeting apps, and personalized savings agents. Indicators: robo-advisor market growth to over $3 trillion in AUM by 2027 (Statista, 2024); regulatory sandboxes emerging in the US, EU, and Asia to test AI-driven solutions. Limits: poor transparency of algorithms, difficulty in auditing, and a low level of trust among retail users.
3–5 years — The first always-on independent financial advisors will appear, capable of monitoring income and expenses, allocating funds, managing risk, and even proposing personalized pension plans. Indicators: percentage of retail investors using at least one AI advisor (McKinsey estimates more than 20% by 2030); first institutional adoptions in banks and insurance companies as automated advisory services. Risks: amplification of volatility due to automatic market reactions, possible manipulation of input data, and lack of protection for small investors.
5–8 years — Autonomous financial advice will become mainstream: personalized AI platforms will handle not only investments, but also taxation, retirement planning, and estate planning. Indicators: at least 30–40% of retail portfolios managed in AI-driven autonomous mode; first stringent regulatory requirements on explainability and legal liability of models. Risks: vulnerability to systemic crises triggered by emergent agent behaviors, and loss of pluralism in available financial strategies.
16. The Holy Grail of AI: Automating Research and Development
Of all the applications of artificial intelligence, the most ambitious is the automation of research and development (R&D). The idea is that intelligent machines will not simply assist scientists but will autonomously design experiments, formulate hypotheses, and engineer innovations. This prospect is often called the “holy grail” of AI because it requires a level of creativity comparable to that of humans, and whether that creativity can be replicated even in principle remains unknown.
What is certain, however, is that AI already plays a crucial role as a vertical assistant. In biology and medicine, it identifies correlations between genes and diseases and accelerates the design of new pharmaceutical molecules. In physics and engineering, it uncovers hidden patterns in experimental datasets and optimizes complex models. In mathematics, some models are already able to prove theorems or suggest new conjectures, opening up scenarios that were once the exclusive domain of human ingenuity. In the fields of energy and materials, it suggests innovative combinations for batteries, solar panels, or high-performance alloys.

A concrete scenario
On the campus of a European biotech, a small team works on research into new antibiotics against resistant bacteria. In the past, designing and testing each molecule required months of work and enormous resources. Today, however, their lab has become a hybrid human-AI ecosystem. At night, while researchers sleep, AI agents orchestrate the work of robotic arms and automated platforms: they design new molecules, digitally simulate interactions, select the most promising ones, and launch real-world microexperiments. In the morning, the team finds the results already on the table: dozens of rejected hypotheses and two or three candidates worthy of further investigation. The team no longer has to start from scratch, but instead focuses on validating and interpreting the best results, discussing the ethical, clinical, and commercial implications. The idea-test-validation cycle, which once took months, is reduced to a few days. This doesn’t eliminate the role of researchers, but rather transforms it: less time spent repeating routine experiments, more energy devoted to strategic questions, scientific choices, and ethical oversight. Thus, the promise of the “holy grail” of AI no longer appears as science fiction, but as a laboratory operating 24/7, capable of generating knowledge at a rate never seen before in the history of science.
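The overnight loop described here reduces to a design-simulate-select cycle. In the sketch below, `propose_candidates` and `simulate` are random stand-ins for a generative chemistry model and an in-silico assay; the structure, not the chemistry, is the point: generate widely, score cheaply, and hand only the top few candidates to expensive real-world experiments.

```python
import random

random.seed(42)  # reproducible toy run

def propose_candidates(n: int) -> list:
    """Stand-in for a generative model proposing molecule variants."""
    return [{"id": i, "params": [random.random() for _ in range(3)]}
            for i in range(n)]

def simulate(candidate: dict) -> float:
    """Stand-in for an in-silico binding/toxicity simulation."""
    return sum(candidate["params"]) / 3

def overnight_cycle(n_candidates: int = 100, top_k: int = 3) -> list:
    """One design-simulate-select iteration of a self-driving lab:
    generate many hypotheses, score them all, and keep only the best
    few for real-world microexperiments the next morning."""
    scored = [(simulate(c), c) for c in propose_candidates(n_candidates)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:top_k]]

for candidate in overnight_cycle():
    print(candidate["id"], [round(p, 3) for p in candidate["params"]])
```

The compression the scenario describes comes from running this loop many times per night: the researchers’ morning shortlist is simply the `top_k` survivors of hundreds of cheap simulated rejections.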
Socio-economic consequences
Automating R&D even partially means drastically reducing discovery and development times, shortening the idea-to-patent cycle, and reducing testing costs. Companies that adopt these tools first will gain enormous competitive advantages, capable of generating innovations in months rather than years. However, this acceleration risks concentrating power in the hands of a few players who control the most advanced platforms, raising barriers to entry for independent laboratories and less well-equipped countries. Globally, the geography of innovation could rapidly reshape itself, creating new economic and scientific polarizations.
Examples
The transformation is already visible. In biotech, Cradle Bio generates protein sequences with desired properties while reducing design-test cycles; in pharma, Eli Lilly launched TuneLab to put AI/ML drug discovery tools into the hands of small biotechs as well. The number of self-driving labs, which combine robotics (e.g., automated pipetting platforms like Opentrons) with AI models to autonomously design, schedule, and execute experiments, is growing. On the mathematical front, systems like Gödel-Prover prove theorems or propose testable conjectures. Projects from DeepMind and BioNTech aim to create real “laboratory assistants,” capable of monitoring instruments, predicting outcomes, and supporting experimental design.
Time horizon
0–3 years — AI will be the ubiquitous co-pilot of research: data analysis, target selection, simulations, and automation of repetitive experiments. In parallel, we will see the first reliable results in assisted theorem proving. Indicators: increase in scientific publications reporting the use of AI (already over 7% in Nature and Science, 2024); growth of the global AI market in R&D, estimated at $20 billion by 2026 (Allied Market Research). Risks: still “black box” models that are difficult to explain; difficulty standardizing AI-driven scientific protocols.
3–5 years — Semi-autonomous laboratories will emerge, capable of covering a significant portion of the R&D cycle: molecular and materials design, experiment setup and readout, and automatic iterations with increasingly standardized tools. Indicators: increasing partnerships between universities and biotech companies; growing number of self-driving labs funded by governments and VC funds. Risks: high integration costs, lack of regulatory frameworks for the use of sensitive data (especially in biotech and pharmaceuticals).
5–8 years — Automation could encompass most phases that are not strictly creative: hypothesis generation, testing, and accelerated validation. AI-driven pipelines will be adopted by leading companies in biotech, chemicals, materials, and engineering. Indicators: at least 30–40% of molecular discoveries attributed to AI-first processes; average drug development time reduced from 10–12 to 5–7 years (OECD, 2025). Risks: concentration of power in a few actors with access to advanced AI infrastructure; ethical dilemmas over ownership of discoveries made by systems that are not entirely human.
17. Political systems and governance with AI
Politics is one of the most sensitive areas in which artificial intelligence can be applied. Experiments are already underway: in Iceland, in 2023, an AI model was used to help draft a bill; in the United Kingdom and the EU, assisted legislative drafting trials are underway. At this stage, AI acts as an institutional consultant, a sort of digital think tank: it analyzes vast amounts of economic, environmental, and social data and recommends evidence-based policies — tasks that a human team could hardly complete quickly.

A concrete scenario
Imagine Diego, the mayor of a medium-sized city. Every year, he must decide how to allocate the municipal budget: public transportation, schools, local healthcare, and urban maintenance. In the past, the process was long and contentious: dozens of meetings, polarized opinions, pressure from lobbies and interest groups. With the introduction of an AI-powered deliberative platform, the picture has changed. Citizens express their priorities through an accessible digital system: some ask for more bike lanes, others for support for the elderly, and still others push for the digitalization of schools. The AI collects input, eliminates duplication and manipulative messages, synthesizes data, and generates different budget scenarios, each with clear pros and cons and simulations of their social and economic impact. Diego no longer receives a sea of raw opinions, but a structured map of community preferences, balanced with predictive analytics on the impact of decisions. The city council discusses the AI-generated options and makes the final decision with greater awareness and transparency. Citizens, for their part, can consult online the reasons why certain priorities were accepted and others postponed. The result is not a government “delegated to the machine,” but a more transparent, inclusive, and data-based political process: AI becomes a trusted mediator between citizens and institutions.
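The pipeline Diego benefits from (collect, normalize, tally, propose scenarios) can be caricatured in a few lines. Real platforms would use language models for semantic clustering, deduplication, and bot filtering; this sketch substitutes plain string normalization and two mechanical allocation rules so the structure stays visible.

```python
from collections import Counter

def tally_priorities(submissions: list) -> Counter:
    """Normalize and tally citizen inputs. A production system would
    cluster semantically similar requests and filter manipulation;
    lowercase string matching keeps the sketch simple."""
    return Counter(s.strip().lower() for s in submissions)

def budget_scenarios(priorities: Counter, budget: float) -> list:
    """Turn tallied priorities into alternative allocations the
    council can compare side by side: one proportional to demand,
    one split equally across all requested areas."""
    total = sum(priorities.values())
    proportional = {k: round(budget * v / total, 2) for k, v in priorities.items()}
    flat = {k: round(budget / len(priorities), 2) for k in priorities}
    return [
        {"name": "proportional to demand", "allocation": proportional},
        {"name": "equal split", "allocation": flat},
    ]

inputs = ["Bike lanes", "bike lanes ", "Support for the elderly",
          "School digitalization", "support for the elderly"]
for scenario in budget_scenarios(tally_priorities(inputs), budget=1_000_000):
    print(scenario["name"], scenario["allocation"])
```

Even in this caricature, the output is what the scenario calls a structured map of preferences: named scenarios with explicit trade-offs, rather than a sea of raw opinions.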
Socio-economic consequences
The introduction of AI into governance can improve efficiency and transparency, reduce discretion, and increase the predictive capacity of institutions. It can also foster more inclusive decision-making processes, aggregating opinions and synthesizing them without the distortions typical of polarized political debate. At the same time, significant risks emerge: a loss of democratic legitimacy, the perception of increasingly technocratic and citizen-disconnected governments, the potential for manipulation by those controlling data and models, and new forms of concentration of power. In the long run, if poorly designed, these systems could undermine trust in democracy; if well designed, they could strengthen it by offering more transparent and inclusive tools for participation.
Examples
Some experiments are already a reality. In Iceland, AI has been used for legislative drafting; the European Parliament is testing automated drafting and policy analysis tools; Taiwan has been using vTaiwan for years, a digital deliberative platform that could be enhanced by AI to synthesize citizen contributions. Software such as Polis, already in use in Seattle and Taiwan, aggregates and synthesizes public opinion, anticipating what more advanced generative systems will be able to offer.
Time horizon
0–2 years — AI will be used primarily as an analysis and drafting tool to support parliaments and ministries, with initial experiments in digital citizen engagement. Indicators: increase in institutions testing AI-assisted legislative drafting platforms (already adopted in Iceland and the EU); percentage of policy papers citing AI tools as a source of analytical support. Risks: poor transparency of algorithms, lack of shared standards to distinguish between technical support and political influence.
3–5 years — AI can be integrated into public consultation platforms, synthesizing collected opinions and transforming them into clear and actionable proposals. Indicators: growth of AI-powered digital deliberative platforms (e.g., Polis) adopted in cities or states; increased government budgets allocated to AI-based digital democracy systems. Risks: polarization if datasets do not represent all segments of the population; risk of manipulation of contributions (e.g., bots or orchestrated campaigns).
5–10 years — We could witness the birth of permanent AI-based advisory bodies, capable of proposing large-scale allocation policies or decisions. Indicators: number of governments institutionalizing AI-driven committees as part of the legislative process; percentage of policy proposals originating from AI-first platforms. Risks: loss of democratic legitimacy, concentration of power in those who control models and data, and regulatory resistance. The ethical and political debate will become increasingly heated: to what extent is it acceptable to delegate political decisions to algorithmic systems?
Conclusions
The future of artificial intelligence is not distant: it is already here — in searches that no longer return links but direct answers, in tutors that adapt to each student, in systems that generate software, assist in medical diagnoses, and even suggest public policies.
This transformation, however, is far from neutral. AI is a powerful accelerator of efficiency, knowledge, and economic growth. According to McKinsey (2023), its global impact could reach up to $4.4 trillion per year, equivalent to a 5–7% increase in worldwide GDP by 2030. Other studies, from the OECD and the IMF, confirm that the adoption of AI will deeply influence productivity across nearly all industrial sectors. Yet these projections, while striking, must be interpreted with caution: they are scenarios, not certainties.
Much of what has been described in this article — multimodal assistants, generative games, self-writing software, AI-driven finance — ranges across three levels:
- existing products and services that already shape daily life;
- emerging prototypes and pilot projects still confined to niche use;
- speculative visions that may or may not materialize in the proposed timeframe.
Recognizing the differences between these levels is crucial. Otherwise, we risk confusing what is happening today with what may happen tomorrow, and underestimating the technical, cultural, and regulatory hurdles that stand in the way. AI brings opportunities for democratization, accessibility, and efficiency — but it also raises real challenges:
- high computational and energy costs that weigh on sustainability;
- biases and opacity that can entrench inequalities;
- risks of power concentration in the hands of a few global providers;
- the possibility of social fragmentation, where personalized realities weaken the sense of shared knowledge.
The real game is unfolding now — not in the robots walking beside us, but in the invisible software that mediates how we read, invest, learn, and govern. Guiding this revolution requires clear choices:
- transparency — because we cannot entrust our future to black boxes;
- inclusion — because the benefits of AI must reach everyone, not just a few;
- education — because only an informed society can avoid blind dependence on algorithms.
Ultimately, the future is unwritten. AI will not only reflect human choices — it will amplify them. Whether it becomes a tool of emancipation or of inequality depends on us. The call is both simple and radical: not to be passive spectators, but active protagonists in a transformation already rewriting the rules of how we live, work, and decide together.