The Rise of AI: A Look at the 2022 Landscape
Last Updated on December 13, 2022 by Editorial Team
Author(s): Salvatore Raieli
Originally published on Towards AI, the world’s leading AI and technology news and media company.
Innovation and disruption: a look back at what happened in AI in 2022
December brings to a close a spectacular year in the world of AI, one that saw the release of incredible models and numerous new applications.
Over the past five years, both research on and use of AI have exploded. Stanford’s AI Index report shows that in 10 years the number of scientific articles about AI has doubled (and the publishing pace seems to be accelerating). In a McKinsey survey, 50 percent of respondents said their organization had adopted AI. In addition, investment and patents have grown steadily over the years.
This article is a brief summary of the innovations in AI, the trends that have emerged, and a bit of what lies ahead in the future.
In early 2022, DALL-E stunned the world with its ability to create images from a simple text prompt. It looked as if it would remain state-of-the-art for a long time; instead, within months, Google released two new models (Imagen and Parti). Today we can say that the real game-changer was Stable Diffusion: because its source code was released, it has been integrated into several applications (there is even a Photoshop plugin built on it).
Not long after the release of DALL-E, META, Google, and other companies released models that turn text prompts into videos. Research also continues on generative music. We can say that 2022 was an explosive year for generative AI, and we can expect new models (especially improved text-to-video and generative music).
- Microsoft’s Museformer: AI music is the new frontier
- Unleashing the Power of Generative AI: the Definitive List
AI and science
In 2022, Google released Minerva, a model capable of solving scientific problems: it could answer various math, science, engineering, and machine-learning questions. Meta has also invested in the field with Galactica, and we are likely to see more such models.
These models solve problems that have already been solved; suggesting new hypotheses is much more difficult. That is why DeepMind’s AlphaTensor has attracted the attention of researchers: using reinforcement learning, it suggested, for the first time in fifty years, a new (and faster) way to multiply matrices.
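To give a flavor of what a "faster way to multiply matrices" means, here is a minimal Python sketch of Strassen’s 1969 algorithm, the kind of scheme AlphaTensor searches for (and, for some matrix sizes, improved upon). It trades the naive 8 multiplications for 7, a saving that compounds when applied recursively to large matrices. This is an illustrative sketch of the classic algorithm, not AlphaTensor’s own output:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications instead of 8
    (Strassen, 1969). Applied recursively to matrix blocks, this beats the
    naive O(n^3) algorithm; AlphaTensor discovers decompositions like this
    one automatically."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]
    # The seven Strassen products.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine into the result matrix.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

AlphaTensor framed the search for such recombinations as a single-player game and found, among others, a 47-multiplication scheme for 4x4 matrices over certain fields, beating Strassen’s 49.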
The research does not stop at mathematics: DeepMind has teamed up with researchers in Lausanne to design a model that helps stabilize plasma in nuclear fusion. This shows how AI can help solve complex problems, and we will probably have an AI research assistant soon.
- Google’s Minerva, solving math problems with AI
- DeepMind’s AlphaTensor: The AI That Is Reinventing Math
AI and biological science
In 2021, scientists were surprised by AlphaFold2’s ability to predict protein structure from its sequence. This was an extremely complex problem with important practical implications (medicine, drug discovery, agriculture, pollution control, and so on).
AlphaFold2 is a sophisticated model that requires large computational resources. Meta (ESMFold) and Salesforce (ProGen2) showed that language models can produce predictions with fewer computational resources and in less time. In the coming years, we will see how these models advance research: many start-ups have sprung up this year intent on designing proteins from scratch, using these models to improve drug discovery, and so on (even large pharmaceutical companies are investing in the field).
In addition, OpenCell has shown how AI enables a better understanding of protein localization. Another interesting work used machine learning to identify new antibacterial molecules (however, as King’s College researchers demonstrated, the same techniques can be used to generate biochemical weapons). All of this shows an increase in the use of AI in biomedical science (as a word of caution, machine learning is not always used properly).
- Speaking the Language of Life: How AlphaFold2 and Co. Are Changing Biology
- Meta’s ESMFold: the rival of AlphaFold2
A new phase in the use of AI in video games
In recent years, DeepMind has shown how an agent can easily beat humans at games (Go, chess, and so on). This is not research for its own sake: video games provide a playground for tasks that can then be translated to the real world.
For example, OpenAI used Minecraft as a testbed for computer-using agents. META’s CICERO, meanwhile, demonstrated not only that it could play a game but also that it could collaborate with humans in a strategy game. This work marks a new phase of research, more oriented toward practical applications and toward models that may one day interact with people (customer service, non-player characters in video games).
On the other hand, CICERO also opens up disturbing ethical issues: in order to win the game, the agent had to convince other players to take action at the expense of their chance of winning. Similar models could be developed for scamming and other dangerous applications.
Code research, a new frontier
This year saw the release of GitHub Copilot (based on OpenAI’s Codex). On the same front, DeepMind’s AlphaCode has shown human-like performance in competitive programming. Other companies are pointing in the same direction (Salesforce’s CodeGen, Huawei’s PanGu-Coder), showing that the trend is growing. The recent ChatGPT is also capable of producing code on demand.
As some experiments show, this code is not always reliable. Moreover, the programming community has not welcomed the arrival of models trained on code scraped from GitHub: two lawsuits are pending against GitHub Copilot, and their outcome could be disruptive (after all, AI art models have likewise been trained on artists’ work scraped from the internet).
In any case, we can expect more such models. We probably will not see AI replacing data scientists for a few more years, but data scientists will soon be writing code alongside AI assistants.
Institutions are waking up: AI regulations are coming
In both the United States and the European Union, lawmakers are moving toward regulating AI. The new EU law is expected next year (drafts already circulating give a sense of its direction).
On the other hand, institutions have realized that research is concentrated in the hands of industry and a few big players, so in the near future we can expect institutions in both Europe and the United States to invest in university AI research. Not to mention new investments and regulations around semiconductors, chips, and other strategic materials.
Safety has been at the center of the debate
As the UK’s national strategy for AI signaled in late 2021, safety would be at the center of the debate in 2022 and the years to come. In a 2022 survey, most researchers (69 percent of respondents) said they considered AI safety a serious concern. The number of researchers working in the field has also grown, even though it remains underfunded.
The debate has also touched several companies. DeepMind has built a model that tests other models to check whether they exhibit unsafe behavior. HuggingFace, for its part, has invested in federated learning, an approach that protects data privacy. Other notable works have focused on agent behavior in reinforcement learning.
Consequently, the explainability of AI models has become increasingly relevant and is now an active field of research.
Other emerging trends
Briefly, here are other interesting trends and considerations about last year:
- This year marks five years of Transformers, and it is also the year in which Transformers were shown to replace RNNs and LSTMs in most articles and applications. Transformers are now used in many different domains. However, the contest between convolutional neural networks and vision transformers is still open.
- Diffusion models have proven interesting not only for text-to-image generation but also for other applications.
- Multimodal models are the new scene: Google, META, and other companies are trying to build models that can work across different domains. The question is whether we will use one transformer for all tasks: DeepMind, with Gato, believes so.
- Large language models (LLMs) are empowering robots to execute instructions. Google used PaLM to show how an LLM can produce detailed instructions for robots and increase the likelihood that those actions are executed successfully.
- OpenAI’s scaling laws look outdated, or “bigger is not always better.” DeepMind’s Chinchilla shows that the amount and quality of training data are perhaps more important than the number of parameters. Moreover, as parameter counts grow, emergent properties appear that are not fully understood and raise concerns.
- The rest of the community is more active than ever, cloning or even improving models produced by DeepMind and OpenAI. An example is BLOOM, a large multi-institutional effort.
- Other interesting reads of the year: the three eras of computing in machine learning, Bootstrapped Meta-learning, A Commonsense Knowledge Enhanced Network, LaMDA: Language Models for Dialog Applications, A Path Towards Autonomous Machine Intelligence, Self-Supervision for Learning from the Bottom Up, and Why do tree-based models still outperform deep learning on tabular data?
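The Chinchilla point in the list above can be made concrete with a back-of-the-envelope sketch. Two widely cited simplifications of the paper’s findings (rules of thumb, not its exact fitted laws) are that a compute-optimal model should see roughly 20 training tokens per parameter, and that training compute is roughly 6 FLOPs per parameter per token:

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    # Rule of thumb distilled from the Chinchilla paper:
    # a compute-optimal model trains on ~20 tokens per parameter.
    return n_params * tokens_per_param

def training_flops(n_params, n_tokens):
    # Common approximation: ~6 FLOPs per parameter per training token.
    return 6 * n_params * n_tokens

# Chinchilla itself: 70B parameters trained on ~1.4T tokens,
# versus Gopher's 280B parameters on ~300B tokens at similar compute.
print(chinchilla_optimal_tokens(70e9))  # 1.4e12 tokens
```

By this heuristic, GPT-3 (175B parameters trained on roughly 300B tokens) was heavily under-trained for its size, which is exactly why "more data, fewer parameters" became the takeaway of 2022.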
In this article, I have tried to condense some of the most interesting trends that emerged this year. For the sake of brevity, I have not cited many other interesting articles. What do you think? Are there other works that deserve mention? Have you noticed other trends? Let me know in the comments.
If you found this article interesting:
Here is the link to my GitHub repository, where I am planning to collect code and many resources related to machine learning, artificial intelligence, and more.
Published via Towards AI