How Will ChatGPT and LLMs Accelerate Paths to AGI?
Last Updated on July 17, 2023 by Editorial Team
Author(s): Koza Kurumlu
Originally published on Towards AI.
About a year ago, I wrote an article summarizing Nick Bostrom's book "Superintelligence", which essentially laid out possible paths to artificial general intelligence (AGI). When that book was published, and also when I wrote my summary, LLMs such as GPT-3 were nowhere near as impressive as they are today and certainly weren't attracting the same attention. Therefore, these models weren't considered a possible route to a singularity.
However, now that OpenAI has released GPT-4 and we are starting to see human-like behavior in text-based domains, these LLMs may prove to be a powerful tool on the road to AGI. Before I start, it's important to note that I believe language models won't reach AGI by themselves and will instead act as an aid; check my article here.
In the previous article, these were the routes Bostrom explored:
- Whole brain emulation
- Biological cognition
- Brain-computer interfaces
- Networks and organizations
So in this article, I will dive into each one once again and suggest how GPT-4 could accelerate progress along it. If you want to read in-depth about each strategy separately, check this article.
Whole brain emulation
In this method, intelligent software would be produced by scanning and closely modeling the computational structure of a biological brain, then running that model on computer hardware.
The role of LLMs in this approach could be twofold. Firstly, LLMs could assist in the interpretation and analysis of the vast amounts of data that come from scanning and digitally recreating a brain. This makes sense because language models are well suited to modeling sequential trains of thought, although Graph Neural Networks could be a close competitor for the same task.
Secondly, LLMs could assist in the control and operation of a virtual brain once it has been created. With their ability to understand and generate human-like text, they could act as an interface between human operators and the emulated brain, helping to translate the brain's computations into understandable outputs and vice versa.
Biological cognition
This method would enhance the intelligence of human beings themselves. In theory, superintelligence doesn't require a machine; it could be achieved through selective breeding at the gamete level, or even via education.
LLMs like GPT-4 could be used as sophisticated learning tools to augment human cognition. For instance, such models could be used to design personalized education plans that optimize learning for individual students or to create interactive tutorials in complex subjects, thus effectively augmenting human cognitive abilities.
Furthermore, LLMs could also assist in the research and development of biological cognition. For example, they could be used to analyze genomic data or to generate hypotheses for how different genetic factors might influence intelligence. They could also be used to model the potential outcomes of different genetic modifications, helping to guide research in this field.
Brain-computer interfaces
Brain-computer interfaces (BCIs) are another path towards AGI. In this method, a device is implanted into the brain to allow direct communication between the brain and a computer. LLMs could greatly enhance the utility and effectiveness of BCIs.
LLMs could be used to interpret the brain's signals and translate them into commands for a computer, or to take the computer's outputs and convert them into signals that the brain can understand. In addition, LLMs could help to personalize and optimize the functioning of BCIs. For instance, they could learn the unique patterns of an individual's brain activity and adjust the BCI's operation to match. This could make BCIs more efficient and user-friendly, potentially accelerating their development and adoption.
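To make the "translate signals into commands" step concrete, here is a deliberately simplified sketch. It is not a real BCI pipeline (actual decoding involves signal processing and trained neural decoders); the commands, feature vectors, and calibration data below are all hypothetical, and the point is only to illustrate the idea of mapping per-user signal patterns to commands.

```python
import math

# Toy illustration: map a simulated "brain signal" feature vector to a
# command via nearest-neighbor lookup against per-user calibration data.
# All names and numbers here are invented for illustration.
calibration = {
    "move_left":  [0.9, 0.1, 0.2],
    "move_right": [0.1, 0.9, 0.3],
    "select":     [0.4, 0.4, 0.9],
}

def decode(signal):
    """Return the command whose calibration vector is closest to the signal."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(calibration, key=lambda cmd: dist(calibration[cmd], signal))

print(decode([0.85, 0.15, 0.25]))  # closest to "move_left"
```

The "personalize and optimize" idea from the paragraph above corresponds to updating the calibration table for each user; a learned model would play that role in practice.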
We are already seeing remarkable progress in this field. Elon Musk's Neuralink has been tested on animals and has recently received FDA approval for human trials.
This method may seem similar to whole-brain emulation. However, here there is an interface, meaning a human is still involved rather than an artificially generated entity: an actual person connected to a computer versus an artificially composed brain in the cloud.
Networks and organizations
This method explores a way of reaching superintelligence via the gradual enhancement of networks and organizations. In simple terms, this means linking together various bots to form a sort of collective superintelligence. This wouldn't increase the intelligence of any single bot; rather, the collective would reach superintelligence.
LLMs could play a crucial role in this approach by acting as the "glue" that binds different AI systems together. With their ability to understand and generate human-like text, they could facilitate communication between different AI systems, or between AI systems and their human operators. This could allow for greater coordination and cooperation among different AI systems, potentially leading to the emergence of a collective superintelligence.
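The "glue" role described above can be sketched as a simple router that decides which specialist system should handle each request. This is a hypothetical toy: the keyword-matching `route` function stands in for an LLM's routing decision, and the agent names and behaviors are invented for illustration.

```python
# Toy sketch of an LLM acting as "glue" between specialist systems.
# Each agent is a stub; in a real network these would be separate AI systems.
AGENTS = {
    "math":   lambda task: f"math agent solved: {task}",
    "search": lambda task: f"search agent retrieved: {task}",
}

def route(task):
    """Stand-in for an LLM routing decision: pick a specialist by keyword."""
    if any(word in task.lower() for word in ("sum", "integral", "solve")):
        return "math"
    return "search"

def collective_answer(task):
    """Dispatch a task to the chosen specialist and return its output."""
    return AGENTS[route(task)](task)

print(collective_answer("solve x + 2 = 5"))
```

In a real system the router would also merge or relay results between agents, which is where an LLM's ability to translate between different systems' outputs would matter most.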
Moreover, LLMs could also assist in the management and organization of these networks. For instance, they could be used to monitor the performance of different AI systems, identify bottlenecks or inefficiencies, and suggest improvements.
In conclusion, while LLMs like GPT-4 may not lead directly to AGI, they could significantly accelerate its development by aiding in the progress of the various paths to AGI. As these models continue to improve and evolve, their potential contributions to the pursuit of AGI will likely only grow.