The Quest for Artificial General Intelligence (AGI): When AI Achieves Superpowers

Last Updated on December 11, 2023 by Editorial Team

Author(s): Nick Minaie, PhD

Originally published on Towards AI.

The field of Artificial Intelligence has seen tremendous progress over the past decade, yet achieving human-level intelligence remains the ultimate goal of many researchers. In this article, I will provide an overview of Artificial General Intelligence (AGI): hypothetical AI with the capacity to reason, learn, plan, and operate like humans. I will also describe the key characteristics that set general intelligence apart from today's narrow (or specialized) AI, along with the obstacles to achieving AGI according to current schools of thought. I then discuss the potential impacts on the adoption of AI in our societies as we move closer to developing more broadly capable AI systems.

Photo by Milad Fakurian on Unsplash

Creating Artificial General Intelligence (AGI), often referred to as human-level AI, is seen as the next significant breakthrough in the field of Artificial Intelligence (AI) and Machine Learning (ML). Despite remarkable advancements in Artificial Narrow Intelligence (ANI, aka specialized AI, or as I refer to it in this article, AI), which excels at specific tasks, replicating the diverse cognitive abilities and learning capacities of humans remains a challenge for the industry.

The current landscape of AI predominantly consists of specialized, narrow AI systems that refine individual skills through talent, substantial investment, and vast training datasets. Systems that handle different types of tasks are typically multi-modal, meaning several narrow AI systems work together, each handling its own piece of the problem (a minimal sketch of this pattern is shown below). Even in those cases, the skills are not transferable to new tasks unless new training is done.
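To make that pattern concrete, here is a minimal sketch of how several narrow models are often wired together into a "multi-modal" workflow. It assumes the Hugging Face transformers library, and the specific models and the helper function are illustrative choices of mine, not something described in this article:

```python
# Minimal sketch: two narrow models chained into one multi-modal workflow.
# Assumes the Hugging Face transformers library is installed; the model
# choices are illustrative, not prescribed by the article.
from transformers import pipeline

# Narrow model 1: vision -> text (image captioning).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Narrow model 2: text -> label (sentiment classification).
classifier = pipeline("sentiment-analysis")

def describe_and_classify(image_path: str) -> dict:
    """Chain the two specialists; neither one can do the other's job."""
    caption = captioner(image_path)[0]["generated_text"]
    sentiment = classifier(caption)[0]
    return {"caption": caption, "sentiment": sentiment}

# Adding a genuinely new capability (say, speech transcription) would mean
# bolting on and training yet another specialized model; the individual
# skills do not transfer.
```

Each component in this sketch is competent only within its own narrow task, which is exactly the limitation described above.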

This limitation highlights the difference between specialized AI and the envisioned AGI. While AI demonstrates remarkable task-specific achievements, it lacks the multi-faceted reasoning, decision-making, creativity, and knowledge transfer capabilities that humans demonstrate.

Challenges in Achieving AGI

The complexity of human intelligence, which evolved over millions of years to navigate a dynamic world and adapt constantly, presents the biggest obstacle to developing AGI. AI encounters hurdles in replicating several aspects of human cognition, including:

Contextual Reasoning: Humans integrate past experiences and learned knowledge to make sense of ambiguous or sparse information and complex situations, and this is where AI struggles. While humans can extrapolate from known information to fill gaps, AI systems often lack the broad world knowledge and the capacity for flexible reasoning this process requires, which weakens their decision-making abilities.

Common Sense: This critical aspect of human cognition covers the ability to navigate everyday situations by balancing known and unknown factors. Humans rely on years of observation of the physical and social world, which helps them make decisions based on contextual cues and experience. Teaching AI systems to incorporate common sense for sound decision-making remains an ongoing challenge, since they lack the innate situational awareness that humans have.

Communication: The way humans communicate remains a significant hurdle for AI. While language serves as a natural and intuitive interface for humans, AI systems often struggle with understanding elements like irony, humor, sarcasm, and cultural references deeply embedded in human conversations. Achieving human-level language skills requires understanding the intricacies of human communication.

Creativity: Another fundamental aspect of human cognition is the ability to connect ideas and generate novel concepts or solutions. Human creativity often leads to innovative breakthroughs by synthesizing diverse information, often from different specialized fields. Presently, AI systems primarily replicate or modify existing ideas without demonstrating genuine innovation, highlighting the difficulty of replicating human creativity. For example, generative AI for vision can produce new images based on the images it was trained on. Some may argue this is a creative process, but is it truly? When will an AI system come up with an idea so novel and creative that humans have never seen it before, for example, a completely new style of painting like those Van Gogh or Picasso created?

These complexities embedded within human cognition manifest naturally through human development but pose significant challenges for AI systems. As researchers and developers work toward AGI, understanding and addressing these gaps in cognitive functionality remains instrumental for narrowing the distance between AI and human intelligence.

Different Schools of Thought on Achieving AGI

There is currently no consensus on when AI can achieve human-like intelligence. Different leaders in academia and industry have proposed different theories, which I explain below. However, we will have to wait and observe what actually unfolds.

Organic AGI: Some AI leaders anticipate that AGI will organically evolve by enhancing existing AI methods like deep neural networks. They suggest that with substantial data, neural network scalability, and advancements in computing power (e.g., new, more powerful chips), AGI might emerge through current AI research without the need for any new AI architectures. This perspective envisions AGI as an outcome of specialized AI evolving to higher levels.

Multi-modal AGI: Another school of thought, close to Organic AGI, believes that combining different AI approaches is the key to achieving AGI. While deep learning has been transformative in tasks like image recognition, proponents argue that achieving more flexible learning may require leveraging various techniques such as graph networks, knowledge bases, and causal inference models. A coordinated system of diverse models tailored for specific tasks might pave the way to AGI (a toy sketch of this coordination pattern appears after this list).

Fundamental Gaps: Other AI leaders believe that significant gaps remain in fundamental cognitive aspects, such as reasoning, knowledge representation, memory, and common sense, which current AI struggles to handle. They suggest that groundbreaking advances in new areas of AI are required to move us toward building AGI.
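As a purely illustrative sketch of the coordinated-system idea from the multi-modal school of thought, the following toy Python code routes each task to a specialized component. Every component and name here is a hypothetical stand-in of mine, not an actual AGI building block or anything proposed by the researchers mentioned above:

```python
# Toy illustration of a "coordinated system of diverse models": a simple
# router dispatches each task to the specialist registered for its type.
# All specialists are hypothetical stand-ins for real models.
from typing import Callable, Dict

def vision_specialist(task: str) -> str:
    return f"[vision] analysis of: {task}"

def knowledge_base_specialist(task: str) -> str:
    return f"[knowledge base] facts retrieved for: {task}"

def causal_specialist(task: str) -> str:
    return f"[causal model] estimated effect of: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "perception": vision_specialist,
    "recall": knowledge_base_specialist,
    "reasoning": causal_specialist,
}

def coordinate(task_type: str, task: str) -> str:
    """Send the task to whichever specialized model handles this task type."""
    specialist = SPECIALISTS.get(task_type)
    if specialist is None:
        raise ValueError(f"No specialist registered for task type '{task_type}'")
    return specialist(task)

print(coordinate("reasoning", "raising interest rates"))
```

The point of the sketch is only the shape of the architecture: many narrow components behind one coordinator, rather than a single model that reasons about everything.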

When Should We Expect AGI?

There is no consensus on this topic, and you can get different answers depending on whom you ask and which school of thought they fall into; estimates range from years to decades or even centuries. In a November 2023 talk at the New York Times' annual DealBook Summit, Nvidia CEO Jensen Huang said that such an evolution could be here soon, maybe in 5 years or so. That is certainly a bold claim! With recent advancements in computing chips, such as those introduced by Microsoft, Amazon, and NVIDIA, and billions of dollars of investment pouring into AI research (such as Microsoft's $10B investment in OpenAI, the maker of ChatGPT, or Amazon's $4B investment in Anthropic), we should expect interesting and groundbreaking advancements in AI in the coming years, pushing us inches closer to achieving AGI. Whether we get there in 5 years or 50 is a question only time can answer.

The Global Pursuit of Safe and Fair AI

Discussions of AGI invariably raise concerns about the safety, security, and fairness of AI systems, and about how AGI could harm humans if it is not developed and supervised responsibly. Governments worldwide and ordinary people alike want AI to be used safely, fairly, and responsibly so that individuals are protected from potential risks. Governments have been working on regulatory frameworks and guidelines that address AI's ethical implications and potential threats. For instance, in 2021, the European Union introduced the AI Act, with which the EU Parliament wants to ensure that AI systems deployed in the EU are safe, transparent, traceable, fair, and environmentally friendly. It holds that humans should oversee AI systems (and that's the key!) rather than relying solely on automation, to avoid potential harm to people. Across the Atlantic, the United States has also proposed various initiatives, such as the National AI Research Resource Task Force, to advance AI research and innovation while prioritizing safety and fairness. In addition, in October 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence that established new standards for AI safety and security. However, specific global standards or universally accepted regulations for AI safety and fairness are still evolving, with ongoing discussions among policymakers, experts, and stakeholders, such as the 2023 Global AI Safety Summit in the UK.

Closing Thoughts…

The pursuit of AGI demands a combination of interdisciplinary effort, innovative approaches, and continuous advancement in AI and ML technologies. Policymakers, industry leaders, researchers, and ethicists must collaborate to navigate the ethical implications, societal impacts, and regulatory frameworks surrounding AGI development. Ultimately, responsible and collaborative progress toward AI systems that are more transparent, safer, fairer, and capable of cooperative interaction with humans can bring substantial benefits, such as new drug discoveries, technological advancements, and creative arts, to name a few. Though the path to AGI may be difficult, the prospect of what humans and machines could accomplish together makes it a pursuit worth undertaking.


Published via Towards AI
