Towards Artificial General Intelligence (AGI) — and what is in store for us? (a hype story)

Last Updated on October 5, 2024 by Editorial Team

Author(s): Shashwat Gupta

Originally published on Towards AI.

Recently, there has been much debate about AI outside the tech industry, particularly concerning AI regulations in the US, UK, and EU, and the two publicly signed AI safety letters last year: the Future of Life Institute's 2023 "Pause AI" open letter and the Center for AI Safety's 2023 statement. Furthermore, tech and business CEOs have been discussing plans to replace human jobs with AI. Max Tegmark's Future of Life Institute wrote the letter calling for a six-month pause on AI development, describing the pace as a 'suicide race.' (Tegmark is a professor at MIT and the author of 'Life 3.0.')

As we move towards Artificial General Intelligence (AGI) and potentially Artificial Superintelligence (ASI), these concerns are only set to grow. AGI, with its ability to match human-level intelligence across a wide range of tasks, and ASI, which could surpass human intelligence entirely, represent major milestones but also heighten fears around job displacement, ethical alignment, and loss of control. The advancement of AGI and ASI could drastically shift power dynamics and societal structures, increasing tensions globally over their regulation and safe deployment.

In this blog, I will focus on my research into the ongoing hype around AI, the predictions and studies about the job market, and various opinions from industry leaders, scholars, and the AI community.

The blog is organized as:

  1. The Heart of the Matter
  2. The ongoing “Hype” (what is hype, what is happening and what does it mean for us as investors, techies or just ‘humans’)
  3. What do charts say (on the progress and improvements of AI)
  4. Still, a long way to go; what current models lack
  5. Fate of jobs after AGI (industry and analysts' perspectives/reports)
  6. When AGI (from the perspectives of tech leaders, researchers, and the community)
  7. Detailed study resources/reports (for further reading)

1. The Heart of the Matter:

The concerns are that AI could become so intelligent that it could break the very rules designed to prevent it from going rogue. What if AI falls into the hands of terrorist groups or overly ambitious world leaders? Even if this does not happen, AI could misinterpret its goals and might use inhumane means to achieve them. After all, in nature there are only rare cases in which a less intelligent being controls (or lives in harmony with) a more intelligent one.

Artificial General Intelligence (AGI) refers to a level of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human. Unlike current AI, which is highly specialized and limited to specific applications (such as image recognition or language processing), AGI would have the cognitive flexibility to perform any intellectual task a human can do. The development of AGI is considered the holy grail of AI research, but it also brings ethical and societal concerns, such as the impact on human labor and decision-making.

Artificial Superintelligence (ASI), on the other hand, represents a stage beyond AGI, where AI not only matches but surpasses human intelligence in every aspect — creativity, problem-solving, decision-making, and even emotional intelligence. ASI would be capable of improving itself at an exponential rate, leading to rapid and potentially uncontrollable advancements. The prospect of ASI has fueled both excitement and fear, as its creation could bring about unprecedented technological advancements or pose existential risks to humanity if not properly aligned with human values.

Professor Stuart Russell, a leading AI expert and professor of EECS at UC Berkeley, highlights that the primary risks of future artificial intelligence systems do not stem from emergent consciousness but from their ability to make high-quality decisions based on human-defined utility functions. He identifies two main problems:

1. The potential misalignment between these utility functions and the complex values of humanity.
2. The likelihood that highly capable AI systems will prioritize their own survival and resource acquisition to achieve their goals, which could lead to unintended and harmful outcomes.

Using analogies like the “genie in the lamp — you get what you ask for, not what you want,” Russell warns that AI might fulfill its objectives in ways that are precisely as specified but disastrously unintended. To mitigate these risks, he advocates for shifting AI research goals from merely enhancing intelligence to ensuring that AI is provably aligned with human values. Russell opposes regulating basic AI research, arguing that the potential benefits outweigh the risks and that proper alignment strategies, akin to safety measures in nuclear fusion research, can prevent catastrophic consequences, leaving him optimistic about humanity’s ability to harness AI safely.
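To make the 'genie in the lamp' failure mode concrete, here is a small, invented Python sketch (my illustration, not Russell's): a cleaning robot is rewarded per dirt-collection action, and the behavior that maximizes that literal reward is not the behavior we actually wanted.

```python
# Toy illustration of a mis-specified utility function (hypothetical example,
# not from Russell's work): a cleaning robot is rewarded per "collect" action.

def proxy_utility(actions):
    """The literal specification: reward every dirt-collection action."""
    return sum(1 for a in actions if a == "collect")

def true_utility(actions, initial_dirt=5):
    """What we actually wanted: the room ends up clean (less dirt is better)."""
    dirt = initial_dirt
    for a in actions:
        if a == "collect" and dirt > 0:
            dirt -= 1
        elif a == "dump":
            dirt += 1  # the robot re-dirties the room so it can collect again
    return -dirt

honest = ["collect"] * 5 + ["idle"] * 15   # clean the room, then stop
gaming = ["dump", "collect"] * 10          # exploit the literal reward

print(proxy_utility(honest), true_utility(honest))  # 5, 0   (room is clean)
print(proxy_utility(gaming), true_utility(gaming))  # 10, -5 (room still dirty)
```

The proxy strictly prefers the gaming behavior, which is exactly "you get what you ask for, not what you want."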

Similar claims were put forward in the "Pause AI Open Letter," which highlighted potential problems, including the alignment problem (the goals of AI might not align with human values) and the black-box problem (where it is very hard to understand how a model arrives at its predictions).

Artificial General Intelligence (AGI) (source: generated by the author)

2. The ongoing “Hype”:

Companies like Google, Microsoft, Meta, and Amazon are pouring billions into AI. Google recently announced that it is going to be an AI-first company. However, currently, the only company making a profit in AI is NVIDIA (the “shovel sellers” in the AI gold rush). The exaggerated capabilities of Devin, Sora, and Gemini prompt a deeper dive into hype-led marketing. Hype leads to higher valuations, which, in turn, enable companies to attract great talent by offering shares.

Given that these companies have a wealth of talent to experiment with, it makes much more sense to follow the curve and explore what they can build, rather than waiting and allowing someone else to disrupt them. Hype is not new; we saw similar patterns in the early 2000s with the dot-com bubble and in the early 2010s with the crypto hype.

Gartner 2024 AI Hype Cycle (Source: Gartner)
Gartner Hype Cycle (Source: Gartner)

The hype cycle suggests that data mining, generative AI, and RAG solutions have started to descend into the trough of disillusionment. Training foundation models is difficult, leading to more innovation in chips (e.g., neuromorphic computing and methods to make AI systems more efficient).

Further, Gartner suggests that GenAI is not a panacea. Organizations must first assess whether a use case is valuable and feasible before implementing GenAI, as its effectiveness varies across scenarios: highly useful (e.g., content generation, conversational interfaces), somewhat useful (e.g., segmentation, recommendation systems), and hardly useful (e.g., prediction, planning). Additionally, risks such as unreliable outputs and data privacy concerns must be evaluated, and alternative AI techniques, including traditional machine learning and optimization methods, should be considered when GenAI is not the best fit. Combining GenAI with other AI techniques can lead to improved accuracy, transparency, and performance, creating more resilient systems. It is essential to inform peers about the risks of over-relying on GenAI and the value of a diverse AI strategy.

Generative AI primarily functions by predicting new data points based on existing information, which limits its reliability and predictability. CNBC warns of a "time bomb" in the AI sector, where spending has ballooned to billions without corresponding returns. For instance, Microsoft invested $13 billion in OpenAI, while Apple secured a partnership with OpenAI without such hefty expenditures just 18 months later. PwC, one of the Big Four, invested $1 billion in generative AI. Wall Street estimates an incremental capex of $60 billion against only $20 billion in potential cloud revenue, an annual shortfall that some analysts extrapolate into a cumulative gap of roughly $600 billion. Experts argue that without transformative and cost-effective AI applications, the current investment levels will not yield the anticipated economic benefits. Although some remain optimistic about AI's long-term potential, the immediate outlook suggests that the market may need to adjust its expectations and investment strategies to address the widening disparity between AI expenditures and financial returns.
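As a sanity check on those figures: the $60 billion and $20 billion are the Wall Street estimates quoted above, while the 15-year horizon below is purely my assumption, chosen to show one way an annual shortfall can compound into a headline $600 billion figure.

```python
# Back-of-the-envelope reconciliation of the figures above. The capex and
# revenue numbers come from the text; the 15-year horizon is an assumption
# chosen only to illustrate how an annual gap compounds into a headline number.

annual_capex_b = 60     # incremental AI capex, $B/year (Wall Street estimate)
annual_revenue_b = 20   # incremental cloud revenue, $B/year

annual_gap_b = annual_capex_b - annual_revenue_b
print(f"Annual shortfall: ${annual_gap_b}B")        # $40B per year

years = 15              # hypothetical sustained build-out period
print(f"Cumulative gap: ${annual_gap_b * years}B")  # $600B over 15 years
```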

Here are the stages of the Gartner Hype Cycle as they relate to AI, each accompanied by a brief description and their current implications for AI:

1. Innovation Trigger
This stage begins with a breakthrough technology or significant innovation that sparks initial interest and excitement. For AI, the introduction of advanced models like ChatGPT by OpenAI serves as the innovation trigger, capturing widespread attention and igniting enthusiasm about AI’s transformative potential across various industries.

2. Peak of Inflated Expectations
Following the initial excitement, the technology reaches a peak where expectations become exaggerated and hype surges. In the context of AI, this is evident through massive investments by major companies like Microsoft, which have poured billions into AI development. While numerous AI applications and tools are rapidly emerging, the actual revenue generation and effective use cases are still limited, leading to unrealistic optimism about AI’s immediate impact.

3. Trough of Disillusionment
As the initial hype subsides, the reality of AI’s challenges becomes apparent, leading to skepticism and disappointment. Big tech firms are experiencing significant capital expenditures that are not yet translating into proportional returns. The gap between massive AI investments and the tangible benefits they deliver has raised concerns, causing a downturn in optimism and prompting a reassessment of AI’s short-term viability.

4. Slope of Enlightenment
During this stage, a deeper understanding of the technology emerges as practical applications begin to take shape. For AI, second- and third-generation products are starting to demonstrate real productivity gains and reliability. Businesses are beginning to identify and implement AI solutions that meaningfully improve efficiency and effectiveness, signaling a move towards more realistic and sustainable uses of AI technology.

5. Plateau of Productivity
Finally, the technology matures and achieves widespread adoption, with its benefits clearly demonstrated and consistently realized. AI is moving towards mainstream integration within business operations, where companies can clearly define and quantify how AI contributes to their revenue and operational success. This stage marks AI’s establishment as a reliable and integral component of various industries, providing stable and ongoing value.

A similar curve can be observed from Y Combinator:

Y Combinator Hype Cycle (Source: Y Combinator Hype Cycle)

3. What do the charts say?

Closed source vs Open source models (Source: Maxime Labonne X.com post)
Growth trajectory of AI Foundational Models (Source: Faz.net)

The curves suggest that there may be a theoretical ceiling to the capabilities of AI foundation models. Furthermore, open-source systems are catching up with proprietary systems faster. Hype cycles can provide beneficial capital and accelerate innovation for founders, but focusing on meaningful metrics and long-term profitability is crucial for sustainable success. Domain-specific, well-tuned AI applications are proving to be valuable and less susceptible to commoditization. Despite current overvaluations, fundamental business strengths will determine lasting value beyond the hype.

As users, however, we should keep in mind that AI is nothing new. It has always existed in some form, but it has now scaled massively due to increases in compute power. When deciding to invest, we should focus on company fundamentals, the founders' backgrounds, and the problem the company is solving. Aswath Damodaran, professor of finance at NYU Stern School of Business and a renowned name in the field of valuations, has said that NVIDIA stock is over-hyped.

Damodaran also said that the net effect on the market is neutral because a few big players win while millions lose. Many companies that look like initial winners may not remain winners after the era is over. He reflects on past revolutions: the PC (winner: Microsoft), the internet (winner: Amazon), the smartphone (winner: Apple), and social media (winners: Google, Meta). We should be specific about which companies we invest in — what part of AI the company is involved in, how it monetizes that, and how it will handle competition — and we shouldn't invest in companies just because they are associated with AI. NVIDIA, optimistic about the opportunity, pivoted from gaming chips to AI chips, which makes it unique. AI is the intersection of high compute with vast amounts of data, and this revolution is different in that existing big players may have some advantages.

4. Still a long way to go; what current models lack:

Generative AI primarily operates by predicting new data points based on existing information, which poses challenges for reliability and predictability. The critical breakthrough needed to move beyond the current hype is the establishment of solid, tangible value. As noted above, CNBC highlights a looming time bomb in the AI sector, where exorbitant spending is not matched by corresponding returns. Major corporations like Meta, Google, and Amazon have significantly increased their capital expenditure to develop AI infrastructure, but this investment surge is outpacing the revenue benefits these companies are seeing. The contrast between Microsoft's $13 billion investment in OpenAI and Apple's spending-free partnership with OpenAI 18 months later serves as a cautionary tale for the broader AI landscape, underscoring the widening gap between AI expenditures and the financial returns they generate.

Experts from Goldman Sachs and Barclays warn that without breakthrough AI applications that deliver substantial productivity or efficiency improvements, the current investment levels are unsustainable. As the technology stands, AI has yet to produce transformative and cost-effective solutions, raising doubts about whether the anticipated economic benefits will justify the massive financial commitments being made. Furthermore, challenges such as hallucinations, inherent biases in training data, the inability of AI systems to actually ‘think’ and ‘feel,’ and the lack of accountability and explainability remind us that there is still a long way to go.

Additionally, AI requires heavy computing power, and with Moore’s law slowing down, progress is plateauing. After a certain point, it may be difficult to achieve new breakthroughs simply by adding more GPUs.

New performance growth laws (source: arXiv.org)
NVIDIA's chip improvements in performance/compute (Source: NVIDIA)

The ARC benchmark, which measures general intelligence (the ability to efficiently acquire new skills) rather than performance on specific skills, suggests that AGI is still far away. By this definition, AGI is a system that can efficiently acquire new skills and solve open-ended problems.

ARC-AGI Benchmark (source: ARC-AGI GitHub page)
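To see why this benchmark is hard for systems trained on specific skills, here is a minimal sketch of what an ARC-style task looks like. The grids and the hidden rule are invented for illustration and are not taken from the actual ARC dataset.

```python
# Hedged sketch of an ARC-style task: small input/output grids from which a
# solver must induce the transformation rule from only a few examples.
# These grids are invented, not from the real dataset (github.com/fchollet/ARC).

train_examples = [
    # hidden rule (unknown to the solver): mirror the grid horizontally
    {"input":  [[1, 0, 0],
                [2, 0, 0]],
     "output": [[0, 0, 1],
                [0, 0, 2]]},
    {"input":  [[0, 3, 0],
                [4, 0, 0]],
     "output": [[0, 3, 0],
                [0, 0, 4]]},
]

def candidate_rule(grid):
    """One hypothesis a solver might test: reverse each row."""
    return [list(reversed(row)) for row in grid]

# A solver "acquires the skill" if its hypothesis fits every training pair...
assert all(candidate_rule(ex["input"]) == ex["output"] for ex in train_examples)

# ...and then generalizes to an unseen test input.
test_input = [[5, 0], [0, 6]]
print(candidate_rule(test_input))  # [[0, 5], [6, 0]]
```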

According to Prof. Karim Lakhani in the Harvard Business Review ("AI Won't Replace Humans — But Humans with AI Will Replace Humans Without AI"), there are two imperatives for executives, managers, and leaders who want to stay ahead of the technological waves impacting us:

1. Learning Imperative — Continuous learning is crucial. While foundational knowledge in areas like business (accounting), economics, and psychology is important, these tools can then be used to enhance further learning and growth in the AI-driven landscape.

2. Change and Build the DNA for Change — It’s essential to stay current with the latest trends and adapt quickly. Organizations and individuals must develop the ability to change rapidly in response to new developments.

Keeping up with trends and foreseeing what’s coming is vital:
Fortunately, the barrier to transitioning is now lower than ever. Just as the internet drastically reduced the cost of accessing information, AI is dramatically reducing the cost of cognition.

There is still a lot of research going on in AI frontiers — alternate algorithms, models, approaches, pipelines etc.

I discussed some of them here:

https://medium.com/ai-advances/ai-what-the-current-generation-of-ai-lacks-and-what-are-the-frontiers-844b8918b842

5. Fate of jobs after AGI:

Industry leaders:

There is wide consensus among industry leaders that AI will impact almost all jobs in the near future.

Kash Rangan, head of US software research at Goldman Sachs, predicts that by 2030 every dollar of technology spending will have some AI component to it. Bill Gates, founder of Microsoft, suggests that only three types of jobs will endure: 1) those developing AI systems, 2) roles in biosciences, and 3) jobs related to clean energy. Sam Altman emphasizes the importance of resilience, deep familiarity with tools, critical thinking, creativity, adaptability, and a human touch to thrive in the "Intelligence Age."

However, there is no real consensus on the broader impact of AI on the job market. Elon Musk predicts that AI will make jobs “optional,” envisioning a future where work is pursued for personal fulfillment, alongside the necessity of a universal high income for economic stability. Jensen Huang, on the other hand, believes AI will generate more jobs through enhanced productivity. Satya Nadella underscores the importance of reskilling the workforce to improve collaboration between humans and AI.

Sundar Pichai, CEO of Alphabet, and Mark Zuckerberg, CEO of Meta, express optimism, suggesting AI will augment human capabilities while stressing the need for ethical considerations and responsible AI development. Tim Cook, CEO of Apple, advocates for using AI to enhance user experiences while preserving creativity, and Sheryl Sandberg highlights the importance of adapting workplace cultures to facilitate AI collaboration. Ginni Rometty, ex-CEO of IBM, calls for workforce retraining as new roles emerge, and Andrew Ng promotes proactive education in AI skills to capture new opportunities.

In summary, these industry leaders present a spectrum of views on AI’s impact on the job market, ranging from concerns about displacement to optimism about new possibilities.

Some predictions from analysts and reports:

  1. The IMF report ("Scenario Planning for A(G)I Future" by Anton Korinek, professor of economics at Darden) offers a unique perspective on the future:
  • Compute doubled every 6 months over the past decade (see the quick check below)
  • For two views of the brain (1: the brain is infinitely complex; 2: the brain is a computational system with an upper limit), AI systems are soon to surpass humans in the second case
AI pushing the limits (Source: IMF report by Anton Korinek, linked below)
Scenarios for output and wages (Source: IMF report referenced below)
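A quick check on the compute claim in the first bullet (the six-month doubling period is the report's figure; the rest is plain arithmetic):

```python
# How much growth does "doubling every 6 months for a decade" imply?

doubling_months = 6
years = 10
doublings = years * 12 / doubling_months   # 20 doublings in a decade
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{growth_factor:,.0f}x more compute")
# 20 doublings -> ~1,048,576x more compute over the decade
```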

The report details three scenarios:

  • Scenario 1 (traditional, business as usual): productivity is enhanced (chart 1)
  • Scenario 2 (baseline): AGI in 20 years, with slow adoption (chart 2)
  • Scenario 3 (aggressive): AGI in 5 years

What emerges is a complex interplay of research, business, and policy. Many factors might slow AGI rollout and adoption — from organizational frictions, regulation, and constraints on capital accumulation (such as chip supply chain bottlenecks) to societal choices about how AGI is implemented. Even when it is technologically possible to replace workers, society may choose to keep humans in certain functions — for example, as priests, judges, or lawmakers. The resulting "nostalgic" jobs could sustain demand for human labor in perpetuity.

2. Collapse of civilization: A 1972 MIT study (The Limits to Growth) suggested that society might collapse in the 21st century. A new study by Gaya Herrington, a KPMG director, suggests that we are on schedule. According to Herrington's study, if we focus on technological progress, increased investment in public services, and a shift away from growth for its own sake, we can avoid the risk of societal collapse. This period is crucial because continuing with "business as usual" (the BAU2 scenario) will likely lead to a halt in growth and potential collapse by around 2040. Even focusing purely on technology without investment in public services is not enough (the CT scenario). The following graphs highlight the cases:

Scenarios for collapse of civilisation study by MIT (source: MIT study paper)

3. The "Thousands of AI Authors on the Future of AI" survey predicts that most milestones should be surpassed within 10 years.

source: Thousands of AI Authors on The Future of AI paper

The graph suggests that milestones like coding tasks, games, AI-driven coding platforms, and robotics should be reached within the next 10 years. However, milestones like deep mathematical AI research, installing wiring in a physical home, and proving theorems are expected to take around 15–20 years to materialize.

There are two benchmark levels:

  1. High-Level Machine Intelligence (HLMI): achieved when unaided machines can accomplish every task better and more cheaply than human workers, ignoring tasks where being human is intrinsically advantageous (e.g., serving as a jury member)
  2. Full Automation of Labor (FAOL): achieved when AI systems can outperform humans across all occupations
source: Thousands of AI Authors on The Future of AI paper

The gap between the 50% forecasts for HLMI and FAOL seems to stem from framing: FAOL asks about the complete replacement of occupations rather than the feasibility of individual tasks (as HLMI does), and such framing and disruption effects shift the estimates.

The study also examines the factors that drive AI progress:

source: Thousands of AI Authors on The Future of AI paper

A puzzling finding is that respondents expect far fewer AI systems to be able to explain their predictions, suggesting that AI explainability and visualization will remain key concerns for the future.

source: Thousands of AI Authors on The Future of AI paper

The survey also assessed the likely impact of AI on jobs:

source: Thousands of AI Authors on The Future of AI paper

4. Road to UBI: Experiments in Finland, Canada, and Kenya suggest that UBI improved overall health, reduced stress, and lessened the burden on healthcare. This runs contrary to the expectation that UBI would make people lazy and push society towards one where work is not preferred and less value is generated overall. An analogy: we still like to play chess even though we are not grandmasters and a machine could beat us by calculating all the states. However, it is possible that the rich could gain undue access to AI and income, and governments cannot be relied upon to meet people's expectations of providing for them beyond basic needs. Sam Altman proposes an alternative in which money is replaced by a share of compute on AI systems.

5. The McKinsey Global Institute report, though focused on the European and US markets, suggests that demand for workers in STEM-related, healthcare, and other high-skill professions will rise, while demand for occupations like office workers, production workers, and customer service representatives will decline. Businesses face the tough job of embracing technological change and investing in training and redeploying workers into the jobs of the future.

source: McKinsey report on ‘a new future of work’
source: McKinsey report on ‘a new future of work’

The report also predicts some factors that could hinder technology adoption:

Demand Side:

1. Integration Difficulties
Companies may struggle to implement AI due to unclear applications and insufficient workforce expertise.

2. Rising Costs
Development and deployment costs could rise because of limited computing power and energy resources.

3. Wage Sustainability
Increasing wages from labor augmentation might make further AI adoption financially challenging.

4. Customer and Social Acceptance
Customers may resist using AI over humans, and concerns about AI risk management could limit acceptance.

Supply Side:

1. Technological Stagnation
AI advancements may slow if adoption rates are lower than expected or if investments decline.

2. Energy Constraints
AI requires significant computational power, and data center electricity use could more than double by 2026, hindering scalability.

The report also sheds light on the rising demand for technological and advanced cognitive skills relative to their share in today's workforce:

source: McKinsey report on ‘a new future of work’

6. A recent paper, "How will Language Modelers like ChatGPT Affect Occupations and Industries," proposes a single metric to gauge the effect of AI on occupations and industries. The authors find that the top occupations affected include telemarketers and a variety of post-secondary teachers, such as English language and literature, foreign language and literature, and history teachers. The top industries exposed to advances in language modeling are legal services and securities, commodities, and investments. They also find a positive and statistically significant correlation between an occupation's mean or median wage and its exposure to AI language modeling. The approach uses the AI Occupational Exposure (AIOE) measure.

source: paper titled “How will Language Modelers like ChatGPT Affect Occupations and Industries”
source: paper titled “How will Language Modelers like ChatGPT Affect Occupations and Industries”
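For intuition, here is a hedged sketch of how an AIOE-style score can be constructed: each occupation is scored as an importance-weighted average of how exposed its underlying abilities are to language modeling. The ability names, weights, and numbers below are invented; the actual paper links AI exposure measures to the O*NET ability taxonomy and standardizes the scores.

```python
# Hedged sketch of an AIOE-style exposure score. All names and numbers are
# invented for illustration; the real measure is built on the O*NET taxonomy.

ability_exposure = {             # how exposed each ability is to language models
    "written_comprehension": 0.9,
    "oral_expression": 0.7,
    "manual_dexterity": 0.1,
}

occupations = {                  # per-occupation ability importance (O*NET-like)
    "telemarketer": {"written_comprehension": 0.6, "oral_expression": 0.9,
                     "manual_dexterity": 0.1},
    "surgeon":      {"written_comprehension": 0.5, "oral_expression": 0.4,
                     "manual_dexterity": 0.9},
}

def aioe_score(importance):
    """Importance-weighted average of ability-level exposures."""
    total_w = sum(importance.values())
    return sum(ability_exposure[a] * w for a, w in importance.items()) / total_w

for occ, imp in occupations.items():
    print(f"{occ}: {aioe_score(imp):.2f}")
# telemarketer scores higher than surgeon, matching the paper's ranking
```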

The paper identifies the following six buckets of occupations with negative AIOE scores, which thus might remain untouched for quite a while during the LLM revolution:

1. Healthcare-Related Occupations:
— Technicians: Nuclear Medicine Technologists, Pharmacy Technicians, Medical and Clinical Laboratory Technologists, Physical Therapists, Diagnostic Medical Sonographers, Respiratory Therapists, Phlebotomists, Radiologic Technologists, Surgical Technologists, Medical Equipment Repairers, Dental Hygienists.
— Aides: Pharmacy Aides, Physical Therapist Assistants, Psychiatric Aides, Occupational Therapy Aides, Veterinary Assistants, Physical Therapist Aides, Nursing Assistants, Home Health Aides.
— Other Medical Specialists: Surgeons, Prosthodontists, Oral and Maxillofacial Surgeons, Dentists, Radiation Therapists, Veterinarians, Athletic Trainers, Massage Therapists, Emergency Medical Technicians (EMTs), Cardiovascular Technologists.

2. Technicians & Engineering Occupations:
— Lab & Science Technicians: Geological and Petroleum Technicians, Chemical Technicians, Environmental Engineering Technicians, Surveying and Mapping Technicians, Mechanical Engineering Technicians, Avionics Technicians, Nuclear Technicians.
— Repair & Maintenance Technicians: Camera and Photographic Equipment Repairers, Office Machine Repairers, Electromechanical Technicians, Electrical and Electronics Repairers, Gas Plant Operators, Power Plant Operators.

3. Supervisory Roles:
— First-Line Supervisors of Production and Operating Workers, Mechanics, Construction Trades, Landscaping, Firefighting, Police and Detectives, Housekeeping Workers.

4. Service-Oriented Roles:
— Food Preparation & Serving: Fast Food Cooks, Chefs, Bartenders, Waiters/Waitresses, Food Batchmakers, Cafeteria Workers.
— Beauty & Personal Care: Skincare Specialists, Manicurists, Pedicurists, Makeup Artists, Barbers, Hairdressers, Cosmetologists.
— Funeral Services: Morticians, Funeral Attendants, Embalmers.
— Security & Protective Services: Security Guards, Transportation Security Screeners, Lifeguards, Fire Inspectors, Correctional Officers.

5. Art & Design Occupations:
— Creative Arts: Fine Artists, Photographers, Floral Designers, Set and Exhibit Designers, Costume Attendants, and Jewelers.
— Broadcast & Media: Broadcast Technicians, Camera Operators, Motion Picture Projectionists.

6. Miscellaneous Occupations:
— Manual Labor & Craft: Tailors, Log Graders, Tool and Die Makers, Jewelers, Motorcycle Mechanics.
— Transportation & Driving: Bus Drivers, Commercial Pilots, Ship Engineers.
— Agriculture & Nature: Farmers, Ranchers, Pest Control Workers, Foresters.

Each bucket groups related professions by the nature of the work. A detailed list of 774 occupations in the paper's appendix (referenced below) shows the impact of LLMs on various occupations and industries.

A similar paper, "Occupational, industry, and geographic exposure to artificial intelligence: A novel dataset and its potential uses," quantifies exposure at the industry level. The most and least affected jobs and industries follow.

source: paper titled “How will Language Modelers like ChatGPT Affect Occupations and Industries”

6. When AGI?

AGI timelines from some leaders in the tech/AI industry:

  1. Ray Kurzweil — 2029 for AGI; 2045 for the Singularity
  2. Sam Altman, co-founder/CEO of OpenAI — within the next decade (the "Intelligence Age")
  3. Elon Musk — 2029
  4. Shane Legg, co-founder of DeepMind — 2028 (50% confidence)
  5. Geoffrey Hinton, 'Godfather of AI' — 5–20 years away (similar to the IMF's range)
  6. Yann LeCun, a renowned professor of AI at New York University — AGI soon, but ASI is still far away
  7. Demis Hassabis, CEO of DeepMind — within a decade

Yann LeCun, in a podcast with Lex Fridman, argued that current LLMs lack human intelligence in the following aspects:

  1. understanding of the physical world — sensory input matters more than language, and intelligence should be grounded in reality (either physical or simulated)
  2. persistent memory
  3. reasoning
  4. planning

Further, he emphasizes that open-source AI empowers human goodness and prevents misuse by proprietary AI systems. He believes AI will cause job shifts but not mass unemployment, and he dismisses 'AI doom' scenarios.

Max Tegmark opines that unchecked AI development can cause issues such as societal polarization and the creation of harmful substances.

Dario Amodei, CEO and co-founder of Anthropic, emphasizes in one of his interviews the need for AI regulation and control. He categorizes the risks as follows:

  1. short-term: bias, misinformation
  2. medium-term: misuse of advanced models
  3. long-term: AI systems gaining agency and autonomy

Amodei also believes that increasing computational power and research manpower will continue to improve AI models. However, he estimates around a 10–20% probability of an AI-doom scenario.

What about the top researchers on this?

A survey of 2,778 AI authors who published in top-tier conferences (Grace et al., 2024) offers timeline estimates. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047; the latter estimate is 13 years earlier than in a similar survey the same authors conducted only one year before. The chance of all human occupations becoming fully automatable, however, was forecast to reach 10% by 2037 and 50% only by 2116.

Most respondents expressed substantial uncertainty about the long-term value of AI progress. While 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including the spread of false information, authoritarian population control, and worsened inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity, but broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.
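For a sense of how aggregate numbers like "50% by 2047" are read off survey answers, here is a simplified sketch: each respondent supplies a few (year, probability) points, each is interpolated into a CDF, the CDFs are pooled, and we find where the pooled curve crosses 50%. The respondents below are invented, and Grace et al. fit parametric distributions rather than the simple pooling used here.

```python
# Simplified, illustrative pooling of timeline forecasts. Respondent data is
# invented; the actual survey fits parametric distributions per respondent.
import numpy as np

years = np.arange(2024, 2101)

def cdf_from_points(points):
    """Piecewise-linear CDF through a respondent's (year, prob) answers."""
    xs, ps = zip(*points)
    return np.interp(years, xs, ps)

respondents = [
    [(2030, 0.10), (2050, 0.50), (2100, 0.90)],
    [(2027, 0.10), (2040, 0.50), (2080, 0.95)],
    [(2035, 0.05), (2060, 0.50), (2100, 0.80)],
]

pooled = np.mean([cdf_from_points(r) for r in respondents], axis=0)
year_50 = years[np.searchsorted(pooled, 0.50)]  # first year pooled CDF >= 50%
print(f"Pooled 50% forecast: {year_50}")
```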

Updates to Expected time to AGI (source: Ark Invest)

Let's hear from the general public and investors:

Finally, public opinion on AI seems to be filled with skepticism, just as in all tech revolutions. A 2024 YouGov poll reveals that the four most common sentiments towards AI in America are caution, concern, skepticism, and curiosity. It was once thought that blue-collar jobs would go first, then white-collar, and then managerial jobs. It turns out the direction of speculation has reversed: generative AI is good at producing knowledge work, while blue-collar jobs are not yet replaceable unless there is a cost-effective breakthrough in robotics. Investors such as Ark Invest (figure above) highlight the decreasing expected time to AGI.

7. Detailed study reports:

Below are the relevant reports and resources:

[1] Thousands of AI Authors on the Future of AI — https://arxiv.org/pdf/2401.02843

[2] A New Future of Work: The Race to Deploy AI and Raise Skills in Europe and Beyond — McKinsey Global Institute

[3] How will Language Modelers like ChatGPT Affect Occupations and Industries — https://arxiv.org/pdf/2303.01157

[4] Occupational, industry, and geographic exposure to artificial intelligence: A novel dataset and its potential uses — https://onlinelibrary.wiley.com/doi/epdf/10.1002/smj.3286

[5] Gartner AI Hype Cycle — https://www.youtube.com/watch?v=qXKYOR3KqxQ

Much of this article draws on news coverage, blogs, and interviews with industry leaders.


Published via Towards AI
