
What AI Still Can’t Do #003

Last Updated on December 21, 2022 by Editorial Team

Author(s): Toluwani Aremu


This is my third article in the series ‘What AI Still Can’t Do.’ If you haven’t read the first two, this link will direct you to the reading list I created for this purpose. Please note that all views in my articles are entirely mine and may not reflect the perspectives of other writers/experts.

Now, let’s get to this new limitation of AI.

#003 — Understanding cause and effect is still a major problem

Artificial intelligence (AI) systems are designed to analyze data, recognize patterns, and make predictions or decisions based on that analysis. However, understanding the concept of cause and effect (the idea that one event or action causes another to happen) remains a challenge for them. For us humans, understanding cause and effect is a fundamental aspect of cognition: it involves the ability to recognize that one event or action (the cause) can lead to another event or outcome (the effect). This helps us make sense of the world around us and predict what might happen in the future.

There are several factors that contribute to our ability to understand cause and effect. One is our ability to perceive and remember the events that occur around us. This includes our ability to see, hear, touch, and otherwise sense the world, as well as our ability to remember and recall these experiences. Another factor is our ability to make logical inferences about the relationships between events. For example, if we see a child playing with a toy, and then the toy breaks, we can infer that the child’s actions were the cause of the toy breaking. We do this by considering the sequence of events and applying our knowledge of how the world works to arrive at a conclusion.

Our understanding of cause and effect is also influenced by our social and cultural experiences. We learn about cause and effect through our interactions with others, as well as through observing and participating in the events that occur around us. For example, we might learn that turning on a light switch causes a light to turn on or that planting a seed and watering it causes a plant to grow.

Therefore, I could say that our understanding of cause and effect is a complex process that involves a combination of perception, memory, logical reasoning, and social and cultural experiences.

If this is possible for us as humans, why do AI systems struggle to keep up with natural intelligence? I can think of a few reasons:

  1. Limited data: AI systems rely on data to learn and make predictions. However, if the data available to the system does not include examples of cause-and-effect relationships, it may be difficult for the system to learn about them.
  2. Complex relationships: Cause-and-effect relationships can be complex and may involve multiple factors or variables. It can be challenging for AI systems to accurately identify and understand all of the factors that contribute to a particular outcome.
  3. Lack of context: Understanding cause and effect often requires an understanding of the context in which an event or action occurs. AI systems may struggle to understand the context in which events take place, which can make it difficult for them to accurately identify cause-and-effect relationships.
  4. Limited reasoning abilities: Earlier, I talked about how we remember events and use our senses to perceive the world and draw conclusions. AI systems are very limited in their ability to reason and draw logical conclusions from data. This can make it difficult for them to understand the concept of cause and effect, which requires the ability to identify patterns and draw logical conclusions from those patterns.

Understanding cause and effect is a complex task that requires a deep understanding of the relationships between different events and actions. While AI systems have made significant progress in many areas, this remains a challenge for them.

Can we find a way to solve this conundrum? Can AI be taught to understand causes and effects? Yes, it is possible for AI systems to be trained to recognize cause-and-effect relationships. Certain NLP systems have been trained to recognize that certain words or phrases are more likely to appear in sentences that describe cause-and-effect relationships. This involves training the AI system on a large dataset of sentences that include cause-and-effect relationships and then using this training to recognize these relationships in new sentences that the system has not seen before.
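
To make this concrete, here is a minimal sketch of that idea using scikit-learn: a classifier trained on a handful of hand-labeled sentences (all invented for illustration) learns to flag causal language from surface cues like “caused” and “leads to”.

```python
# A minimal sketch of training a classifier to flag causal language,
# assuming a small hand-labeled dataset (the sentences below are
# invented for illustration, not from a real corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The rain caused the game to be cancelled.",
    "Smoking leads to an increased risk of cancer.",
    "Because the server crashed, the site went down.",
    "The cat sat on the mat.",
    "She bought three apples at the market.",
    "The meeting is scheduled for Tuesday.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = describes cause and effect, 0 = does not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# The model picks up surface cues such as "caused" and "because".
print(model.predict(["The flood was caused by heavy rainfall."]))  # likely [1]
```

Note that such a system is recognizing linguistic markers of causality, not reasoning about causes themselves.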

Similarly, an AI system designed for decision-making or predictive modeling might be trained to recognize that certain events or actions are likely to lead to certain outcomes. This could involve training the AI system on a large dataset of examples of cause-and-effect relationships and then using this training to make predictions about what might happen in the future based on new inputs.
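
As a rough illustration, the sketch below trains a model on synthetic examples of actions and outcomes (the watering-and-sunlight rule is invented); the model can then predict outcomes for new inputs, though it has only learned a statistical pattern, not the mechanism behind it.

```python
# A toy sketch of outcome prediction from observed event features.
# The data-generating rule (watering AND sunlight -> growth) is
# invented purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
watered = rng.integers(0, 2, 500)
sunlight = rng.integers(0, 2, 500)
# Outcome follows the hidden causal rule, with a little noise.
grew = ((watered & sunlight) | (rng.random(500) < 0.05)).astype(int)

X = np.column_stack([watered, sunlight])
model = DecisionTreeClassifier().fit(X, grew)

# The model predicts outcomes for new inputs, but it has only learned
# the correlation pattern in this data, not the underlying mechanism.
print(model.predict([[1, 1], [1, 0]]))  # likely [1, 0]
```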

I wouldn’t be so bold as to say that it is totally impossible for AI systems to be trained to understand and recognize cause-and-effect relationships. It is possible, but it would require a significant amount of data and careful design and training of the AI system. Graph neural networks, for instance, could be used to represent the relationships between different events or actions and their corresponding outcomes.
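
A full graph neural network is too much for a short example, but the underlying idea of representing events and outcomes as a directed graph can be sketched with networkx (the events and edges below are invented):

```python
# A minimal sketch of the graph idea: events as nodes, cause -> effect
# as directed edges. A graph neural network would learn over such a
# structure; here we just query it directly. All events are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("storm", "power outage"),
    ("power outage", "server crash"),
    ("server crash", "site downtime"),
    ("deploy bug", "server crash"),
])

# Downstream effects of a storm: everything reachable from it.
print(nx.descendants(G, "storm"))
# Possible causes of site downtime: everything that can reach it.
print(nx.ancestors(G, "site downtime"))
```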

A classical method, which I still believe experts will return to, is the knowledge-based system. Knowledge-based systems are classic AI systems that rely on explicit knowledge representation and reasoning. In this context, such a system could model the relationships between different events and their corresponding outcomes through a clear definition of the rules and principles that govern cause-and-effect relationships, and then use this knowledge to make inferences about new situations.
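
Here is a minimal sketch of that idea: a handful of explicit cause-and-effect rules (invented for illustration) plus simple forward chaining to infer new effects from known facts.

```python
# A minimal sketch of a knowledge-based approach: explicit
# cause -> effect rules plus forward chaining. The rules are
# invented for illustration.
RULES = {
    ("seed planted", "seed watered"): "plant grows",
    ("switch flipped",): "light turns on",
    ("plant grows", "plant watered"): "plant flowers",
}

def infer(facts: set) -> set:
    """Repeatedly apply rules until no new effects can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for causes, effect in RULES.items():
            if set(causes) <= derived and effect not in derived:
                derived.add(effect)
                changed = True
    return derived

print(infer({"seed planted", "seed watered", "plant watered"}))
# Derives "plant grows", then "plant flowers", in two passes.
```

The appeal of this approach is transparency: every inferred effect can be traced back to an explicit rule, which statistical models cannot offer.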

Also, it is possible that causal inference research, especially counterfactual reasoning, could be the long-term solution to this struggle (a small sketch of the counterfactual idea follows below). I can only hope that we come up with easier routes to fulfill our main quest of making AI become ‘human’.
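
For a flavor of counterfactual reasoning, here is a minimal sketch of the standard abduction-action-prediction recipe on an invented structural causal model (y = 2x + u):

```python
# A minimal sketch of counterfactual reasoning in a structural causal
# model. The model (y = 2 * x + u) is invented for illustration; real
# causal inference requires knowing or learning such structure.

# Assumed structural equation: y = 2 * x + u, where u is unobserved noise.

# Step 1 (abduction): from the observed world, recover the noise term.
x_observed, y_observed = 1.0, 3.5
u = y_observed - 2 * x_observed  # u = 1.5

# Step 2 (action): intervene and set x to a different value.
x_counterfactual = 2.0

# Step 3 (prediction): replay the same mechanism with the recovered noise.
y_counterfactual = 2 * x_counterfactual + u
print(y_counterfactual)  # 5.5 -- what y would have been, had x been 2
```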

If you enjoyed reading this article, please give it a like and follow. For questions, please use the comment section. If you want to chat, reach out to me on LinkedIn or Twitter.

