
Is Neurosymbolic AI the Future of Truth-Seeking AI? — Part 2: Probabilistic LLM Meets Symbolic AI

Last Updated on October 31, 2024 by Editorial Team

Author(s): Koby Choy

Originally published on Towards AI.

Image from Adobe Stock

Previously, I explored the definition of reasoning, its significance in AI, and why some experts believe current Large Language Model (LLM) architectures fall short in achieving sophisticated reasoning capabilities.

In this article, I will delve into various approaches that combine logical and probabilistic models, including Neurosymbolic AI systems, the potential integration of LLMs with Prolog, and Markov Logic Networks. These hybrid approaches aim to harness the strengths of both symbolic logic and probabilistic reasoning.

Neurosymbolic AI

Before exploring Neurosymbolic AI, let’s understand Symbolic AI — a branch of artificial intelligence that processes symbols and representations rather than numerical data. These systems use logic-based approaches to represent knowledge; Siri is often cited as an example of Symbolic AI in practice. The major advantage of Symbolic AI lies in its ability to manipulate and understand complex concepts through structured, formal representations [1]. Its significant drawback, however, is difficulty in handling ambiguous or large-scale information.

What is Neurosymbolic AI?

In the paper “Neurosymbolic AI — Why, What, and How” (Sheth et al., 2023), Neurosymbolic AI is defined as AI systems that combine neural network-based methods with symbolic AI methods [2].

Why do we want Neurosymbolic AI?

While symbolic AI excels at understanding and manipulating complex concepts, it struggles with large-scale data processing. By combining symbolic learning with neural approaches (which are particularly effective at handling large datasets), we can leverage the strengths of both methods while addressing their individual limitations. This hybrid approach creates a complementary system that overcomes the weaknesses of each individual method.

Approaches

The paper describes two broad approaches for building a Neurosymbolic AI architecture [2]:

  1. Compress symbolic knowledge first, then pass the compressed knowledge into a neural network model:
  • compress symbolic knowledge into a knowledge graph
  • compress symbolic knowledge into logic-based representations such as predicate logic or Prolog

  2. Extract information using neural networks first, then construct a symbolic knowledge base from that information:
  • separated neural and symbolic components
  • combined neural and symbolic components

Approach 1: Compressed Symbolic Knowledge into Knowledge Graphs & Integration with Neural Networks

One approach involves passing a knowledge graph through a Graph Neural Network (GNN), which generates a high-dimensional vector that can then be processed by a neural network model. GNNs are deep learning models that operate on graph-structured data by iteratively updating node representations through aggregating features from neighboring nodes.
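
To make this concrete, here is a minimal sketch of a single mean-aggregation message-passing step in plain NumPy. This is my own toy example with made-up graph structure and dimensions, not a production GNN:

```python
import numpy as np

# Toy 4-node graph: A[i][j] = 1 means nodes i and j are connected.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

X = np.random.randn(4, 8)        # initial node embeddings: 4 nodes, 8 dims
W = np.random.randn(8, 8) * 0.1  # weight matrix (learned during training)

deg = A.sum(axis=1, keepdims=True)        # node degrees, for mean aggregation
H = np.maximum(((A @ X) / deg) @ W, 0.0)  # average neighbors, transform, ReLU

# Stacking such layers (and pooling) yields the high-dimensional vectors
# that a downstream neural model can consume.
print(H.shape)  # (4, 8)
```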

While this method can enhance a model’s planning and abstraction capabilities, it suffers from a significant drawback: the compression process results in the loss of valuable relationship information.

A knowledge graph for the Harry Potter books, by Neo4j
A graph neural network, image by Neptune.ai

Approach 2: Compressed Symbolic Knowledge into Logic Representations & Integration with Neural Networks

This approach aims to represent symbolic knowledge through logical statements. Researchers have used matrices and high-order factorizations to compress symbolic knowledge into formal logic statements.
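
As a toy flavor of that matrix view (entirely my own sketch, with invented names and dimensions, not the specific factorizations from the literature), a bilinear model embeds entities as vectors and each relation as a matrix, and scores a symbolic triple as a vector-matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # embedding size (made up for illustration)

# Entities become vectors; each relation becomes a matrix (random here,
# but learned from the knowledge base in a real system).
entity = {"einstein": rng.normal(size=dim), "physics": rng.normal(size=dim)}
loves = rng.normal(size=(dim, dim))  # matrix for the "loves" relation

def score(subj: str, rel: np.ndarray, obj: str) -> float:
    # Bilinear score e_s^T R e_o: higher means the fact is more plausible
    return float(entity[subj] @ rel @ entity[obj])

print(score("einstein", loves, "physics"))
```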

This is where Prolog could play a crucial role. Prolog (Programming in Logic) is a logic programming language that represents knowledge as hard facts and rules defining relationships between objects. Let’s examine this through an example (note that in Prolog, predicate names and constants must start with a lowercase letter, since capitalized identifiers denote variables):

loves_physics(einstein).

which is essentially the same as

loves_physics(einstein) :- true.

The “:-” operator reads as “the head is true if the body is true”; here, loves_physics(einstein) holds unconditionally.
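
To make the body’s role concrete, here is a small rule of my own (not from the paper) whose head only holds if its body predicate succeeds:

```prolog
% Illustrative rule: X is happy if X loves physics.
happy(X) :- loves_physics(X).

% Query:
% ?- happy(einstein).
% true, because loves_physics(einstein) is a known fact.
```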

Consider the “Alice in Wonderland” example from Part 1, where researchers asked LLMs: “Alice has M brothers and N sisters. How many sisters does Alice’s brother have?” With M=10 and N=4, when posed as a direct English question to Poe’s Assistant model, it responded that Alice’s brother has 4 sisters.

If we want to translate the question into predicate logic, it would be equivalent to:

% Prolog that describes the relationship in the statement
% Define Alice
person(alice).
% Define that Alice is female
female(alice).
% Define her brothers (as individual named constants for simplicity)
male(brother1).
male(brother2).
male(brother3).
male(brother4).
male(brother5).
male(brother6).
male(brother7).
male(brother8).
male(brother9).
male(brother10).
% Define her sisters
female(sister1).
female(sister2).
female(sister3).
female(sister4).
% Define sibling relationships
% Brothers
sibling(alice, brother1).
sibling(brother1, alice).
sibling(alice, brother2).
sibling(brother2, alice).
sibling(alice, brother3).
sibling(brother3, alice).
sibling(alice, brother4).
sibling(brother4, alice).
sibling(alice, brother5).
sibling(brother5, alice).
sibling(alice, brother6).
sibling(brother6, alice).
sibling(alice, brother7).
sibling(brother7, alice).
sibling(alice, brother8).
sibling(brother8, alice).
sibling(alice, brother9).
sibling(brother9, alice).
sibling(alice, brother10).
sibling(brother10, alice).
% Sisters
sibling(alice, sister1).
sibling(sister1, alice).
sibling(alice, sister2).
sibling(sister2, alice).
sibling(alice, sister3).
sibling(sister3, alice).
sibling(alice, sister4).
sibling(sister4, alice).
% Rules to determine brother and sister relationships
brother(X, Y) :- male(X), sibling(X, Y).
sister(X, Y) :- female(X), sibling(X, Y).
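
For reference, the model’s answer below cites a count_sisters rule that the listing above (which is what actually goes into the prompt) does not define. A minimal version of my own, together with a fix for the fact that the sibling/2 facts only link Alice to each sibling, might look like this:

```prolog
% Hypothetical helper assumed by the model's answer (not in the original
% listing): collect every sister of X and count them.
count_sisters(X, Count) :-
    findall(S, sister(S, X), Sisters),
    length(Sisters, Count).

% Also my own addition: the facts above only relate alice to each sibling,
% so cross-links (e.g., sister1 and brother1) must be derived through alice
% for an actual interpreter to report 5 sisters rather than just alice.
related(X, Y) :- sibling(X, alice), sibling(alice, Y), X \= Y.
sister(X, Y) :- female(X), related(X, Y).

% ?- count_sisters(brother1, Count).
% Count = 5  (alice plus sister1..sister4)
```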

After inserting the Prolog program into an appropriate prompt, the following output was generated.


To determine how many sisters **brother1** has, we can analyze the provided Prolog facts and rules.
### Relevant Facts:
- **Brothers**: There are 10 brothers defined (brother1 through brother10).
- **Sisters**: There are 4 sisters defined (sister1 through sister4) and Alice herself is also female.
### Sibling Relationships:
- Each sibling relationship is defined as:
- `sibling(alice, sister1).`
- `sibling(sister1, alice).`
- This applies similarly for sister2, sister3, and sister4.
### Counting Sisters:
To find how many sisters **brother1** has, we can use the `count_sisters` rule. This rule utilizes the `sister` predicate to find all sisters of a given person.
### Application of Rules:
1. **Define a query** for the number of sisters for **brother1**:
```prolog
count_sisters(brother1, Count).
```
2. **Sisters of brother1** include:
- sister1
- sister2
- sister3
- sister4
- Alice (since she is also a female sibling)
### Conclusion:
- **Total Sisters**: There are 4 defined sisters (sister1, sister2, sister3, sister4) plus Alice, which means **brother1 has 5 sisters in total**.
So, **one of Alice's brother1 has 5 sisters**.

Notably, Prolog integration produced a different result with the same model, yielding the correct answer. This simple demonstration highlights the potential benefit of combining Prolog with large language models to enhance reasoning capabilities. For more details on using LLMs as Prolog interpreters, I recommend the referenced blog [4].

Beyond LLM integration, these generated formal logic statements can be fed into Logical Neural Networks for further processing or prediction tasks [3]. For additional context: in a logical neural network model, each neuron represents either a concept (e.g., a cat) or a logical connective (e.g., OR, AND), with weights assigned to the connecting edges [5].
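
As a rough illustration of that idea (a toy sketch of my own in the spirit of [3], not the paper’s exact formulation), a weighted, differentiable AND over truth values in [0, 1] can look like this:

```python
import numpy as np

def weighted_and(truths, weights, bias=1.0):
    # Lukasiewicz-style conjunction over truth values in [0, 1]:
    # each input's "falseness" (1 - t), scaled by its weight, pulls the
    # output toward 0; the result is clamped back into [0, 1].
    falseness = 1.0 - np.asarray(truths, dtype=float)
    return float(np.clip(bias - np.dot(weights, falseness), 0.0, 1.0))

# Neuron for "cat(x) AND furry(x)" with equal weights (values assumed):
print(weighted_and([0.9, 0.8], weights=[1.0, 1.0]))  # 0.7 -> fairly true
print(weighted_and([0.9, 0.1], weights=[1.0, 1.0]))  # 0.0 -> false
```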

Approach 3: Separated Neural & Symbolic Components

These models primarily use neural networks (particularly LLMs) to identify symbolic functions and relationships from text or image inputs, which are then passed to fact-based search tools for further processing. Modern packages like LangChain and Haystack have simplified the implementation of such agent-based pipelines. For instance, an LLM might analyze user input through Chain of Thought (CoT) prompting, extracting relevant information to be queried through the Google Search API.
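
A minimal sketch of that pattern, with hypothetical llm() and web_search() placeholders of my own (these are not real LangChain or Haystack APIs), might look like:

```python
def llm(prompt: str) -> str:
    # Placeholder: substitute any chat-completion call here (assumption).
    return "dummy model output"

def web_search(query: str) -> list[str]:
    # Placeholder: substitute a real search API call here (assumption).
    return ["dummy retrieved fact"]

def answer(user_input: str) -> str:
    # Neural step: CoT-style extraction of a search query from the input.
    query = llm("Think step by step, then output only a search query for: "
                + user_input)
    # Symbolic/tool step: retrieve facts for the extracted query.
    facts = web_search(query)
    # Neural step: compose a final answer grounded in the retrieved facts.
    return llm(f"Using only these facts: {facts}\nAnswer: {user_input}")

print(answer("How many moons does Mars have?"))
```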

However, this approach heavily relies on the LLM’s ability to accurately extract symbolic functions from the input. The system’s performance can be significantly compromised if the model’s relationship extraction capabilities are inadequate or if hallucinations occur during processing.

Approach 4: Integrated Neural & Symbolic Components

The integrated approach shares significant similarities with approach 3, but with one key difference: its ability to train all pipeline components end-to-end. This capability allows the model to handle large volumes of data while fine-tuning its translation of inputs into symbolic logic statements based on specific application needs. This approach has demonstrated exceptional performance in real-world scenarios across multiple dimensions: user explainability, domain constraint handling, cross-use case scalability, and adaptability to application-specific changes.

Conclusion

In this article, I have explored several potential Neurosymbolic AI approaches we can experiment with to build AI models with improved cognition, reasoning, and planning. These approaches demonstrate how integrating symbolic reasoning with neural networks can create more robust and capable AI systems. By combining the interpretability and logical reasoning of symbolic AI with the data-processing power and learning capabilities of neural networks, we can develop systems that are both powerful and explainable.

References

  1. https://www.datacamp.com/blog/what-is-symbolic-ai
  2. https://arxiv.org/pdf/2305.00813
  3. https://arxiv.org/pdf/2006.13155
  4. https://eugeneasahara.com/2024/08/04/does-prolog-have-a-place-in-the-llm-era/
  5. https://skirmilitor.medium.com/logical-neural-networks-31498d1aa9be


Published via Towards AI
