
Maximizing LangChain Efficiency: Agents and ReAct Method Review.

Last Updated on July 17, 2023 by Editorial Team

Author(s): Iva Vrtaric

Originally published on Towards AI.

Image by Lexica AI

Human intelligence is unique in its ability to combine task-oriented actions with verbal reasoning. This allows us to learn new tasks quickly and perform robust decision-making, even in unfamiliar situations. Recent results have shown the potential for combining verbal reasoning with interactive decision-making in autonomous systems. Large language models (LLMs) can perform “chain-of-thought” reasoning, but this method is limited by its lack of grounding in the external world. ReAct is a paradigm that combines reasoning and acting with LLMs to solve diverse language reasoning and decision-making tasks, allowing models to interact with external environments while maintaining high-level plans for action.

Large language models (LLMs) excel at understanding language and making decisions, yet their reasoning abilities and their capacity for executing actions are usually studied separately.

The authors developed a new approach called ReAct and tested it on various tasks. For example, the model performed better at answering questions and verifying facts by using a Wikipedia API and created more understandable solutions than other models. Additionally, in interactive tasks, ReAct outperformed other learning methods and needed fewer examples to learn from.

Picture Agents as a game-changing idea that empowers large language models (LLMs) to use Tools in a way that mirrors human interaction. Consider how we rely on calculators to solve math problems or perform a Google search to find accurate information. LangChain’s Agents essentially provide the ‘reasoning’ behind these actions, deciding whether to involve multiple Tools, just one, or none at all in the process.

Let’s apply this ReAct paradigm with LangChain in a few combinations and capture the results.

DocstoreExplorer-Agent interacts with Wikipedia.

This agent uses DocstoreExplorer to interact with a document storage system, requiring two specific tools: a ‘Search’ tool and a ‘Lookup’ tool. The ‘Search’ tool locates a document, whereas the ‘Lookup’ tool retrieves a term from the most recently discovered document. This agent’s functionality closely follows the original ReAct research paper, particularly its Wikipedia use case.

Let’s observe this Agent on a concrete Wikipedia example:

Install the requirements:
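The install cell itself didn’t survive into this version of the post; for this Wikipedia example, the dependencies would plausibly be the following (the exact package list is my assumption):

```shell
pip install langchain openai wikipedia
```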

In the code example provided, a set of tools is introduced for use during the Action phase of the task. The primary functionality showcased here is the ability to search and perform lookups within Wikipedia. To enable this, the ‘DocstoreExplorer’ from LangChain is employed, which lets us track the Agent’s ‘Observations’ while it searches Wikipedia.

Agents generate language-based thoughts, called reasoning traces, to help them process contextual information and make better decisions. These thoughts can include decomposing task goals, extracting important information, or adjusting action plans.
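Schematically (this is an illustrative template, not the notebook’s actual run), a ReAct trace in LangChain’s verbose output interleaves these reasoning traces with actions like so:

```
Thought: I need to search for the relevant page first.
Action: Search[<entity>]
Observation: <summary of the page found>
Thought: The page mentions the term I need; I should look it up.
Action: Lookup[<term>]
Observation: (Result 1/1) <sentence containing the term>
Thought: I now have enough information to answer.
Action: Finish[<answer>]
```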

Unfortunately, despite the ‘reasoning traces’, I was unable to prompt the model to decompose the task or extract the essential information I was after. As observed earlier, what I received were essentially scraped excerpts from Wikipedia that the model deemed valuable for ‘Lookup’ purposes. This suggests that if I equipped the Agent with more adequate Tools, I could build a more precise research mechanism and follow the chain of thought within the Agent.

If you’re interested in understanding how the ReAct Agent preprocesses and retrieves information, I highly recommend diving into the following article:

ReAct (Reason+Act) prompting in OpenAI GPT and LangChain


tsmatz.wordpress.com

Conversational Agent equipped with the WolframAlpha tool.

During my exploration, I tried out different types of Agents with Google search capabilities. My idea was to add as many tools as possible and see how the Agent would pick the right one from the variety of options. But I kept running into a stubborn error; things only worked smoothly once I included at least one chain among the tools. While there may be other ways to work around this, this approach turned out to be quite effective.

After experimenting with a few combinations, I managed to achieve satisfying results by integrating Conversational Agents with Wolfram Alpha’s computational intelligence. This powerful synergy can be conveniently accessed through an API, streamlining the process.

To get started with using Wolfram Alpha in your conversational agent, you will need to set up a developer account on their website. After signing up, create an application and retrieve your unique APP ID. Then, you can install the Wolfram Alpha library by using the pip command. Finally, to ensure proper functionality, you should save your APP ID as an environment variable.

Wolfram|Alpha: Making the world's knowledge computable


www.wolframalpha.com

In the next stage, I incorporated the additional tool into the system and set up the Agent, leaving the Memory parameter unspecified.

Which gave me the following output:

Conclusion

The relationship between reasoning and action is what allows the model to make smart decisions. It can think about what it wants to do (reason guiding action) and then make those plans a reality by interacting with the world around it (action informing reason). This dynamic process helps the model constantly improve and make better decisions over time.

The integration of reasoning and action may also help solve mathematical problems that are not well-suited for large language models but can be effectively handled by WolframAlpha. By combining the strengths of both approaches, we may be able to achieve accurate and contextually relevant responses to a wider range of problems.

Additional Resources

Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT


writings.stephenwolfram.com

ReAct: Synergizing Reasoning and Acting in Language Models


ai.googleblog.com

You can also find several other research papers describing the benefits of using external knowledge in LLM reasoning (see, e.g., the LLM-Augmenter system).

The Code

langchain/langchain-dockstore.ipynb at main · idontcalculate/langchain


github.com


Published via Towards AI
