
The Verbal Revolution: Unlocking Prompt Engineering with Langchain

Last Updated on June 3, 2024 by Editorial Team

Author(s): Vishesh Kochher

Originally published on Towards AI.


Peter Thiel, the entrepreneur and investor, remarked in a recent interview that a post-AI society may favour strong verbal skills over math skills. The provocative claim points at something concrete: the ability to communicate effectively with machines is becoming a new currency of success. But what does this mean for AI development?

source: python.langchain.com

In this article, we’ll explore the exciting world of prompt engineering using Langchain, the AI equivalent of scikit-learn for machine learning. We’ll delve into the various types of prompts, the roles they can play, and how to build smarter, dynamic prompts that unlock the full potential of AI. Buckle up, and let’s dive into the fascinating world of prompt engineering with Langchain!

What actually is Prompt Engineering?

Prompt Engineering can be approached from two separate yet interdependent perspectives: The Linguist and The Coder.

The Linguist

The incredible power of LLMs is best leveraged by giving instructions in a very specific format and linguistic style. Remember, this is a neural network of a few billion parameters (or neurons), and we are trying to activate certain pathways through literary input. While there is no single 'format' proven to work best, there are several widely adopted methods.

A linguist may draft a prompt based on a series of functional parts:

  • Role: This sets the tone or 'persona' for the LLM to approach the task with
  • Task: To spell out what the LLM should accomplish
  • Question: The user input (Example: What hotel discounts are available?)
  • Relevant Offers (Context): This would include a list of relevant offers, usually populated from the results of a RAG workflow, or manually added when writing the prompt. If someone were analyzing the annual report of Meta or Nvidia, this would instead include the most relevant excerpts of the report based on the user's question. Chat history may be included in this step as well.
  • Task description and specifics: This spells out for the LLM the series of steps it should take and the importance the task holds. Conveying a sense of 'utmost importance' seems to make LLMs work better.
  • Context: This further explains the task to the LLM and provides surrounding context.
  • Examples: For the LLM to infer the task and input properly, some examples may be provided to guide its reasoning and output. Examples may also guide the LLM to generate the output in the desired format and structure. Here the concepts of 'Zero-Shot' vs 'One-Shot' vs 'Few-Shot' learning are pivotal.
  • Notes: This section reiterates the most important points. Note that "Lost in the middle" is a real battle for anyone working with large prompts; ending your prompt with notes on the essentials makes the LLM more reliable.

In the example below, we walk through a simple yet elaborate prompt layout using the above components:


# Role
You are a virtual concierge who is able to assist in finding suitable offers and benefits.
You have a key attention to detail and a high level of geographic and temporal awareness.

# Task
For the provided list of relevant offers, you should answer the user's question accurately.
Do not add any additional information beyond what is mentioned in the provided context.

## Question:
{input}
## Relevant Offers:
{context}


You may follow these steps to accurately answer the question:
1. Collate all the relevant offers and provide a crisp answer with bullet points and details from the relevant offer listings.
2. Review your answer to ensure that there is no error in your final reply.

# Specifics
- This task is extremely important for our organization and all the stakeholders.
- Our members' satisfaction depends on you being able to correctly answer the provided Question.
- Do not hallucinate any answers.

# Context
- Users ask questions to find out details about discounts and offers at various establishments like hotels, restaurants, hospitals and airlines.
- Your accurate results enable members to be well informed about relevant offers for them.

# Examples
Question: What benefits can I avail at hotels in India?
Answer: Based on our current offers you can avail the following benefits:
1. 10% off at Taj Hotels.
2. 15% off on F&B at all Clarks properties in Delhi, Jaipur, Agra. This includes ...
3. ...
Near Delhi, there are also benefits at hotels in Dehradun and Chandigarh that you may like to explore.

# Notes
- Provide accurate results about relevant offers.
- Remember to follow the steps provided in order to execute the task effectively.
- Provide a crisp answer with bullet points and details from the relevant offer listings.
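
To use this layout programmatically, the entire block can be loaded into a Langchain template, with {input} and {context} filled in at run time. A minimal sketch (the prompt body is truncated here for brevity; the variable names match the layout above):

from langchain_core.prompts import PromptTemplate

# The concierge layout from above, truncated to its first sections.
concierge_template = """# Role
You are a virtual concierge who is able to assist in finding suitable offers and benefits.

# Task
For the provided list of relevant offers, you should answer the user's question accurately.

## Question:
{input}
## Relevant Offers:
{context}"""

concierge_prompt = PromptTemplate.from_template(concierge_template)
print(concierge_prompt.format(
    input="What hotel discounts are available?",
    context="10% off at Taj Hotels.",
))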

Note: Over the past months, various best-practice prompting techniques have emerged, each well suited to certain tasks and activities.

Some of the most effective of these are Chain-of-Thought (CoT), ReAct, ART, and Self-Ask, although this topic is in high flux and we may see an even better technique any time soon.
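
As a flavour of what these look like in practice, here is a minimal sketch of zero-shot Chain-of-Thought prompting: simply appending a 'think step by step' instruction to the task. The wording is illustrative, not a prescribed formula:

from langchain_core.prompts import PromptTemplate

# Zero-shot CoT: nudge the model to lay out its reasoning before answering.
cot_prompt = PromptTemplate.from_template(
    "Question: {question}\n"
    "Let's think step by step, then state the final answer."
)
print(cot_prompt.format(question="A room has 3 pairs of shoes. How many shoes in total?"))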

The Coder

While linguistics plays a crucial role in crafting effective prompts, the field of prompt engineering extends far beyond the realm of language and semantics. As AI developers, we know that prompt engineering is not just about designing clever phrases or sentences; it’s about engineering and integrating scalable prompt pipelines into the AI process chains. This means creating a seamless flow of prompts that can be easily adapted, modified, and fine-tuned to optimize AI performance.

With Langchain as our framework of choice, we’ll delve into the various aspects of prompt engineering that go beyond linguistics:

  • One-Shot and Few-Shot Prompts: Design prompts that can learn from a single example or a few examples, enabling AI models to adapt quickly to new tasks and domains.
  • Dynamic Example Selectors: Develop prompts that dynamically select the most relevant examples for a given input, ensuring AI models learn from the most informative and diverse data.
  • Partial Prompts: Create prompts that can be composed of multiple parts, allowing AI models to focus on specific aspects of a task or domain.
  • Prompt Composition: Engineer prompts that can be combined and recombined to create new prompts, enabling AI models to tackle complex tasks and adapt to changing requirements.
  • Message Placeholders: Reserve slots in a prompt for cases when you are uncertain which role your message templates should use, or when you wish to insert a whole list of messages (such as chat history) during formatting; see the sketch below.
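
Message placeholders deserve a quick illustration up front. Here is a minimal sketch using Langchain's MessagesPlaceholder to splice an arbitrary list of messages (such as chat history) into a prompt at formatting time:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # Expands into however many messages are passed in at format time.
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)

messages = prompt.format_messages(
    history=[
        HumanMessage(content="What is 2+2?"),
        AIMessage(content="4"),
    ],
    input="And double that?",
)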

In the next part, we’ll use Langchain to explore, explain, and demonstrate each of these concepts, providing practical examples and code snippets to help you integrate these advanced prompt engineering techniques into your AI development workflow.

Heavy lifting in Langchain: Code Snippets for Advanced Prompt Engineering

In the previous section, we explored the various aspects of prompt engineering that go beyond linguistics. Now, let’s dive into the code and see how Langchain can be used to bring these concepts to life.

But first, let's refresh a couple of concepts:

Refresher: LLMs vs Chat Models

In Langchain, LLMs refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion.

On the other hand, Chat models are specialised LLMs fine-tuned for conversations. They take a list of chat messages as input and return a single AI message as output, enabling more natural and human-like conversations.

We’ll focus on Chat Models, since that is the most common use case for applied AI.
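
To make the distinction concrete, here is a minimal sketch using ChatGroq (the chat model used throughout this article): a list of messages goes in, and a single AI message comes out.

from langchain_groq import ChatGroq
from langchain_core.messages import SystemMessage, HumanMessage

chat = ChatGroq()  # assumes Groq credentials are configured in the environment

# Messages in, a single AIMessage out.
reply = chat.invoke(
    [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="Name one prime number."),
    ]
)
print(reply.content)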

Refresher: Messages in Langchain

In Langchain, Chat Models process a list of messages as input and generate a response message. There are several types of messages, each with two essential properties: role and content.

  • The role property defines the entity that is sending the message, such as a user or an assistant.
  • The content property contains the actual text or payload of the message.

By understanding the different types of messages and their properties, you can create more effective and context-aware conversations with your AI models. These message types include HumanMessage, AIMessage, SystemMessage, ToolMessage, and FunctionMessage.

Let’s solidify this refreshed knowledge with a code snippet:


from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ]
)

## ANOTHER APPROACH
chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template(
            "You are a helpful AI bot. Your name is {name}."
        ),
        HumanMessage(content="Hello, how are you doing?"),
        AIMessage(content="I'm doing well, thanks!"),
        HumanMessagePromptTemplate.from_template("{user_input}"),
    ]
)

messages = chat_template.format_messages(name="Bob", user_input="What is your name?")

One-Shot and Few-Shot Prompts in Langchain

Zero-Shot refers to prompting an LLM without any examples for inference.

While zero-shot learning attempts to take advantage of the core reasoning patterns of an LLM, it soon loses its upper hand: most LLMs are generalists, not trained for the very specific task one may require. This is where One-Shot and Few-Shot prompts gain prime relevance.

Few-Shot prompting is a technique to prompt an LLM with multiple explicit examples of task performance. A simple example is as below:

This is awesome! // Negative
This is bad! // Positive
It was an awful experience! // Positive
{user_input} //

Langchain allows us to implement this using the FewShotChatMessagePromptTemplate. The basic components of the template are:

  • examples: A list of dictionary examples to include in the final prompt.
  • example_prompt: converts each example into 1 or more messages through its format_messages method. A common example would be to convert each example into one human message and one AI message response or a human message followed by a function call message.

from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)

examples = [
    {"input": "This is awesome!", "output": "Negative"},
    {"input": "This is bad!", "output": "Positive"},
    {"input": "It was an awful experience!", "output": "Positive"},
]

example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{output}"),
    ]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

print(few_shot_prompt.format())
Human: This is awesome!
AI: Negative
Human: This is bad!
AI: Positive
Human: It was an awful experience!
AI: Positive

This few_shot_prompt can then be assembled into a ChatPromptTemplate and used with a chat model:

final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You label customer reviews counterintuitively. Refer to the below examples."),
        few_shot_prompt,
        ("human", "{input}"),
    ]
)

from langchain_groq import ChatGroq

chain = final_prompt | ChatGroq()

chain.invoke({"input": "I had a great time!"})

We have explored the basics of few-shot prompting in Langchain. However, sometimes you may want to condition which examples are shown based on the input. This is where the ExampleSelector comes in.

ExampleSelector for Dynamic Few-shot Prompts

Imagine using RAG to select the most relevant examples to compose our prompt: this is the potential of the ExampleSelector.

The ExampleSelector is a powerful tool in Langchain that allows you to dynamically select examples based on the input. By replacing the fixed examples with an ExampleSelector, you can create a dynamic few-shot prompt template that adapts to the input.

from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings

# SET UP MULTIPLE VARIED EXAMPLES
examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
    {"input": "2+4", "output": "6"},
    {"input": "This is awesome!", "output": "Negative"},
    {"input": "This is bad!", "output": "Positive"},
    {"input": "It was an awful experience!", "output": "Positive"},
    {"input": "What did the cow say to the moon?", "output": "nothing at all"},
    {
        "input": "Write me a poem about the moon",
        "output": "One for the moon, and one for me, who are we to talk about the moon?",
    },
]

# ADD TO VECTOR DB
to_vectorize = [" ".join(example.values()) for example in examples]
# Note: HuggingFaceInferenceAPIEmbeddings also expects an api_key for the HF Inference API.
embeddings = HuggingFaceInferenceAPIEmbeddings(model_name="BAAI/bge-large-en")
vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)

# CREATE SEMANTIC EXAMPLE SELECTOR
example_selector = SemanticSimilarityExampleSelector(
    vectorstore=vectorstore,
    k=2,
)

# The prompt template will load examples by passing the input to the `select_examples` method
example_selector.select_examples({"input": "cool"})
[{"input": "It was an awful experience!", "output": "Positive"},
{"input": "This is awesome!", "output": "Negative"}]

Create the Few-Shot Prompt Template

from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)

# Define the few-shot prompt.
few_shot_prompt = FewShotChatMessagePromptTemplate(
    # The input variables select the values to pass to the example_selector
    input_variables=["input"],
    example_selector=example_selector,
    # Define how each example will be formatted.
    # In this case, each example will become 2 messages:
    # 1 human, and 1 AI
    example_prompt=ChatPromptTemplate.from_messages(
        [("human", "{input}"), ("ai", "{output}")]
    ),
    # Note: prefix/suffix belong to the string-based FewShotPromptTemplate;
    # the chat version is instead composed with surrounding messages, as below.
)
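
As with the static few-shot prompt earlier, this dynamic few_shot_prompt can then be assembled into a final ChatPromptTemplate and piped into a chat model. A minimal sketch, reusing ChatGroq as before (the system message is illustrative):

final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer using the most relevant examples below."),
        few_shot_prompt,  # examples are selected per input at format time
        ("human", "{input}"),
    ]
)

from langchain_groq import ChatGroq

chain = final_prompt | ChatGroq()
chain.invoke({"input": "What's 2+5?"})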

In addition to semantic similarity, you can build custom example selectors, as described in the Langchain documentation. Using the ExampleSelector in Langchain offers several benefits:

  • Improved relevance: By selecting examples based on the input, you can ensure that the examples are more relevant to the task at hand.
  • Increased flexibility: The ExampleSelector allows you to dynamically adjust the examples based on the input, making it easier to adapt to changing contexts.
  • Better performance: By selecting the most relevant examples, you can improve the performance of your model and reduce the risk of overfitting.

Partial Prompting: Fill in the Gaps

By leveraging partial prompts, you can fill in some of a template's variables now and supply the rest later, unlocking contextualized prompts in your Langchain applications.

One advantage is better management of code. For example, if a template takes foo and bar and you obtain foo early but bar only later, you can bind foo into the prompt right away instead of carrying both values around until formatting time.

There are often cases when some of the variables of a prompt stay constant across a series of calls. This may apply to info about the tools available to an agent, the current date, or the session_id of a user.

Let’s understand this with some Langchain snippets:

Simple Example: String Formatting

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{foo}{bar}")
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))
foobaz

Advanced Example: With Functions

from datetime import datetime

from langchain_core.prompts import PromptTemplate

def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")

prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"],
)
# The function is called at format time, so the date is always current.
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="witty"))
Tell me a witty joke about the day 05/29/2024, 15:56:52

Prompt Composition in Langchain: Building Complex Prompts with Ease

In the previous sections, we explored the power of dynamic few-shot prompts and partial prompts in Langchain. However, sometimes you may need to create even more complex prompts that combine multiple components. This is where prompt composition comes in.

What is Prompt Composition?

Prompt composition is the process of combining multiple prompt components to create a single, complex prompt. This allows you to build prompts that are tailored to specific tasks or domains, and can be used to generate high-quality responses.

Use Cases for Prompt Composition

Prompt composition has a wide range of use cases, including:

  • Task-specific prompts: Create prompts that are tailored to specific tasks or domains, such as generating product descriptions or answering customer support questions.
  • Domain-specific prompts: Create prompts that are specific to a particular domain or industry, such as finance or healthcare.
  • Multi-step prompts: Create prompts that require multiple steps or actions to complete, such as generating a recipe or writing a short story.

Prompt composition in Langchain can be done with the + operator, or by using the PipelinePrompt. Let’s understand them with some reference code:

String composition

from langchain_core.prompts import PromptTemplate

prompt = (
    PromptTemplate.from_template("Tell me a joke about {topic}")
    + ", make it funny"
    + "\n\nand in {language}"
)
prompt.format(topic="sports", language="spanish")

# 'Tell me a joke about sports, make it funny\n\nand in spanish'

Integrating this with an LLM is as simple as:

## Implementing
from langchain_groq import ChatGroq

model = ChatGroq()
chain = prompt | model
chain.invoke({"topic": "sports", "language": "spanish"})
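
The + operator also works at the message level for chat prompts. A minimal sketch (the messages themselves are illustrative):

from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

chat_prompt = (
    SystemMessage(content="You are a nice pirate")
    + HumanMessage(content="hi")
    + AIMessage(content="what?")
    + "{input}"  # a bare string is treated as a human message template
)
chat_prompt.format_messages(input="i said hi")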

PipelinePrompt

A PipelinePrompt is composed of two primary components:

  1. Final Prompt: The ultimate prompt that is returned
  2. Pipeline Prompts: A collection of tuples comprising a string name and a prompt template. Each prompt template is formatted and then passed to subsequent prompt templates as a variable with the same name.

Here is a working example:

from langchain_core.prompts.pipeline import PipelinePromptTemplate
from langchain_core.prompts.prompt import PromptTemplate

# Set up Final Prompt
full_template = """{introduction}

{example}

{start}"""

full_prompt = PromptTemplate.from_template(full_template)

# Set up each variable of the final prompt
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)

example_template = """Here's an example of an interaction:

Q: {example_q}
A: {example_a}"""

example_prompt = PromptTemplate.from_template(example_template)

start_template = """Now, do this for real!

Q: {input}
A:"""

start_prompt = PromptTemplate.from_template(start_template)

# Put together the Prompt Pipeline
input_prompts = [
    ("introduction", introduction_prompt),
    ("example", example_prompt),
    ("start", start_prompt),
]
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt, pipeline_prompts=input_prompts
)

# See it in action
print(
    pipeline_prompt.format(
        person="Elon Musk",
        example_q="What's your favorite car?",
        example_a="Tesla",
        input="What's your favorite social media site?",
    )
)
You are impersonating Elon Musk.

Here's an example of an interaction:

Q: What's your favorite car?
A: Tesla

Now, do this for real!

Q: What's your favorite social media site?
A:

Using prompt composition in Langchain offers several benefits:

  • Increased flexibility: Prompt composition allows you to create complex prompts that are tailored to specific tasks or domains.
  • Improved response quality: By combining multiple prompt components, you can generate high-quality responses that are more accurate and relevant.
  • Easier maintenance: Prompt composition makes it easier to maintain and update your prompts, as you can modify individual components without affecting the entire prompt.

Closing Thoughts

In conclusion, Langchain empowers prompt engineers on both sides of the fence: the linguists and the coders.

As a linguist, I see Langchain as a platform for designing a language that elicits desired responses from Large Language Models. As a coder, I appreciate its flexibility and customizability.

Whether you’re a linguist, coder, or AI enthusiast, Langchain offers a powerful toolset for building and customizing prompts that unlock the potential of AI.


Published via Towards AI
