
Simulating Conversations with ChatGPT

Last Updated on November 5, 2023 by Editorial Team

Author(s): Alex Waterston

Originally published on Towards AI.

The cast of a famous TV show sit around a table discussing sandwiches. (MidJourney. Prompt by the author)

I’ve been playing with the OpenAI API recently. The API is just a way of using ChatGPT from within another program instead of through the web interface. It means you get a little more fine-grained control over the things you’re asking it to do for you.

Usually, when you use ChatGPT through the web, it’s like you’re having a conversation with a helpful virtual assistant; you ask it to do things for you, and it does them. You request information and it provides you with a (possibly) correct answer. You can get a bit fancier and get it to pretend to be things that it isn’t in order to have a bit more fun, but you’re limited to that back-and-forth conversation between you and the computer.

It turns out, however, that with a bit of tinkering and a bit of code, you can do some pretty fun things, including getting ChatGPT to have conversations with itself. I’m going to try to keep this explanation clear and simple because there is some really interesting stuff to explore, whether you’re a technical expert or not.

So buckle in and let’s explore how you get ChatGPT to have a conversation with itself…

Step 1: Conversations

The first thing to understand is that ChatGPT doesn’t really remember anything. When you have a conversation with it through the chat interface, some clever shenanigans in the background resend your entire conversation so far with every single request. ChatGPT then parses the whole conversation again and comes up with a new response. When we use the API, we have to tell it manually what has happened before, or each time we ask a question it will have completely forgotten the context. This sounds like a hassle, but we can turn it to our advantage by manipulating the conversation history!
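To make that statelessness concrete, here is a tiny simulation. The `fake_model` function is a hypothetical stand-in for the real API call; the point is that the caller owns the memory and the full history travels with every request:

```python
# Sketch: ChatGPT is stateless, so the caller has to keep the memory.
# fake_model is a hypothetical stand-in for the real API call.
def fake_model(messages):
    # A real call would send ALL of `messages` to OpenAI every time.
    return f"(reply to {len(messages)} prior messages)"

history = [{"role": "user", "content": "Talk about fruit"}]
for _ in range(3):
    reply = fake_model(history)  # the whole history goes up each turn
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": "Tell me more"})

# The history grows by two messages per turn; nothing is remembered server-side.
print(len(history))  # 7
```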

The OpenAI Python API is extremely simple to use. We need to request a chat completion. A chat completion is exactly what it sounds like: it’s basically asking ChatGPT, “When I say <prompt>, what do you say?”. We do that like this:

import os
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"

chat_completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": starting_prompt}]
)
conversation_output = chat_completion['choices'][0]['message']['content']

print(conversation_output)

Now we just need to do this over and over again in a loop in order to have a conversation. The conversation history we want to include each time we call the API goes in the messages bit of the call, so we need to pass in the history of our conversation so far. You can see that each message in the list has a “role” attribute. There are four roles. Right now we are interested in two of them: “user” and “assistant”. We’ve talked about using ChatGPT being like a conversation between a user and a helpful assistant, and this is where that really becomes clear. When we have a conversation with ChatGPT through the web, a list of “user” and “assistant” messages is passed along so that ChatGPT knows what has happened before. When we use the API, we can fiddle with this a bit so that ChatGPT talks to itself:

import os
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"
prior_messages = [{"role": "user", "content": starting_prompt}]

for _ in range(5):
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=prior_messages
    )
    conversation_output = chat_completion['choices'][0]['message']['content']
    print(conversation_output + "\n")
    prior_messages += [{"role": "user", "content": conversation_output}]

Step 2: Characters

The second thing you need to know is that when we ask ChatGPT a question, we can provide a lot of additional contextual information alongside our statement. Not only can we put the conversation history into that context, but we can also tell ChatGPT to pretend to be someone.

Armed with these two pieces of information, I got ChatGPT to start having conversations with itself. I created two characters that I named and then briefly described. Then, I simulated a conversation by asking ChatGPT to pretend to be each character and to continue the conversation.

So when my character Andrew is ready to say something, the prompt we send looks like this:

"""
You are Andrew.
You are an extremely enthusiastic innovation consultant from Shetland.
You love to rap sometimes. You really, really love Warhammer.
You have a lot of hair. You are creative and thoughtful and inclusive.

Reply to the conversation in character.
"""

Then we go back and forth between our characters, including all the prior conversations, until we either get bored or we run out of credits for the API.

import os
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"
prior_messages = [{"role": "user", "content": starting_prompt}]

andrew = """
You are Andrew.
You are an extremely enthusiastic innovation consultant from Shetland.
You love to rap sometimes. You really, really love Warhammer.
You have a lot of hair. You are creative and thoughtful and inclusive.

Reply to the conversation in character.
"""

alex = """
You are Alex.
You are an extremely excitable and very creative innovation consultant.
You love stories and games.
You like to tell jokes. You have a lot of unruly curly hair.
You used to be a game developer. You once worked on Crash Bandicoot.
You love to cook. You really dislike sports.

Reply to the conversation in character.
"""

characters = [alex, andrew]

for _ in range(5):
    for character in characters:
        chat_completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=prior_messages + [{"role": "user", "content": character}]
        )

        conversation_output = chat_completion['choices'][0]['message']['content']
        print(conversation_output + "\n")
        prior_messages += [{"role": "user", "content": conversation_output}]

This works pretty well and, in fact, we can extend it out to have more than two characters quite easily. If we randomly switch between characters, the conversation can look quite natural. At this point, I bit the bullet and started pulling everything into classes. (NOTE: I’m an aged developer. I haven’t done any real dev work for many years. The best I get these days is Advent of Code each Christmas. So please bear with my code. I am not a pythonista. I’m an old-school C++ and Objective C dev at heart.)
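The random switching mentioned above is easy to sketch. Here is one possible approach (the `pick_next_speaker` helper is hypothetical, not from the project) that also avoids the same character speaking twice in a row:

```python
import random

def pick_next_speaker(characters, last_speaker=None):
    # Pick the next character at random, but never the one who just spoke.
    candidates = [c for c in characters if c != last_speaker]
    return random.choice(candidates)

characters = ["Alex", "Andrew", "Monica"]
last = None
order = []
for _ in range(6):
    last = pick_next_speaker(characters, last)
    order.append(last)

# No character ever appears twice in a row.
print(order)
```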

import os
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"
prior_messages = [{"role": "user", "content": starting_prompt}]

class Character:
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def getContent(self):
        return [{"role": "user", "name": self.name,
                 "content": self.description + " Reply concisely, in a single paragraph, to the conversation in character."}]

andrew = Character(name="Andrew", description="""You are a character called Andrew.
You are an extremely enthusiastic innovation consultant from Shetland.
You love to rap sometimes. You really, really love Warhammer.
You have a lot of hair. You are creative and thoughtful and inclusive."""
)

alex = Character(name="Alex", description="""You are a character called Alex.
You are an extremely excitable and very creative innovation consultant.
You love stories and games.
You like to tell jokes. You have a lot of unruly curly hair.
You used to be a game developer. You once worked on Crash Bandicoot.
You love to cook. You really dislike sports."""
)

characters = [alex, andrew]

for _ in range(5):
    for character in characters:
        chat_completion = openai.ChatCompletion.create(
            #model="gpt-3.5-turbo",
            model="gpt-4",
            messages=prior_messages + character.getContent()
        )

        conversation_output = chat_completion['choices'][0]['message']['content']
        print(f"{character.name}: {conversation_output}")
        print("---")
        prior_messages += [{"role": "user", "name": character.name, "content": conversation_output}]

We can, however, go so much further.

Step 3: Hopes and Dreams

The next thing we need to know is whether we can convince ChatGPT to return its answers in a specific format, and whether we can use that to include additional information in the replies. If I include an instruction in the prompt to structure replies in a certain way, then it will do its best to comply. Here, I ask it to reply in JSON, a commonly used data format that lets us include a lot of additional bits of information in an easily readable and accessible way:

"""
Output the response to the prompt above as json.
The output should be a tuple where the tuple is in the form of:
{
"name" : <Name>,
"utterance" : <Utterance>
}
"""

Up to this point, we’ve been using the messages attribute of the API call to store the conversation history. What I’ve found, however, is that you can get better results by including the conversation history in the main prompt instead. It also gives us more flexibility in what we include in the history. Much more on this below, but for now we’ll just make a small change when we go to include the request to reformat the output:
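One nice side effect of keeping the history as a plain string is that trimming it to fit the model’s context window becomes trivial. A minimal sketch (the 2,000-character budget is an arbitrary illustration, not a model limit, and `trim_history` is a hypothetical helper):

```python
def trim_history(history: str, max_chars: int = 2000) -> str:
    # Keep only the tail of the conversation, cut at a line boundary
    # so we never hand the model half an utterance.
    if len(history) <= max_chars:
        return history
    tail = history[-max_chars:]
    # Drop the (possibly truncated) first line of the tail.
    return tail.split("\n", 1)[-1]

history = "\n".join(f"Alex: line number {i}" for i in range(200))
trimmed = trim_history(history)
print(len(trimmed) <= 2000)  # True
```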

import os
import json
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"
prior_messages = ""

class Character:
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def getContent(self, prior_messages):
        return [{"role": "user", "name": self.name, "content":
f"""
Context:
{self.description}

Conversation so far:
{prior_messages}

Reply concisely, in a single paragraph, to the conversation in character.
Output the response to the prompt above as json.
The output should be a tuple where the tuple is in the form of:
{{
    "name" : <Name>,
    "utterance" : <Utterance>
}}
"""
        }]

andrew = Character(name="Andrew", description="""You are a character called Andrew.
You are an extremely enthusiastic innovation consultant from Shetland.
You love to rap sometimes. You really, really love Warhammer.
You have a lot of hair. You are creative and thoughtful and inclusive."""
)

alex = Character(name="Alex", description="""You are a character called Alex.
You are an extremely excitable and very creative innovation consultant.
You love stories and games.
You like to tell jokes. You have a lot of unruly curly hair.
You used to be a game developer. You once worked on Crash Bandicoot.
You love to cook. You really dislike sports."""
)

characters = [alex, andrew]

for _ in range(5):
    for character in characters:
        chat_completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=character.getContent(prior_messages)
        )

        # json.loads is much safer than eval for parsing model output
        conversation_output = json.loads(chat_completion['choices'][0]['message']['content'])
        print(f"{conversation_output['name']}: {conversation_output['utterance']}")
        print("---")
        prior_messages += f"{conversation_output['name']}: {conversation_output['utterance']}\n"

The magical thing about asking our output to be formatted as JSON is that we can extend it out with whatever additional data we think ChatGPT could dream up for us. The most interesting thing we could add to our conversation engine is an understanding of each character’s thoughts and feelings as they speak.

Putting it all together, our prompt might look like this:

"""
Context: You are a character called Alex.
You are an extremely excitable and very creative innovation consultant.
You love stories and games.
You like to tell jokes. You have a lot of unruly curly hair.
You used to be a game developer. You once worked on Crash Bandicoot.
You love to cook. You really dislike sports.

You are discussing: The best fruit.
Current feeling: Happy.
Last thought: Fruit are really delicious. Especially apples.
Keep your answers concise and conversational.
Reply to the conversation in character.

{prior_conversation}

Output the response to the prompt above as json.
The output should be a tuple where the tuple is in the form of
{
"name" : <Name>,
"utterance" : <Utterance>,
"feeling" : <Feeling>,
"thoughts" : <Thoughts>
}
"""

Finally, adding an example output to the prompt seems to help the AI to nail the format every time. So we append:

"""
Example output json:
{
"name" : "Alex",
"utterance" : "Hello. I've been working down the mines today.",
"feeling" : "Bored",
"thoughts" : "Working in the mines is awful. I must find a better job"
}
"""

So, putting all of that together, we get this:

import os
import json
import openai

openai.api_key = "YOUR OPENAI API KEY"

starting_prompt = "Talk about fruit"
prior_messages = ""

class Character:
    def __init__(self, name, description, topic):
        self.name = name
        self.description = description
        self.feeling = "Neutral."
        self.thought = "This is going to be a fun conversation."
        self.topic = topic

    def getContent(self, prior_messages):
        return [{"role": "user", "name": self.name, "content":
f"""
Context:
{self.description}

You are discussing: {self.topic}.
Current feeling: {self.feeling}.
Last thought: {self.thought}.
Keep your answers concise and conversational.
Reply to the conversation in character.

Conversation so far:
{prior_messages}

Reply concisely, in a single paragraph, to the conversation in character.
Output the response to the prompt above as json.
The output should be a tuple where the tuple is in the form of
{{
    "name" : <Name>,
    "utterance" : <Utterance>,
    "feeling" : <Feeling>,
    "thoughts" : <Thoughts>
}}
Example output json:
{{
    "name" : "Alex",
    "utterance" : "Hello. I've been working down the mines today.",
    "feeling" : "Bored",
    "thoughts" : "Working in the mines is awful. I must find a better job"
}}
"""
        }]

andrew = Character(name="Andrew", description="""You are a character called Andrew.
You are an extremely enthusiastic innovation consultant from Shetland.
You love to rap sometimes. You really, really love Warhammer.
You have a lot of hair. You are creative and thoughtful and inclusive.""",
topic=starting_prompt)

alex = Character(name="Alex", description="""You are a character called Alex.
You are an extremely excitable and very creative innovation consultant.
You love stories and games.
You like to tell jokes. You have a lot of unruly curly hair.
You used to be a game developer. You once worked on Crash Bandicoot.
You love to cook. You really dislike sports.""",
topic=starting_prompt)

characters = [alex, andrew]

for _ in range(5):
    for character in characters:
        chat_completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            #model="gpt-4",
            messages=character.getContent(prior_messages)
        )

        # json.loads is much safer than eval for parsing model output
        conversation_output = json.loads(chat_completion['choices'][0]['message']['content'])
        print(f"{conversation_output['name']}: {conversation_output['utterance']} ({conversation_output['thoughts']})")
        print("---")
        prior_messages += f"{conversation_output['name']}: {conversation_output['utterance']}\n"

The Results

I struggled to think of the right example to put in this article and then, in the middle of the night, I had an epiphany. Here you go… a lost episode of Friends.

Starting prompt: You are having a heated debate. The first person to speak should choose a topic.

Joey : Hey, let’s talk about the best sandwich in the world. I vote for meatball sub! (I hope they agree with me. A meatball sub is just so delicious.)

Ross : Joey, Joey! How can you forget about my turkey sandwich with a moist maker? That’s the best sandwich in the world! (How could Joey forget about my favorite sandwich? It’s clearly the best one out there.)

Chandler : Oh, yes, because nothing says ‘culinary delight’ like a sandwich that’s been sitting in a fridge all day. But sure, Ross, your turkey sandwich is the best. (I’m enjoying this light-hearted debate about sandwiches. It’s a welcome distraction from work.)

Phoebe : Guys, guys! We’re forgetting about the spiritual essence of the sandwich. It’s not about the meat or bread, it’s about the love and energy you put into making it. (I need to teach them about the spiritual side of things.)

Joey : Look, I get it. Ross, your turkey sandwich is good and Phoebe, you’re right about the love and energy part. But come on, a meatball sub? It’s got everything! Meatballs, cheese, bread…it’s like a full meal in one! (I really hope they see my point. A meatball sub is just so satisfying.)

Phoebe : Joey, a meatball sub is indeed a full meal, but have you ever thought about the sandwich’s soul? Every ingredient has a spirit. The bread, the cheese, the meatballs… they all come together to create an energy that nourishes more than just our bodies. (I need to help them understand the spiritual essence of food.)

Rachel : Guys, you’re all missing the point. The best sandwich is obviously a grilled cheese! Simple, classic, and oh-so comforting. (I wonder if they’d like it if I made them my special grilled cheese.)

Ross : Rachel, a grilled cheese is comforting, I’ll give you that. But my turkey sandwich with the moist maker, it’s an experience! It’s like a Thanksgiving meal in every bite! (I need to make them understand why my sandwich is the best. Maybe I should bring one for everyone tomorrow.)

Chandler : Well, I hate to break it to you all, but the best sandwich is clearly the one that someone else makes for you. Because nothing tastes better than not having to do any of the work. (This debate is ridiculous but entertaining. I wonder how Monica would react if she knew I was arguing about sandwiches instead of working.)

Monica : Chandler, you’ve got a point. The best sandwich is the one someone else makes for you. But as a chef, I can tell you that the quality of ingredients and the way it’s prepared matters. So, my vote goes for a classic BLT. It has the perfect balance of flavors and textures. (I hope Chandler noticed that I agreed with him. Maybe this will make him see me in a different light.)

More Things to Know

  1. GPT-4 is much better than GPT-3.5 at this.
  2. GPT-4 is also twenty times as expensive as 3.5. You can very quickly rack up a hefty OpenAI bill doing this kind of thing. Be very careful.
  3. With some tinkering, it is possible to get this working with a local Llama 2 model if you have enough memory and a fast enough computer, but the results are nowhere near as good. Llama 2 is bad at following the formatting instructions, so you might have to change the hopes-and-dreams stuff, and if the output cannot be parsed, you just have to try again until it eventually succeeds.
  4. OpenAI is obviously careful about misinformation and users getting ChatGPT to pretend to be real people. If you try to get it to do something sketchy or to run a phishing scam, they will rightly ban you.
  5. If you’re recreating conversations with real people you know, then please be careful that you’re not stereotyping or down-talking them with this. It should go without saying that it would be very easy to write a description of someone that hurt them.
  6. I ended up creating more classes, including a Conversation class to hold the context. I also moved the conversation memory into the Character class so that individuals could enter and leave the conversation midway through without being omniscient. It’s also possible to introduce new topics of conversation part way through.
  7. Longer term, I’m quite keen on giving it a front end, letting the conversation play out in real-time and then letting humans butt into the chat with their own input. I’d quite like to make a Teams plugin to do it.
  8. Finally, immense thanks to my incredibly smart and lovely friend Joe who told me that he’d built something like this himself and then gave me some pointers. Love you to bits, Joe.
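The retry-until-it-parses approach from point 3 can be wrapped up in a small helper. This is a sketch with a stubbed `generate` callable standing in for the model call (the helper name and stub are my own, not from the project):

```python
import json

def ask_until_valid(generate, max_attempts=5):
    # Call `generate` until its output parses as JSON, up to max_attempts.
    for _ in range(max_attempts):
        raw = generate()
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: just ask again
    raise RuntimeError("model never produced valid JSON")

# Stub that fails twice before producing valid JSON, to exercise the loop.
attempts = iter(["not json", "{broken", '{"name": "Alex", "utterance": "Hi"}'])
result = ask_until_valid(lambda: next(attempts))
print(result["name"])  # Alex
```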

Who are you?

I’m Alex Waterston. I’m the Associate Director of Innovation for Waterstons, a consultancy in the UK and Australia.


