
7 Interesting Experiments with ChatGPT

Last Updated on December 13, 2022 by Editorial Team

Author(s): Building Blocks


Drawing Images, Learning Rules in an Alternate World, Program Translation, Extracting Insights from Reviews, Cross-Lingual Conversation, etc.

Introduction

Since its launch on the 30th of November 2022, ChatGPT has taken the world by storm. The internet has been exploding with posts across social media platforms showing the different ways ChatGPT can be used. In today’s article, we’ll walk through some interesting use cases for ChatGPT, along with some other fun experiments to test its capabilities.

Wordfinder/Thesaurus

Most of us have been in a situation where we are searching for the perfect word: one that encapsulates what we are trying to express in a succinct but precise manner. ChatGPT is a nice alternative to a thesaurus or a search engine and can give you an instant response.

With its ability to rewrite or create content in a particular style, ChatGPT can compete strongly with the likes of QuillBot and Grammarly.

Code Translation

Programmers often find themselves re-implementing the same functionality across different languages. This can be for several reasons, such as certain languages being easier to prototype in but harder to scale, and vice versa. Take, for example, machine learning operations: despite having high-level Python interfaces, under the hood the operations are run in languages like Cython or Rust.

Another use case could be where a person is familiar with some languages, but their workplace uses a different language for developing applications. In such a scenario, a programmer might need to spend plenty of time ramping up on the new language. Having a tool that translates code across languages can be invaluable, both for teaching a programmer a new language and for ensuring that they become productive and start contributing to a repository sooner.

In the image above, ChatGPT converts a function written in Scala into a Python function. The cool thing in this example is that ChatGPT notices we’re computing the dot product for sparse vectors rather than dense vectors, and mentions how we’d lose efficiency if we used the dot function available in NumPy.
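The exact snippets from that exchange aren’t reproduced in the text here, but to make the idea concrete, a sparse-vector dot product of the kind we asked ChatGPT to translate might look roughly like this in Python (our own reconstruction, not ChatGPT’s output):

# A rough reconstruction (not ChatGPT's actual output) of a sparse-vector dot product.
# Each vector is stored as an {index: value} dict so zero entries are never touched.
def sparse_dot(a: dict, b: dict) -> float:
    # Iterate over the smaller vector and look up matching indices in the other.
    if len(a) > len(b):
        a, b = b, a
    return sum(value * b[idx] for idx, value in a.items() if idx in b)

# Only the overlapping non-zero indices (1 and 4) contribute to the result.
print(sparse_dot({1: 2.0, 4: 3.0}, {1: 5.0, 2: 1.0, 4: 0.5}))  # 11.5

Running the dense equivalent with NumPy’s dot would force us to materialize all the zero entries, which is exactly the efficiency loss ChatGPT pointed out.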

Cleaning Up Code

Good programmers pride themselves on being able to write clean code. While the specifics of clean code can vary, the intention is to write code that is easy for a reader to understand and for future developers to make changes to. This often takes the form of writing clear documentation, mentioning data types, creating granular functions that serve only one purpose, reusing code wherever possible, etc.

However, code refactoring usually takes place after a programmer creates a working version of code that may not be clean. Having a single tool that cleans up code, generates comments, etc., can save programmers plenty of time and can replace the numerous linters and code analyzers that we use.

We send ChatGPT a couple of functions used in some of our earlier articles about training your own Expected Goals model. Note that we don’t explicitly state the purpose of these functions to ChatGPT.

Note: To make things easier to read, we extract the code portions and paste them here. An image of the prompt and its output is shown later on as well.

def get_shot_angle(shot_x: float, shot_y: float):
    deg2 = (360 + degrees(atan2(GOAL_POSTS[0][0] - shot_x, GOAL_POSTS[0][1] - shot_y))) % 360
    deg1 = (360 + degrees(atan2(GOAL_POSTS[1][0] - shot_x, GOAL_POSTS[1][1] - shot_y))) % 360
    angle = deg2 - deg1 if deg1 <= deg2 else 360 - (deg1 - deg2)
    print(f"{angle}")
    return angle

def get_pass_shot_zones(x, y):
    pitch = Pitch(line_color='black', pitch_type='statsbomb')
    ret = pitch.bin_statistic(x, y, None, 'count', bins=[10, 8])
    zones = [(ret["binnumber"][0][i], ret["binnumber"][1][i]) for i in range(len(ret["binnumber"][0]))]
    return zones

The code snippet above is what we sent to ChatGPT; its response is below. Right off the bat, some impressive things are:

  • It knows what libraries to import.
  • Despite some of the comments being trivial, they are quite accurate. We were quite surprised to see how ChatGPT inferred that the code relates to football/soccer, and especially that it understood we were calculating the angle of a shot between the two goalposts.
from math import atan2, degrees

# Define goal post coordinates
GOAL_POSTS = [(0, 0), (0, 0)]

def get_shot_angle(shot_x: float, shot_y: float):
    """
    Calculates the angle of a shot based on the coordinates of the shot and the location of the goal posts.
    """
    # Calculate the angle of the shot relative to each goal post
    deg1 = (360 + degrees(atan2(GOAL_POSTS[0][0] - shot_x, GOAL_POSTS[0][1] - shot_y))) % 360
    deg2 = (360 + degrees(atan2(GOAL_POSTS[1][0] - shot_x, GOAL_POSTS[1][1] - shot_y))) % 360

    # Calculate the difference in angles and return the result
    angle = deg2 - deg1 if deg1 <= deg2 else 360 - (deg1 - deg2)
    print(f"{angle}")
    return angle

def get_pass_shot_zones(x, y):
    """
    Calculates the pass/shot zones on a soccer pitch based on the given coordinates.
    """
    # Initialize the soccer pitch
    pitch = Pitch(line_color='black', pitch_type='statsbomb')

    # Calculate the pass/shot zones and return the result
    ret = pitch.bin_statistic(x, y, None, 'count', bins=[10, 8])
    zones = [(ret["binnumber"][0][i], ret["binnumber"][1][i]) for i in range(len(ret["binnumber"][0]))]
    return zones

In our next experiment, let’s see if ChatGPT can also make our code more concise. We provide ChatGPT with a block of code that computes the dot product of two arrays and then divides it by their length.

a = [1, 2, 3]
b = [5, 6, 7]
c = 0
for i in range(len(a)):
    c += a[i] * b[i]
c /= len(a)

ChatGPT responds with two options; the second is shorter and leverages the zip function. Another alternative could have been the dot function available in a library like NumPy. ChatGPT once again generates good comments that make it clear what the code is trying to accomplish.
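The screenshot of ChatGPT’s reply isn’t reproduced in the text here, but a zip-based version along the lines of its second suggestion would look something like the sketch below (our reconstruction, not ChatGPT’s exact output):

a = [1, 2, 3]
b = [5, 6, 7]

# Compute the dot product with zip, then divide by the length of the vectors.
c = sum(x * y for x, y in zip(a, b)) / len(a)
print(c)  # (5 + 12 + 21) / 3 ≈ 12.67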

Imagine having a programming assistant similar to this in an Intro to programming class. Students would be able to learn different ways of coding something up, leveraging different libraries and functions.

Cross-Lingual Conversations

In this test, we want to check if ChatGPT can understand questions in one language but respond in another. We ask ChatGPT questions in English and ask it to respond in Telugu. We assume there is very little, if any, training data that follows this format, making this a hard task.

On the whole, ChatGPT does seem to struggle with this task. Its explanation of what a transformer is didn’t make much sense, nor did its response to the question of what the capital of India is. Strangely, it responded with Navindrapuram, a place we haven’t even heard of.

Another striking thing is that ChatGPT tries to translate the question from English to Telugu before completing its generation. The first sentence of every Telugu response is an attempt at rephrasing the question in Telugu. This looks like an attempt to restructure the task into a single-language setting, which should, in theory, make it easier for the model to come up with a good generation.

Extracting Insights From Reviews

Today, almost all websites, services, and products ask for customer reviews in a bid to showcase the value they bring and to understand their shortcomings. While an organization would like to learn what improvements can be made, a customer who reads reviews would like to know the pros and cons of a given service/product and decide whether it would be a good purchase for them.

With ChatGPT, we can extract key insights from different points of view quickly and efficiently without having humans wade through scores of reviews.

In the example, we present ChatGPT with a review of a fictitious game. We ask ChatGPT to provide us with key insights from the point of view of:

  • A customer interested in buying the game
  • The creator of the game
  • A platform on which the game is run

ChatGPT’s responses, and how it rephrases them to match each interested party’s point of view, are impressive. The response to the final question is particularly interesting because ChatGPT mentions how Microsoft, the platform in our prompt, could reach out to the game’s developer to better address the glitches in the game!
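We don’t reproduce the exact prompt here, but scripting a similar experiment is mostly a matter of packing the review and the perspectives into a single instruction. A minimal sketch follows; the review text and the phrasing of the perspectives are placeholders, not the ones used in our experiment:

# Hypothetical prompt construction for the review-insight experiment.
# The review text and the perspective wording are placeholders.
review = (
    "Loved the open world and the soundtrack, but the game crashes every few "
    "hours and the online matchmaking takes forever."
)

perspectives = [
    "a customer deciding whether to buy the game",
    "the developer of the game",
    "the platform the game runs on",
]

prompt = f"Here is a review of a game:\n\n{review}\n\n"
prompt += "Summarize the key insights from the point of view of:\n"
prompt += "\n".join(f"- {p}" for p in perspectives)

print(prompt)  # Paste the result into ChatGPT and compare the three summaries.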

Alternate World

In one of the episodes of the Lex Fridman Podcast, Demis Hassabis talks about how one of the main challenges for an AI system is how it behaves when the rules are changed. Adding to this idea, ChatGPT is supposed to exhibit some capacity to learn when provided with feedback.

In an attempt to test ChatGPT on these ideas, we tell it that we are in an alternate world and explain, step by step, how that world differs from ours.

Word Reversal

Our first rule is that in the new world, all words are spelled in reverse. This can be a hard task for an AI model trained to predict the next word, because training data rarely contains documents spelled in reverse unless it is some sort of programming question. Let’s see if ChatGPT can understand this rule and act on it correctly in the alternate world.
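For reference, the transformation we’re asking for is trivial to express in code, even though it’s unnatural for a next-word predictor; a minimal Python sketch of the rule looks like this:

# The word-reversal rule of the alternate world, expressed in plain Python.
def reverse_words(sentence: str) -> str:
    # Reverse the letters of each word while keeping the word order intact.
    return " ".join(word[::-1] for word in sentence.split())

print(reverse_words("the fridge is empty"))  # eht egdirf si ytpme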

Some of the key insights to take away from our exchange with ChatGPT are:

  • ChatGPT understands that it needs to reverse each word.
  • When it is questioned about the reversed spelling of fridge, it correctly identifies that it made a mistake and can correct itself when asked again. This shows a capacity to learn from feedback.

Prepend Words that Represent Physical Objects with *

The next rule in the alternate world is that we prefix the * character to all words that refer to physical objects. The test here is to check whether ChatGPT knows which objects take a physical form.

We were initially surprised by ChatGPT’s response since it had an asterisk before the words an and the. On asking ChatGPT to explain its behavior, we learned that it applied the rule to abstract objects as well.

After better understanding ChatGPT’s reasoning, we gave it another instruction to apply the rule to concrete objects only. ChatGPT was now able to modify its behavior and apply the asterisk to the word apple but not to the and an. While it does not apply the asterisk to the word fridge in the second half of its response, its explanation shows that it knows the rule should apply to fridge as well. Guess AIs aren’t free from making typos, either.

The cool thing about this was that a human and an AI were able to communicate with each other, identify the points of confusion, and resolve them to achieve a goal!

Our hypothesis on why the model isn’t perfect on this kind of task is that we’re asking it to generate text that is far from what its training data is likely to look like. We’re quite literally asking it to generate new words that may have never been seen (by the model) before.

Complementary Colors

Next, we try to probe ChatGPT on the knowledge it has about road signs. We ask what a stop sign would read in the alternate world. It shows that it knows a stop sign has the word stop on it and remembers that in the alternate world, it would be reversed.

We introduce a new rule stating that in the alternate world, all colors are the complements of the colors in our world. This tests whether ChatGPT:

  • Knows the color of a Stop Sign in our world
  • Has a mapping of complementary colors

Despite being a bit verbose and hesitant, ChatGPT does eventually state that the stop sign would be colored green.
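For context, the kind of mapping we’re probing for could be written down as a tiny lookup table on the traditional color wheel; the table below is our own reference, not something ChatGPT produced:

# A rough complementary-color lookup on the traditional color wheel
# (our own reference table, not ChatGPT's output).
COMPLEMENTS = {
    "red": "green",
    "blue": "orange",
    "yellow": "purple",
}

# A stop sign is red in our world, so in the alternate world it would be green.
print(COMPLEMENTS["red"])  # green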

On the whole, this conversation tells us that ChatGPT can

  • Explain why it acts in a certain way
  • Condition its language generation on new rules
  • Incorporate feedback from the user to learn from its mistakes

Drawing Images

This was a long shot, but we were wondering whether ChatGPT can generate images using text characters. For example, websites like this one show images generated using only text characters. Unfortunately, ChatGPT isn’t equipped with this skill yet; perhaps it’s one of the things GPT-4 will conquer.

Conclusion

ChatGPT shows some remarkable capabilities. It is a step forward in the world of AI and its possible applications in the real world. However, it is important to recognize that, despite OpenAI’s best efforts, plenty of users have found vulnerabilities that expose ChatGPT’s capacity to produce discriminatory, biased, racist, and harmful content. While we fawn over the immense value such technology can bring, we should also be cautious and vigilant about the harm it can do to people if the proper safeguards aren’t in place.

If you have any cool experiments with ChatGPT that you’d like to share with us, please drop a comment below! Sayonara till the next time.




Published via Towards AI
