Mastering GPT-4 Code Prompts With Guardrails and Custom Instructions
Last Updated on January 10, 2024 by Editorial Team
Author(s): John Loewen, PhD
Originally published on Towards AI.
System prompts to minimize GPT-4 Python coding frustration
Dall-E 2 image: An impressionist painting of AI as a blue cube holding on to a guard rail
As a CompSci prof, I have been using GPT-4 daily for the past 8 months as part of my data visualization workflow.
Recently, I have noticed that GPT-4 is getting better at handling data visualization requests.
However, there are still some daily frustrations that I encounter in my GPT-4 prompting workflow, specifically related to two issues:
1. GPT-4 often loses its "train of thought" from the start to the end of a conversation, particularly as the responses become more complex.
2. GPT-4 "makes up" data (and data field names) if it cannot find the actual data or field names that it needs. It calls this "placeholder" data.
To mitigate these two issues (and my overall frustration), I have a tool and a method that I am now integrating into my prompting workflow.
The Tool: I am creating GPT-4 system prompts using the Custom Instructions tool.
The Method: I am setting up my system prompts using Guardrails.
Let me show you how it works.
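To make this concrete: an instruction set aimed at the two frustrations above might look something like this (illustrative wording, not necessarily the exact prompt I use):

```
- Never invent placeholder data or placeholder field names. If a dataset,
  column, or field name has not been provided, stop and ask me for it.
- Keep every requirement stated earlier in this conversation in mind, and
  restate the current plan before writing new code.
- Return complete, runnable Python code, not fragments.
```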
GPT-4 system prompts allow us to provide instructions to the LLM that it will remember for the entirety of a chat conversation.
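Because the system prompt is carried along with every turn, the model keeps seeing the guardrails throughout the conversation. If you drive GPT-4 through the OpenAI API rather than the ChatGPT interface, roughly the same role is played by the system message. Here is a minimal sketch, assuming the openai Python SDK (v1+) and illustrative guardrail wording:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Guardrail-style system prompt (illustrative wording, not the exact
# instructions from this article).
system_prompt = (
    "You are a Python data-visualization assistant. "
    "Never invent placeholder data or field names; if a column name is "
    "unknown, stop and ask for it. "
    "Keep every requirement stated earlier in this conversation in mind."
)

# The system message stays at the head of the message list, so the
# guardrails ride along with every request in the conversation.
messages = [{"role": "system", "content": system_prompt}]

def ask(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Plot monthly revenue from sales.csv as a bar chart."))
print(ask("Now add a 3-month moving average line."))
```

The guardrails apply to the follow-up request just as much as to the first one, which mirrors what Custom Instructions do inside ChatGPT.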
Why is this important? We absolutely…
Published via Towards AI