Sturdy GPT-4 Guardrails: Better Prompting For Python Code Results
Last Updated on January 14, 2025 by Editorial Team
Author(s): John Loewen, PhD
Originally published on Towards AI.
Minimizing frustration and inaccuracy in your data viz results
As a Computer Science professor, I have been using GPT-4 for well over a year now to assist with my data visualization workflow.
Recently, I have noticed that GPT-4 has improved considerably in how it handles data visualization requests.
However, there are still some daily frustrations that I encounter within my GPT-4 prompting workflow:
- GPT-4 often loses its "train of thought" from the start to the end of a conversation, particularly as the responses become more complex.
- GPT-4 "makes up" data (and data field names) if it cannot find the actual data or field names it needs, and it presents this invented data as if it were real.
To minimize these two issues (and my overall frustration), I have a tool and a method that I now follow every time for my prompting workflow.
The Tool: I am creating GPT-4 system prompts using the Custom Instructions tool.
The Method: I am setting up my system prompts using Guardrails.
Let me show you how it works.
GPT-4 system prompts allow us to provide instructions to the LLM that it will remember for the entirety of a chat conversation.
Why is this important? We absolutely want to avoid the situation where we are…