DSPy: Machine Learning Attitude Towards LLM Prompting
Author(s): Serj Smorodinsky
Originally published on Towards AI.
Transitioning from prompt string manipulation to a PyTorch-like framework
Link to the official tutorial
Full code at your one-stop LLM classification project
Here's a link to a short YouTube video with the code rundown
My goal is to showcase complex technologies through non-trivial use cases. This time I have chosen to focus on the DSPy framework. Its raison d'être (reason for being) is to abstract, encapsulate, and optimize the logic needed to task LLMs and handle their outputs.
DSPy lets coders specify the inputs and outputs of an LLM task and leaves composing the best possible prompt to the framework.
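To make that concrete, here is a minimal sketch of the idea: a DSPy signature declares the task by its input and output fields, and dspy.Predict turns it into a callable module. The model name, field names, and example message are my own illustrative choices, not taken from the article or the official tutorial.

import dspy

# Point DSPy at a language model; any supported LM client works here.
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

class CustomerIntent(dspy.Signature):
    """Classify the intent of a customer service message."""
    message = dspy.InputField(desc="raw text sent by the customer")
    intent = dspy.OutputField(desc="a single intent label, e.g. 'refund' or 'shipping'")

# The framework, not the programmer, composes the actual prompt.
classify = dspy.Predict(CustomerIntent)
result = classify(message="My package never arrived, I want my money back.")
print(result.intent)

Note that nowhere in this sketch is a prompt string written by hand; that is the shift from prompt manipulation to programming that the article is about.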
Why should you care?
You can brag about it during lunch
Improve code readability
Improve LLM task outputs
This is the first part of a series in which we will focus on implementing an LLM-based classifier. In the next instalment we will go deeper into actual optimization.
What is DSPy?
Why DSPy?
Use case: LLM intent classifier for customer service
DSPy is a framework created by Stanford researchers. I love the way the official docs explain it, so I'm quoting them here:
DSPy emphasises programming over prompting. It unifies techniques for prompting and fine-tuning LMs as well as improving them with reasoning and tool/retrieval augmentation, all expressed through a…
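As a small illustration of the "reasoning" part of that quote, swapping the module is enough to change how DSPy builds the prompt: dspy.ChainOfThought asks the LM to reason before producing the answer, while the task definition stays the same. The inline string signature and example message below are my own assumptions, and the sketch reuses the LM configuration from the earlier snippet.

import dspy

# Shorthand string signature: input field "message", output field "intent".
classify = dspy.ChainOfThought("message -> intent")
result = classify(message="I was charged twice for the same order.")
print(result.intent)  # the module generates intermediate reasoning before this answer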
Published via Towards AI