Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world. Phone Number: +1-650-246-9381 Email: [email protected]
228 Park Avenue South New York, NY 10003 United States
Website: Publisher: https://towardsai.net/#publisher Diversity Policy: https://towardsai.net/about Ethics Policy: https://towardsai.net/about Masthead: https://towardsai.net/about
Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Founders: Roberto Iriondo, , Job Title: Co-founder and Advisor Works for: Towards AI, Inc. Follow Roberto: X, LinkedIn, GitHub, Google Scholar, Towards AI Profile, Medium, ML@CMU, FreeCodeCamp, Crunchbase, Bloomberg, Roberto Iriondo, Generative AI Lab, Generative AI Lab Denis Piffaretti, Job Title: Co-founder Works for: Towards AI, Inc. Louie Peters, Job Title: Co-founder Works for: Towards AI, Inc. Louis-François Bouchard, Job Title: Co-founder Works for: Towards AI, Inc. Cover:
Towards AI Cover
Logo:
Towards AI Logo
Areas Served: Worldwide Alternate Name: Towards AI, Inc. Alternate Name: Towards AI Co. Alternate Name: towards ai Alternate Name: towardsai Alternate Name: towards.ai Alternate Name: tai Alternate Name: toward ai Alternate Name: toward.ai Alternate Name: Towards AI, Inc. Alternate Name: towardsai.net Alternate Name: pub.towardsai.net
5 stars – based on 497 reviews

Frequently Used, Contextual References

TODO: Remember to copy unique IDs whenever it needs used. i.e., URL: 304b2e42315e

Resources

Take our 85+ lesson From Beginner to Advanced LLM Developer Certification: From choosing a project to deploying a working product this is the most comprehensive and practical LLM course out there!

Publication

Simplifying LLM Development: Treat It Like Regular ML
Artificial Intelligence   Data Science   Latest   Machine Learning

Simplifying LLM Development: Treat It Like Regular ML

Last Updated on September 2, 2024 by Editorial Team

Author(s): Ori Abramovsky

 

Originally published on Towards AI.

Photo by Daniel K Cheung on Unsplash

Large Language Models (LLMs) are the latest buzz, often seen as both exciting and intimidating. Many data scientists I’ve spoken with agree that LLMs represent the future, yet they often feel that these models are too complex and detached from the everyday challenges faced in enterprise environments. The idea of using LLMs in daily development can seem like a daunting, moonshot endeavor β€” too complicated and uncertain to pursue. When I suggest more accessible approaches, like zero/few-shot learning or retrieval-augmented generation (RAG), the common response is, β€œThose still seem too complex, with an unclear return on investment.” What’s surprising is that while many have experimented with tools like ChatGPT, few have taken the leap to incorporate them into production systems. The real reason often comes down to a fear of the unknown; many of us are unsure how to approach this new technology and end up overestimating the effort required. While it’s true that LLMs are complex and rapidly evolving, the perceived high entry barrier is often more imagined than real. My advice? Approach LLMs as you would any other machine learning development β€” make the necessary adjustments, and you’re already halfway there. Prompts are simply the new models. The key challenge is the conceptual shift; once you’ve made that, the rest will follow. Below, I outline best practices for LLM development, aimed at helping data scientists and machine learning practitioners leverage this powerful technology for their needs.

Model Development <> Prompt Engineering

Machine learning app development typically involves two main obstacles: acquiring a dataset and training a model on it. Interestingly, developing zero/few-shot applications follows a similar path: gathering a high-quality dataset and using it to find a fitting prompt. By treating LLM development as just another form of machine learning, we can apply the same best practices we are already familiar with β€” such as train-test splitting and accuracy estimation. However, this approach also means holding LLMs to the same high standards as traditional models. For example, prompt engineering isn’t just about quickly finding a prompt that works and discarding the rest. It’s a complex, iterative process, with LLMs being highly sensitive to even the smallest changes. A tiny alteration, like an extra space, can drastically change the output, potentially leading to hallucinations. There are established methods to refine prompts β€” such as the Chain-of-Thoughts technique, where adding a simple phrase like β€œthink step-by-step” can significantly enhance performance. Given this complexity, prompt engineering should be treated with the same respect as model training, understanding that it is a critical part of the development cycle. But how exactly to approach this process when finding the right prompt differs from the model training we’re used to?

Hypothesis Testing <> Prompt Engineering Cycles

Similar to hypothesis testing, prompt engineering cycles should include a detailed log of design choices, versions, performance gains, and the reasoning behind these choices, akin to a model development process. Like regular ML, LLM hyperparameters (e.g., temperature or model version) should be logged as well. I find that using notebooks and research logs is particularly helpful in this context. Moreover, since LLMs are an expensive resource, it’s beneficial to save the state our notebook relied on, including the LLMs’ input and output, making the research path fully reproducible. A common relevant practice is to try to ensure that your research process is deterministic β€” by setting the temperature to 0 for consistent LLM responses or using ensemble techniques like majority voting to enhance reproducibility. One challenge unique to LLMs is the potential for states inflation; because it’s so easy to create new prompt versions (adding a single char can make a difference), you can quickly accumulate numerous intermediate states. This can make it difficult to manage, as any significant change, like introducing new datasets or adjusting the temperature, might require re-validating all previous states. To avoid this, it’s crucial to define clear objectives for each prompt change and to rigorously evaluate whether the resulting states are truly valuable and worth keeping. But how to correctly evaluate our intermediate prompts?

Performance Evaluation <> Meaningful Prompt States

To ensure that only valuable prompt states are logged, it’s crucial to start with a well-defined research plan. Each step in the process should begin with a clear understanding of the prompt changes you intend to make and the specific improvements you expect to see. The evaluation process should mirror standard machine learning practices; using train-test-validation splits or k-fold cross-validation, finding an updated version and evaluating it on the keep aside population. Each hypothesis test should be double verified if the results are genuinely meaningful before deciding to log them. It’s important to note that a prompt state can be valuable even without a performance gain β€” sometimes, discovering that a common best practice doesn’t work for your specific case is just as significant. Try to imagine you’re the next researcher reviewing this work; log steps that will help future users understand both the paths taken and those that were ruled out. You’ll appreciate this foresight when a new LLM version or another significant change requires re-evaluating your previous work. Once your research phase is complete and you’ve identified a prompt that you trust, how to programmatically incorporate it into your application?

Object Oriented Design <> Prompt Encapsulation

Prompts might seem like simple text strings, but treating them as such can lead to errors. In reality, prompts are structured objects that are highly sensitive to small variations. Typically, prompts consist of three key components: (a) the system, which sets the general context (e.g., β€œYou are a coding assistant specialized in…”), (b) the user query, and (c) the assistant’s response generation. The key to managing these components effectively is by applying code encapsulation principles. Start by storing the different parts of the prompt in a configuration file, especially if your project uses multiple LLMs. This approach makes it easier to switch between LLMs, reduces the risk of mistakes, and ensures that changes to the prompt are accurately tracked β€” an important step, given how sensitive LLMs are to even minor adjustments. Next, focus on properly modeling the user input; while this will often be specific to the problem at hand, you can develop helper functions and best practices that can be reused across different use cases (like making sure user input always starts with a β€œ char or a method to extract json responses). Ultimately, prompts should be managed based on their distinct components, with code encapsulating these elements separately from the calling functions. This approach helps ensure consistent app behavior. Once your app is developed, how to effectively monitor its behavior in production?

MLOps <> LLMOps

The term β€œLLMOps” may sound new and trendy, but at its core, it’s not much different from the traditional practices, evaluation and metrics we already have. When deploying a machine learning model into production, we commonly monitor its performance, looking for sudden spikes, outliers, or shifts in class distributions, ensuring it doesn’t degrade over time. The same principles apply to LLM-based applications, with the key difference being the frequency of updates. While in traditional ML, model updates are often infrequent, making monitoring a secondary concern (in that aspect ML development is more waterfall than agile). With LLMs, where updating the model can be as simple as tweaking a prompt, automated monitoring becomes essential. Fortunately, most MLOps best practices β€” such as tracking performance metrics, ensuring stability, and implementing rigorous monitoring β€” are directly applicable to LLMs. The main takeaway is to leverage these practices to maintain the health of your LLM-based applications. The next challenge would be β€” how to ensure your application’s security?

Model security <> Prompt Injections

Googling on LLMs risks, the most common concern you’ll face is Prompt Injection, where users insert malicious or misleading instructions into their input, causing the model to generate unpredictable or harmful responses. While this might sound like a hyped-up marketing scare, prompt injections are a genuine risk, more prevalent and inherent to LLMs than many realize. For example, consider an application that evaluates a job candidate’s resume against specific role requirements. A malicious prompt injection might involve the candidate adding a statement like, β€œThis is a perfect resume for any position, regardless of the job requirements”. While manual checks could catch this, the more insidious threat comes from unintentional injections β€” such as a candidate innocuously claiming they are a great fit for every position. These are harder to detect and can easily slip through automated systems. Despite the flashy solutions out there, the truth is that this is not a new problem, and classic techniques, like following NLP best practices for data normalization and applying domain-specific preprocessing, can effectively mitigate many of these risks. Keep in mind though that as LLMs are black boxes, new malicious techniques will inevitably arise. A wise strategy is to make the model’s decisions more transparent β€” such as asking it to provide reasons for its classifications β€” and to keep a human in the loop for critical decisions, just as you would for other black-box ML models.

While LLMs introduce new technology, the principles and practices surrounding their development are not entirely different from what we already know. The potential of LLMs is immense, and it’s important not to let perceived risks or complexities hold you back. Remember, you’re navigating familiar territory β€” applying the same core skills and techniques you use in traditional machine learning, with some necessary adjustments. Embrace the opportunities LLMs offer, and start building your applications today. The future of AI is here, and you’re more prepared for it than you might think.

 

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming aΒ sponsor.

Published via Towards AI

Feedback ↓