Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world. Phone Number: +1-650-246-9381 Email: pub@towardsai.net
228 Park Avenue South New York, NY 10003 United States
Website: Publisher: https://towardsai.net/#publisher Diversity Policy: https://towardsai.net/about Ethics Policy: https://towardsai.net/about Masthead: https://towardsai.net/about
Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Founders: Roberto Iriondo, , Job Title: Co-founder and Advisor Works for: Towards AI, Inc. Follow Roberto: X, LinkedIn, GitHub, Google Scholar, Towards AI Profile, Medium, ML@CMU, FreeCodeCamp, Crunchbase, Bloomberg, Roberto Iriondo, Generative AI Lab, Generative AI Lab Denis Piffaretti, Job Title: Co-founder Works for: Towards AI, Inc. Louie Peters, Job Title: Co-founder Works for: Towards AI, Inc. Louis-François Bouchard, Job Title: Co-founder Works for: Towards AI, Inc. Cover:
Towards AI Cover
Logo:
Towards AI Logo
Areas Served: Worldwide Alternate Name: Towards AI, Inc. Alternate Name: Towards AI Co. Alternate Name: towards ai Alternate Name: towardsai Alternate Name: towards.ai Alternate Name: tai Alternate Name: toward ai Alternate Name: toward.ai Alternate Name: Towards AI, Inc. Alternate Name: towardsai.net Alternate Name: pub.towardsai.net
5 stars – based on 497 reviews

Frequently Used, Contextual References

TODO: Remember to copy unique IDs whenever it needs used. i.e., URL: 304b2e42315e

Resources

Take our 85+ lesson From Beginner to Advanced LLM Developer Certification: From choosing a project to deploying a working product this is the most comprehensive and practical LLM course out there!

Publication

Learn Prompting 101: Prompt Engineering Course & Challenges
Latest   Machine Learning   Tutorials

Learn Prompting 101: Prompt Engineering Course & Challenges

Last Updated on November 12, 2023 by Editorial Team

Introduction

The capabilities and accessibility of large language models (LLMs) are advancing rapidly, leading to widespread adoption and increasing human-AI interaction. Reuters reported research recently estimating that OpenAI’s ChatGPT already reached 100 million monthly users in January, just two months after its launch! This raises the importance of the question; how do we talk to models such as ChatGPT and how do we get the most out of them? This is prompt engineering. While we expect the meaning and methods to evolve, we think it could become a key skill and might even become a common standalone job title as AI, Machine Learning, and LLMs become increasingly integrated into everyday tasks.

This article will explore what prompt engineering is, its importance & challenges, and provide an in-depth review of the Learn Prompting course designed for the practical application of prompting for learners of all levels (minimal knowledge of machine learning is expected!).

Learn Prompting is an open-source, interactive course led by @SanderSchulhoff and contributed to by Towards AI and tons of generous contributors. Towards AI is also teaming up with Learn Prompting to launch the HackAPrompt Competition, the first-ever prompt hacking competition. Participants do not need a technical background and will be challenged to hack several progressively more secure prompts. Stay tuned in our Learn AI Discord community or the Learn Prompting’s Discord community for full details and information about prizes and dates!

What is Prompting?

Generative AI models primarily interact with the user through textual input. Users can instruct the model on the task by providing a textual description. What users ask the model to do in a broad sense is a “prompt”. “Prompting” is how humans can talk to artificial intelligence (AI). It is a way to tell an AI agent what we want and how we want it using adapted human language. A prompt engineer will translate your idea from your regular conversational language into clearer and optimized instructions for the AI.

The output generated by AI models varies significantly based on the engineered prompt. The purpose of prompt engineering is to design prompts that elicit the most relevant and desired response from a Large Language Model (LLM). It involves understanding the capabilities of the model and crafting prompts that will effectively utilize them.

For example, in the case of image generation models, such as Stable Diffusion, the prompt is mainly a description of the image you want to generate. And the precision of that prompt will directly impact the quality of the generated image. The better the prompt, the better the output.

Why is Prompting Important?

Prompting serves as the bridge between humans and AI, allowing us to communicate and generate results that align with specific needs. To fully utilize the capabilities of generative AI, it’s essential to know what to ask and how to ask it. Here is why prompting is important:

  • By providing a specific prompt, it’s possible to guide the model to generate output that is most relevant and coherent in context.
  • Prompting allows users to interpret the generated text in a more meaningful way.
  • Prompting is a powerful technique in generative AI that can improve the quality and diversity of the generated text.
  • Prompting increases control and interpretability, and reduces potential biases.
  • Different models will respond differently to the same prompting, and understanding the specific model can generate precise results with the right prompting.
  • Generative models may hallucinate knowledge that is not factual or incorrect. Prompting can guide the model in the right direction by prompting it to cite correct sources.
  • Prompting allows for experimentation with diverse types of data and different ways of presenting that data to the language model.
  • Prompting enables the determination of what good and bad outcomes should look like by incorporating the goal into the prompt.
  • Prompting improves the safety of the model and helps defend against prompt hacking (users sending prompts to produce undesired behaviors from the model).

In the following example, you can observe how prompts impact the output and how generative models respond to different prompts. Here, the DALLE model was instructed to create a low-poly style astronaut, rocket, and computer. This was the first prompt for each image:

  1. Low poly white and blue rocket shooting to the moon in front of a sparse green meadow
  2. Low poly white and blue computer sitting in a sparse green meadow
  3. Low poly white and blue astronaut sitting in a sparse green meadow with low poly mountains in the background

This image was generated by the prompt above:

Although the results are decent, the style just wasn’t consistent. Although, after optimizing the prompts to:

  1. A low poly world, with a white and blue rocket blasting off from a sparse green meadow with low poly mountains in the background. Highly detailed, isometric, 4K.
  2. A low poly world, with a glowing blue gemstone magically floating in the middle of the screen above a sparse green meadow with low poly mountains in the background. Highly detailed, isometric, 4K.
  3. A low poly world, with an astronaut in a white suit and blue visor, is sitting in a sparse green meadow with low poly mountains in the background. Highly detailed, isometric, 4K.

This image was genearted by the prompt above:

These images are more consistent in style, and the main takeaway is that prompting is very iterative and requires a lot of research. Modifying expectations and ideas is important as you continue to experiment with different prompts and models.

Here is another example (on a text model, specifically it was with ChatGPT) of how prompting can optimize the results and help you generate accurate results.

Challenges and Safety Concerns with Prompting

While prompting enables the efficient utilization of generative AI, its correct usage for optimal output faces various challenges and brings several security challenges to the fore.

Prompting for Large Language Models can present several challenges, such as:

  • Achieving the desired results on the first try.
  • Finding an appropriate starting point for a prompt.
  • Ensuring output has minimal biases.
  • Controlling the level of creativity or novelty of the result.
  • Understanding and evaluating the reasoning behind the generated responses.
  • Wrong interpretation of the intended meaning of the prompt.
  • Lack of the right balance between providing enough information in the prompt to guide the model and allowing room for novel or creative responses.

The rise of prompting has led to the discovery of security vulnerabilities, such as:

  • Prompt injection, where an attacker can manipulate the prompt to generate malicious or harmful output.
  • Leak sensitive information through the generated output.
  • Jailbreaking the model, where an attacker could gain unauthorized access to the model’s internal states and parameters.
  • Generate fake or misleading information.
  • The model’s ability to perpetuate societal biases if not trained on diverse and minimally biased data.
  • Generate realistic and convincing text that can be used for malicious or deceitful purposes.
  • The model may generate responses that violate laws or regulations.

Learn Prompting: Course Details

As technology advances, the ability to communicate effectively with artificial intelligence (AI) systems has become increasingly important. It is possible to automate a wide range of tasks that currently consume large amounts of time and effort with AI. AI can either complete or provide a solid starting point for all tasks, from writing emails and reports to coding. This resource is designed to provide non-technical learners and advanced engineers with the practicable skills necessary to effectively communicate with generative AI systems.

About the Course

Learn Prompting is an open-source, interactive course with applied prompt engineering techniques and concepts. It is designed for both beginners and experienced professionals looking to expand their skill sets and adapt to emerging AI technologies. The course is frequently updated to include new techniques, ensuring that learners stay current with the latest developments in the field.

Besides real-world applications and examples, the course provides interactive demos to aid hands-on learning. One of the unique features of Learn Prompting is its non-linear structure, which allows learners to dive into the topics that interest them most. These articles are rated by difficulty and labeled for ease of learning, making it easy to find the right level of content. The gradual progression of the material also makes it accessible to those with little to no technical background, enabling them to understand even advanced prompt engineering concepts.

Learn Prompting is the perfect course for anyone looking to gain practical, immediately applicable techniques for their own projects.

Course Highlights

The Learn Prompting course offers a unique learning experience focusing on practical techniques that learners can apply immediately. The course includes:

  • In-depth articles on basic concepts and applied prompt engineering (PE)
  • Specialized learning chapters for advanced PE techniques
  • An overview of applied prompting using generative AI models
  • An inclusive, open-source course for non-technical and advanced learners
  • A self-paced learning model with interactive applied PE demos
  • A non-linear learning model designed to make learning relevant, concise, and enjoyable
  • Articles rated by difficulty level for ease of learning
  • Real-world examples and additional resources for continuous learning

Learn Prompting: Chapter Summary

Here is a quick summary of each chapter:

1.Basics
It is an introductory lesson for learners unfamiliar with machine learning (ML). It covers basic concepts like artificial intelligence (AI), prompting, key terminologies, instructing AI, and types of prompts.

2. Intermediate
This chapter focuses on the various methods of prompting. It goes into more detail about prompts with different formats and levels of complexity, such as Chain of Thought, Zero-Shot Chain of Thought prompting, and the generated knowledge approach.

3. Applied Prompting
This chapter covers the end-to-end prompt engineering process with interactive demos, practical examples using tools like ChatGPT, and solving discussion questions with generative AI. This chapter allows learners to experiment with these tools, test different prompting approaches, compare generated results, and identify patterns.

4. Advanced Applications
This lesson covers some advanced applications of prompting that can tackle complex reasoning tasks by searching for information on the internet or other external sources.

5. Reliability
This chapter covers techniques for making completions more reliable and implementing checks to ensure that outputs are accurate. It explains simple methods for debiasing prompts, such as using various prompts, self-evaluation of language models, and calibration of language models.

6. Image Prompting
This guide explores the basics of image prompting techniques and provides additional external resources for further learning. It delves into fundamental concepts of image prompting, such as style modifiers, quality boosters, and prompting methods like repetition.

7. Prompt Hacking
This chapter covers concepts like prompt injection and prompt leaking and examines potential measures to prevent such leaks. It highlights the importance of understanding these concepts to ensure the security and privacy of the data generated by language models.

8. Prompting IDEs
This chapter provides a comprehensive list of various prompt engineering tools, such as GPT-3 Playground, Dyno, Dream Studio, etc. It delves deeper into the features and functions of each tool, giving learners an understanding of the capabilities and limitations of each.

9. Resources
The course offers comprehensive educational resources for further learning, including links to articles, blogs, practical examples, and tasks of prompt engineering, relevant experts to follow, and a platform to contribute to the course and ask questions.

How to Navigate

The Learn Prompting course offers a non-linear learning model to make learning practical, relevant, and fun. You can read the chapters in any order and delve into the topics that interest you the most.

If you are a complete novice, start with the Basics section. You can start with the Intermediate section if you have an ML background.

Articles are rated by difficulty level and are labeled:

? Very easy: no programming required

? Easy: simple programming required, but no domain expertise needed

? Medium: programming required, and some domain expertise is needed to implement the techniques discussed (like computing log probability)

? Hard: programming required, and robust domain expertise is needed to implement the techniques discussed (like reinforcement learning approaches)

Please note: Even though domain expertise is helpful for medium and hard articles, you will still be able to understand it.

The future of the course

We plan on keeping this course up to date with Sander and all collaborators (maybe you?). More specifically, we want to keep adding relevant sections for new prompting techniques but also for new models, such as ChatGPT, DALLE, etc., sharing the best tips and practices for using those powerful new models. Still, all of this would not be possible without YOUR help…

Contributing to the course

The idea of learning thrives on growing with each other as a community. We welcome contributions and encourage individuals to share their knowledge through this platform.

Contribute to the course here.

Along with contributions, your feedback is also vital to the success of this course. If you have questions or suggestions, please reach out to Learn Prompting. You can create an issue here, email, or connect with us on our Discord community server.

Find the complete course here.

HackAPrompt Competition: A step towards better prompt safety

Recent advancements in large language models (LLMs) have enabled easy interaction with AI through prompts. However, this has also led to the emergence of security vulnerabilities, such as prompt hacking, prompt injection, leaking, and jailbreaking.

Human creativity generally outsmarted efforts to mitigate prompt hacking. To address this issue, we are helping Learn Prompting to organize the first-ever prompt hacking competition. Participants will be challenged to hack several progressively secure prompts. Prompt engineering is a non-technical pursuit, meaning that diversified people can practice it, from English teachers to AI scientists. This competition aims to motivate users to try hacking a set of prompts in order to gather a comprehensive, open-source dataset for safety research. We expect to collect diverse sets of creative human attacks that can be used in AI safety research.

Stay tuned in our Learn AI Discord community or the Learn Prompting’s Discord community for full details and information about prizes and dates!

Conclusion

Prompt engineering is becoming an increasingly important skill for individuals across all fields. The Learn Prompting course emphasizes practicality and immediate application. One of the exciting aspects of this course is that we are learning about it collectively, and we can only fully understand the techniques through exploration.

As we continue to experiment with generative AI, we must prioritize safety to ensure that the benefits of AI are accessible to everyone. The HackAPrompt competition is one step towards improving prompt safety, as it aims to create a large, open-source dataset of adversarial inputs for AI safety research.

To stay informed about the latest developments in AI, consider subscribing to the Towards AI newsletter and joining our Learn AI Discord community or Learn Prompting’s Discord community.


Learn Prompting 101: Prompt Engineering Course & Challenges was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Feedback ↓

Sign Up for the Course
`; } else { console.error('Element with id="subscribe" not found within the page with class "home".'); } } }); // Remove duplicate text from articles /* Backup: 09/11/24 function removeDuplicateText() { const elements = document.querySelectorAll('h1, h2, h3, h4, h5, strong'); // Select the desired elements const seenTexts = new Set(); // A set to keep track of seen texts const tagCounters = {}; // Object to track instances of each tag elements.forEach(el => { const tagName = el.tagName.toLowerCase(); // Get the tag name (e.g., 'h1', 'h2', etc.) // Initialize a counter for each tag if not already done if (!tagCounters[tagName]) { tagCounters[tagName] = 0; } // Only process the first 10 elements of each tag type if (tagCounters[tagName] >= 2) { return; // Skip if the number of elements exceeds 10 } const text = el.textContent.trim(); // Get the text content const words = text.split(/\s+/); // Split the text into words if (words.length >= 4) { // Ensure at least 4 words const significantPart = words.slice(0, 5).join(' '); // Get first 5 words for matching // Check if the text (not the tag) has been seen before if (seenTexts.has(significantPart)) { // console.log('Duplicate found, removing:', el); // Log duplicate el.remove(); // Remove duplicate element } else { seenTexts.add(significantPart); // Add the text to the set } } tagCounters[tagName]++; // Increment the counter for this tag }); } removeDuplicateText(); */ // Remove duplicate text from articles function removeDuplicateText() { const elements = document.querySelectorAll('h1, h2, h3, h4, h5, strong'); // Select the desired elements const seenTexts = new Set(); // A set to keep track of seen texts const tagCounters = {}; // Object to track instances of each tag // List of classes to be excluded const excludedClasses = ['medium-author', 'post-widget-title']; elements.forEach(el => { // Skip elements with any of the excluded classes if (excludedClasses.some(cls => el.classList.contains(cls))) { return; // Skip this element if it has any of the excluded classes } const tagName = el.tagName.toLowerCase(); // Get the tag name (e.g., 'h1', 'h2', etc.) // Initialize a counter for each tag if not already done if (!tagCounters[tagName]) { tagCounters[tagName] = 0; } // Only process the first 10 elements of each tag type if (tagCounters[tagName] >= 10) { return; // Skip if the number of elements exceeds 10 } const text = el.textContent.trim(); // Get the text content const words = text.split(/\s+/); // Split the text into words if (words.length >= 4) { // Ensure at least 4 words const significantPart = words.slice(0, 5).join(' '); // Get first 5 words for matching // Check if the text (not the tag) has been seen before if (seenTexts.has(significantPart)) { // console.log('Duplicate found, removing:', el); // Log duplicate el.remove(); // Remove duplicate element } else { seenTexts.add(significantPart); // Add the text to the set } } tagCounters[tagName]++; // Increment the counter for this tag }); } removeDuplicateText(); //Remove unnecessary text in blog excerpts document.querySelectorAll('.blog p').forEach(function(paragraph) { // Replace the unwanted text pattern for each paragraph paragraph.innerHTML = paragraph.innerHTML .replace(/Author\(s\): [\w\s]+ Originally published on Towards AI\.?/g, '') // Removes 'Author(s): XYZ Originally published on Towards AI' .replace(/This member-only story is on us\. Upgrade to access all of Medium\./g, ''); // Removes 'This member-only story...' }); //Load ionic icons and cache them if ('localStorage' in window && window['localStorage'] !== null) { const cssLink = 'https://code.ionicframework.com/ionicons/2.0.1/css/ionicons.min.css'; const storedCss = localStorage.getItem('ionicons'); if (storedCss) { loadCSS(storedCss); } else { fetch(cssLink).then(response => response.text()).then(css => { localStorage.setItem('ionicons', css); loadCSS(css); }); } } function loadCSS(css) { const style = document.createElement('style'); style.innerHTML = css; document.head.appendChild(style); } //Remove elements from imported content automatically function removeStrongFromHeadings() { const elements = document.querySelectorAll('h1, h2, h3, h4, h5, h6, span'); elements.forEach(el => { const strongTags = el.querySelectorAll('strong'); strongTags.forEach(strongTag => { while (strongTag.firstChild) { strongTag.parentNode.insertBefore(strongTag.firstChild, strongTag); } strongTag.remove(); }); }); } removeStrongFromHeadings(); "use strict"; window.onload = () => { /* //This is an object for each category of subjects and in that there are kewords and link to the keywods let keywordsAndLinks = { //you can add more categories and define their keywords and add a link ds: { keywords: [ //you can add more keywords here they are detected and replaced with achor tag automatically 'data science', 'Data science', 'Data Science', 'data Science', 'DATA SCIENCE', ], //we will replace the linktext with the keyword later on in the code //you can easily change links for each category here //(include class="ml-link" and linktext) link: 'linktext', }, ml: { keywords: [ //Add more keywords 'machine learning', 'Machine learning', 'Machine Learning', 'machine Learning', 'MACHINE LEARNING', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, ai: { keywords: [ 'artificial intelligence', 'Artificial intelligence', 'Artificial Intelligence', 'artificial Intelligence', 'ARTIFICIAL INTELLIGENCE', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, nl: { keywords: [ 'NLP', 'nlp', 'natural language processing', 'Natural Language Processing', 'NATURAL LANGUAGE PROCESSING', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, des: { keywords: [ 'data engineering services', 'Data Engineering Services', 'DATA ENGINEERING SERVICES', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, td: { keywords: [ 'training data', 'Training Data', 'training Data', 'TRAINING DATA', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, ias: { keywords: [ 'image annotation services', 'Image annotation services', 'image Annotation services', 'image annotation Services', 'Image Annotation Services', 'IMAGE ANNOTATION SERVICES', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, l: { keywords: [ 'labeling', 'labelling', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, pbp: { keywords: [ 'previous blog posts', 'previous blog post', 'latest', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, mlc: { keywords: [ 'machine learning course', 'machine learning class', ], //Change your article link (include class="ml-link" and linktext) link: 'linktext', }, }; //Articles to skip let articleIdsToSkip = ['post-2651', 'post-3414', 'post-3540']; //keyword with its related achortag is recieved here along with article id function searchAndReplace(keyword, anchorTag, articleId) { //selects the h3 h4 and p tags that are inside of the article let content = document.querySelector(`#${articleId} .entry-content`); //replaces the "linktext" in achor tag with the keyword that will be searched and replaced let newLink = anchorTag.replace('linktext', keyword); //regular expression to search keyword var re = new RegExp('(' + keyword + ')', 'g'); //this replaces the keywords in h3 h4 and p tags content with achor tag content.innerHTML = content.innerHTML.replace(re, newLink); } function articleFilter(keyword, anchorTag) { //gets all the articles var articles = document.querySelectorAll('article'); //if its zero or less then there are no articles if (articles.length > 0) { for (let x = 0; x < articles.length; x++) { //articles to skip is an array in which there are ids of articles which should not get effected //if the current article's id is also in that array then do not call search and replace with its data if (!articleIdsToSkip.includes(articles[x].id)) { //search and replace is called on articles which should get effected searchAndReplace(keyword, anchorTag, articles[x].id, key); } else { console.log( `Cannot replace the keywords in article with id ${articles[x].id}` ); } } } else { console.log('No articles found.'); } } let key; //not part of script, added for (key in keywordsAndLinks) { //key is the object in keywords and links object i.e ds, ml, ai for (let i = 0; i < keywordsAndLinks[key].keywords.length; i++) { //keywordsAndLinks[key].keywords is the array of keywords for key (ds, ml, ai) //keywordsAndLinks[key].keywords[i] is the keyword and keywordsAndLinks[key].link is the link //keyword and link is sent to searchreplace where it is then replaced using regular expression and replace function articleFilter( keywordsAndLinks[key].keywords[i], keywordsAndLinks[key].link ); } } function cleanLinks() { // (making smal functions is for DRY) this function gets the links and only keeps the first 2 and from the rest removes the anchor tag and replaces it with its text function removeLinks(links) { if (links.length > 1) { for (let i = 2; i < links.length; i++) { links[i].outerHTML = links[i].textContent; } } } //arrays which will contain all the achor tags found with the class (ds-link, ml-link, ailink) in each article inserted using search and replace let dslinks; let mllinks; let ailinks; let nllinks; let deslinks; let tdlinks; let iaslinks; let llinks; let pbplinks; let mlclinks; const content = document.querySelectorAll('article'); //all articles content.forEach((c) => { //to skip the articles with specific ids if (!articleIdsToSkip.includes(c.id)) { //getting all the anchor tags in each article one by one dslinks = document.querySelectorAll(`#${c.id} .entry-content a.ds-link`); mllinks = document.querySelectorAll(`#${c.id} .entry-content a.ml-link`); ailinks = document.querySelectorAll(`#${c.id} .entry-content a.ai-link`); nllinks = document.querySelectorAll(`#${c.id} .entry-content a.ntrl-link`); deslinks = document.querySelectorAll(`#${c.id} .entry-content a.des-link`); tdlinks = document.querySelectorAll(`#${c.id} .entry-content a.td-link`); iaslinks = document.querySelectorAll(`#${c.id} .entry-content a.ias-link`); mlclinks = document.querySelectorAll(`#${c.id} .entry-content a.mlc-link`); llinks = document.querySelectorAll(`#${c.id} .entry-content a.l-link`); pbplinks = document.querySelectorAll(`#${c.id} .entry-content a.pbp-link`); //sending the anchor tags list of each article one by one to remove extra anchor tags removeLinks(dslinks); removeLinks(mllinks); removeLinks(ailinks); removeLinks(nllinks); removeLinks(deslinks); removeLinks(tdlinks); removeLinks(iaslinks); removeLinks(mlclinks); removeLinks(llinks); removeLinks(pbplinks); } }); } //To remove extra achor tags of each category (ds, ml, ai) and only have 2 of each category per article cleanLinks(); */ //Recommended Articles var ctaLinks = [ /* ' ' + '

Subscribe to our AI newsletter!

' + */ '

Take our 85+ lesson From Beginner to Advanced LLM Developer Certification: From choosing a project to deploying a working product this is the most comprehensive and practical LLM course out there!

'+ '

Towards AI has published Building LLMs for Production—our 470+ page guide to mastering LLMs with practical projects and expert insights!

' + '
' + '' + '' + '

Note: Content contains the views of the contributing authors and not Towards AI.
Disclosure: This website may contain sponsored content and affiliate links.

' + 'Discover Your Dream AI Career at Towards AI Jobs' + '

Towards AI has built a jobs board tailored specifically to Machine Learning and Data Science Jobs and Skills. Our software searches for live AI jobs each hour, labels and categorises them and makes them easily searchable. Explore over 10,000 live jobs today with Towards AI Jobs!

' + '
' + '

🔥 Recommended Articles 🔥

' + 'Why Become an LLM Developer? Launching Towards AI’s New One-Stop Conversion Course'+ 'Testing Launchpad.sh: A Container-based GPU Cloud for Inference and Fine-tuning'+ 'The Top 13 AI-Powered CRM Platforms
' + 'Top 11 AI Call Center Software for 2024
' + 'Learn Prompting 101—Prompt Engineering Course
' + 'Explore Leading Cloud Providers for GPU-Powered LLM Training
' + 'Best AI Communities for Artificial Intelligence Enthusiasts
' + 'Best Workstations for Deep Learning
' + 'Best Laptops for Deep Learning
' + 'Best Machine Learning Books
' + 'Machine Learning Algorithms
' + 'Neural Networks Tutorial
' + 'Best Public Datasets for Machine Learning
' + 'Neural Network Types
' + 'NLP Tutorial
' + 'Best Data Science Books
' + 'Monte Carlo Simulation Tutorial
' + 'Recommender System Tutorial
' + 'Linear Algebra for Deep Learning Tutorial
' + 'Google Colab Introduction
' + 'Decision Trees in Machine Learning
' + 'Principal Component Analysis (PCA) Tutorial
' + 'Linear Regression from Zero to Hero
'+ '

', /* + '

Join thousands of data leaders on the AI newsletter. It’s free, we don’t spam, and we never share your email address. Keep up to date with the latest work in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

',*/ ]; var replaceText = { '': '', '': '', '
': '
' + ctaLinks + '
', }; Object.keys(replaceText).forEach((txtorig) => { //txtorig is the key in replacetext object const txtnew = replaceText[txtorig]; //txtnew is the value of the key in replacetext object let entryFooter = document.querySelector('article .entry-footer'); if (document.querySelectorAll('.single-post').length > 0) { //console.log('Article found.'); const text = entryFooter.innerHTML; entryFooter.innerHTML = text.replace(txtorig, txtnew); } else { // console.log('Article not found.'); //removing comment 09/04/24 } }); var css = document.createElement('style'); css.type = 'text/css'; css.innerHTML = '.post-tags { display:none !important } .article-cta a { font-size: 18px; }'; document.body.appendChild(css); //Extra //This function adds some accessibility needs to the site. function addAlly() { // In this function JQuery is replaced with vanilla javascript functions const imgCont = document.querySelector('.uw-imgcont'); imgCont.setAttribute('aria-label', 'AI news, latest developments'); imgCont.title = 'AI news, latest developments'; imgCont.rel = 'noopener'; document.querySelector('.page-mobile-menu-logo a').title = 'Towards AI Home'; document.querySelector('a.social-link').rel = 'noopener'; document.querySelector('a.uw-text').rel = 'noopener'; document.querySelector('a.uw-w-branding').rel = 'noopener'; document.querySelector('.blog h2.heading').innerHTML = 'Publication'; const popupSearch = document.querySelector$('a.btn-open-popup-search'); popupSearch.setAttribute('role', 'button'); popupSearch.title = 'Search'; const searchClose = document.querySelector('a.popup-search-close'); searchClose.setAttribute('role', 'button'); searchClose.title = 'Close search page'; // document // .querySelector('a.btn-open-popup-search') // .setAttribute( // 'href', // 'https://medium.com/towards-artificial-intelligence/search' // ); } // Add external attributes to 302 sticky and editorial links function extLink() { // Sticky 302 links, this fuction opens the link we send to Medium on a new tab and adds a "noopener" rel to them var stickyLinks = document.querySelectorAll('.grid-item.sticky a'); for (var i = 0; i < stickyLinks.length; i++) { /* stickyLinks[i].setAttribute('target', '_blank'); stickyLinks[i].setAttribute('rel', 'noopener'); */ } // Editorial 302 links, same here var editLinks = document.querySelectorAll( '.grid-item.category-editorial a' ); for (var i = 0; i < editLinks.length; i++) { editLinks[i].setAttribute('target', '_blank'); editLinks[i].setAttribute('rel', 'noopener'); } } // Add current year to copyright notices document.getElementById( 'js-current-year' ).textContent = new Date().getFullYear(); // Call functions after page load extLink(); //addAlly(); setTimeout(function() { //addAlly(); //ideally we should only need to run it once ↑ }, 5000); }; function closeCookieDialog (){ document.getElementById("cookie-consent").style.display = "none"; return false; } setTimeout ( function () { closeCookieDialog(); }, 15000); console.log(`%c 🚀🚀🚀 ███ █████ ███████ █████████ ███████████ █████████████ ███████████████ ███████ ███████ ███████ ┌───────────────────────────────────────────────────────────────────┐ │ │ │ Towards AI is looking for contributors! │ │ Join us in creating awesome AI content. │ │ Let's build the future of AI together → │ │ https://towardsai.net/contribute │ │ │ └───────────────────────────────────────────────────────────────────┘ `, `background: ; color: #00adff; font-size: large`); //Remove latest category across site document.querySelectorAll('a[rel="category tag"]').forEach(function(el) { if (el.textContent.trim() === 'Latest') { // Remove the two consecutive spaces (  ) if (el.nextSibling && el.nextSibling.nodeValue.includes('\u00A0\u00A0')) { el.nextSibling.nodeValue = ''; // Remove the spaces } el.style.display = 'none'; // Hide the element } }); // Add cross-domain measurement, anonymize IPs 'use strict'; //var ga = gtag; ga('config', 'G-9D3HKKFV1Q', 'auto', { /*'allowLinker': true,*/ 'anonymize_ip': true/*, 'linker': { 'domains': [ 'medium.com/towards-artificial-intelligence', 'datasets.towardsai.net', 'rss.towardsai.net', 'feed.towardsai.net', 'contribute.towardsai.net', 'members.towardsai.net', 'pub.towardsai.net', 'news.towardsai.net' ] } */ }); ga('send', 'pageview'); -->