
Test-Driven Application Development with Large Language Models
Last Updated on July 17, 2023 by Editorial Team

Author(s): Prajwal Paudyal

Originally published on Towards AI.

Towards engineering an LLM application. (Image created by author on Stable Diffusion)

The following are my insights about Test-Driven Development for applications powered by Large Language Models.

I have been working on an application that generates late-night-style TV shows and stand-up comedy videos in an end-to-end automated way. If you haven’t seen it, I have posted a few episodes already on this channel; check it out!

If you find the content helpful, please subscribe. I’ll post details about the application I’m building in subsequent posts.

TL;DR

Developing applications with Large Language Models (LLMs) using Test Driven Development (TDD) presents several challenges and insights.

  1. Testing generative models like LLMs is difficult given their complexity and the ‘creative’ nature of output — but it is crucial for automation and safety.
  2. The nature of testing is shifting; it’s easier to discriminate than to generate. Therefore, using another LLM to test the outputs of the original LLM can be beneficial.
  3. Not all LLMs are created equal, so selection is crucial and should be done according to the use case, following relevant benchmarks as well as privacy, security, and cost considerations. Robustness is another important consideration and can be assessed using perturbation testing, ensuring that similar inputs give similar outputs.
  4. Duck typing in Python is powerful but can cause integration headaches like runtime errors, incorrectness, and difficulties in generating code and documentation. Tools like MyPy and Pydantic are a must for type-checking and output parsing.
  5. Execution testing involves checking the output of LLMs. Two ways to accomplish this are compile-time property testing (tuning instructions or prompts) and run-time output testing (using another LLM to auto-generate test cases).
  6. Bug discovery, planning, and iteration require an interactive approach with the LLM, prioritizing recall over precision. Using an LLM to enumerate and iterate on use cases and test cases is suggested, with the added advantage of LLMs being excellent at summarizing and categorizing information.

Types of testing — by Development Stage

First things first: testing generative models is tough due to the ‘creative’ nature of their output, but it is fundamentally essential, especially with instruction-tuned and safety-aligned models. One of the frequent failure cases is the model refusing to produce an output and answering with “As a language model ...”.

While developing the LLM application in a TDD way, I have found it helpful to think of the various types of tests needed by the ‘stage’ of development.

Stage One: LLM Selection

The starting point of my process was realizing that not all LLMs are identical. The demands of particular use-cases may necessitate specialized LLMs, taking into account quality and privacy requirements. Here’s what I focused on during selection:

1. Benchmarks

Benchmarks are useful to select LLMs if the intended output task is close enough to a standard benchmark.

OpenAI has published GPT-4’s performance on several benchmarks. Likewise, the Open LLM Leaderboard aims to objectively track, rank, and evaluate the proliferation of large language models (LLMs) and chatbots, sifting through the hype to showcase genuine progress in the field. The models are evaluated on four main benchmarks from the EleutherAI Language Model Evaluation Harness, ensuring comprehensive assessment across diverse tasks. The leaderboard also enables community members to submit their Transformer models for automated evaluation on the platform’s GPU cluster. Even models with delta-weights for non-commercial licenses are eligible. The selected benchmarks — AI2 Reasoning Challenge, HellaSwag, MMLU, and TruthfulQA — cover a broad spectrum of reasoning and general knowledge, tested in 0-shot and few-shot scenarios.

2. Perturbation testing:

LLMs intended to be used with variable prompts must be locally consistent: inputs that are semantically similar should produce similar outputs. One way to test this is perturbation testing with an LLM-powered test suite.

A sample report for perturbation testing generated at Fiddler AI (src: Fiddler)

The general approach is as follows:

  1. Introduce Perturbations: Utilize another LLM to rephrase the original prompt while retaining its semantic essence. Then, supply both the perturbed and the original prompts to the LLM under assessment.
  2. Analyze Generated Outputs: Assess the generated responses for either accuracy (if a standard response is available) or consistency (judged by the similarity of generated outputs if no standard response exists) — the essence of this article.
  3. Iterate: Any errors should yield insights toward better prompts, different models, better instruction tuning, etc. (a minimal sketch of this loop follows).
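To make this concrete, here is a minimal sketch of such a perturbation loop. It assumes the pre-1.0 OpenAI Python client and uses a crude lexical similarity as a stand-in for the embedding-based comparison a real suite would use.

import openai
from difflib import SequenceMatcher

def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    resp = openai.ChatCompletion.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp["choices"][0]["message"]["content"]

def rephrase(prompt: str) -> str:
    # Step 1: use another LLM call to perturb the prompt, keeping its meaning.
    return ask(f"Rephrase this, preserving its exact meaning:\n{prompt}")

def perturbation_test(prompt: str, n: int = 3, threshold: float = 0.7) -> list:
    baseline = ask(prompt)
    failures = []
    for _ in range(n):
        variant = rephrase(prompt)
        output = ask(variant)
        # Step 2: semantically similar inputs should yield similar outputs.
        score = SequenceMatcher(None, baseline, output).ratio()
        if score < threshold:
            # Step 3: failures feed iteration on prompts or model choice.
            failures.append((variant, output, round(score, 2)))
    return failures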

Read the excellent article and tool by Fiddler AI on perturbation testing.

Stage Two: Type Checking and Integration Testing

Python, the lingua franca of this domain, provides the flexibility of duck typing, which is crucial for quick iterations. However, to develop robust software engineering applications, I found it indispensable to ensure thorough syntactic correctness. I would advise always using type hints, but handling or failing gracefully instead of raising breaking errors at runtime. Here are some tools I found useful:

  1. MyPy: An efficient static type checker for Python.
  2. Pydantic: It has become my go-to tool for output parsing. Its high extensibility and excellent integration with Langchain are bonus points (see the sketch after this list).
  3. Langchain: The output parsers in Langchain can be employed to create repeated instructions for output as well as automated tests.
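As an illustration, here is a minimal sketch of graceful output parsing with Pydantic (v1-style API, as Langchain used at the time); the Summary schema is a made-up example.

from typing import List, Optional
from pydantic import BaseModel, ValidationError

class Summary(BaseModel):
    title: str
    bullet_points: List[str]

def parse_llm_output(raw: str) -> Optional[Summary]:
    try:
        # Validates JSON structure, required fields, and field types.
        return Summary.parse_raw(raw)
    except ValidationError as err:
        # Fail gracefully instead of raising a breaking error at runtime.
        print(f"LLM output failed schema validation: {err}")
        return None

ok = parse_llm_output('{"title": "Demo", "bullet_points": ["a", "b"]}')
bad = parse_llm_output('{"title": "Demo"}')  # missing field -> None, not a crash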

Beyond this, integration testing is not much different for LLM applications than for other software applications, so I won’t go into much detail.

Stage Three: Runtime Output Testing

Testing the outputs of a generative model can be tricky; this is known as the ‘test oracle’ problem. Nevertheless, the principle that discrimination is less complex than generation can be helpful here.

Property-based software testing, such as metamorphic testing, is a useful approach for addressing the test oracle problem as well as for test case generation. In essence, this is done by testing on a known or derivable property.

For example, consider testing the response to the query “How many distinct cities are there in a particular state?” How do we determine whether the result is correct and complete? This is a test oracle problem. Based on a metamorphic relation, we can ask the LLM how many cities in the state begin with the letters A through M; this should return a subset of the previous results. A violation of this expectation reveals a failure of the system. In this way, several tests can be derived either during development or live at runtime, as explained below. Property-based testing and metamorphic testing were originally proposed as software verification techniques, but the concepts cover verification, validation, and software quality assessment.
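A minimal sketch of that cities example, assuming a hypothetical ask_for_list helper that queries the LLM and parses its reply into a Python list:

def metamorphic_city_test(ask_for_list, state: str) -> bool:
    # ask_for_list is a hypothetical helper: LLM query -> list of strings.
    all_cities = set(ask_for_list(f"List all distinct cities in {state}."))
    a_to_m = set(ask_for_list(
        f"List all distinct cities in {state} whose names begin "
        "with the letters A through M."
    ))
    # Metamorphic relation: the filtered answer must be a subset of the full
    # answer. We cannot verify completeness directly (the oracle problem),
    # but any violation of this relation definitely reveals a failure.
    return a_to_m.issubset(all_cities)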

Pre-composed property testing:

This is particularly useful for instruction tuning or prompt engineering. For instance, if the LLM is supposed to summarize a webpage while eliminating all hyperlinks and emojis, I would start by writing straightforward procedural test cases or LLM prompts for these tasks.

This approach works well if the types of output expected are known in advance. In these scenarios, the testing isn’t much different from what is possible using testing frameworks like Robot, Pytest, Unittest, etc. Using semantic similarity with a threshold for fuzziness is useful, as sketched after the examples below.

For instance:

  1. An application to extract and summarize the ‘main’ article in a webpage while ignoring extra links, comments, etc. A battery of tests can be designed using existing, known web pages. Positive examples: the semantics match the main page. Negative examples: unrelated topics.
  2. An application to remove negative sentiments, emojis, etc., and summarize text to fewer than 3 sentences. Use procedural tests or LLM-based tests to check those cases specifically.
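For instance, the first example might be tested with Pytest along these lines. This is a sketch assuming the sentence-transformers library for semantic similarity and a hypothetical summarize_page function under test, with made-up fixture values.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantically_similar(a: str, b: str, threshold: float = 0.75) -> bool:
    # Cosine similarity of sentence embeddings, with a threshold for fuzziness.
    emb = model.encode([a, b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

KNOWN_URL = "https://example.com/known-article"          # hypothetical fixture
GOLD = "A short reference summary of the main article."  # hypothetical fixture

def test_summary_matches_main_article():
    assert semantically_similar(summarize_page(KNOWN_URL), GOLD)

def test_summary_ignores_unrelated_topics():
    assert not semantically_similar(summarize_page(KNOWN_URL),
                                    "Text about a completely unrelated topic.")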

This is an important topic, so let’s dive further.

Concept: Discrimination is easier than generation

It may seem counter-intuitive to use another model to test the output of a model: if the second model can ‘grade’ the output, why not use the second model to generate the output in the first place?

For instance, in Generative Adversarial Networks (GANs), two components, the Generator and the Discriminator, interact in a game-theoretic way. The Generator creates new data instances, while the Discriminator assesses these instances for authenticity. This structure exemplifies the idea that “discrimination is easier than generation.”

The Generator’s task is to generate new data, such as images, that convincingly mimic real-world examples. This requires learning complex patterns and subtleties, making it a difficult and intricate job.

In contrast, the Discriminator’s role is to classify whether given data instances are real or generated. This task is relatively simpler as it involves identifying distinguishing patterns and features without the need to understand how to create the data instances.

Take the following example. The first image was created using Stable Diffusion (by the author) for the prompt:

“A cat holding a water bottle in front of Big Ben on a rainy day.” This is a difficult image to create, but it is easy to perceive that there is a cat in the image.

Image credit: Author on Stable Diffusion 1.5
The output above uses the DETR object detection model, which is much more lightweight. (src: Hugging Face)

Use an LLM to runtime-test an LLM — use a separate model as a discriminator to verify that the conditions are met

The generative model not only needs to understand what a ‘cat’ is well enough to create one, but also needs to understand what a bottle is, what Big Ben is, and what it means to be a rainy day, and compose it all together (it does give the cat a human hand, but oh well!). The discriminative model, however, only needs to understand each of these concepts in isolation, as in the figure below.

A difficult condition to generate results in GPT-4 misunderstanding the prompt and producing an incorrect output.

Thus, even if you use a more powerful model (like GPT-4) to generate a response, the response can be tested for correctness — an execution test or output test — even by using the same or a lesser model (GPT-3, for instance).

Using a model (without shared context) to test the output reveals one ‘property’ condition that was met and two that were unmet.

Example: Even a less-powerful model can act as a discriminator

The prompts here could be better engineered, but the gist is that GPT-3.5 can be used to output-test GPT-4 at runtime.
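A minimal sketch of that pattern, again assuming the pre-1.0 OpenAI client: GPT-4 generates, and GPT-3.5 checks each property in isolation, which is the easier, discriminative task.

import openai

def chat(model: str, prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp["choices"][0]["message"]["content"]

def check_property(output: str, prop: str) -> bool:
    # The discriminator sees one property at a time, with no shared context.
    question = (f"Answer YES or NO only. Does the following text satisfy "
                f"this condition: '{prop}'?\n\n{output}")
    return chat("gpt-3.5-turbo", question).strip().upper().startswith("YES")

story = chat("gpt-4", "Write a two-sentence story about a cat holding a "
                      "water bottle in front of Big Ben on a rainy day.")
for prop in ["mentions a cat", "mentions rain", "is at most two sentences"]:
    print(prop, "->", "met" if check_property(story, prop) else "unmet")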

Run-time output testing:

These types of tests are useful if the nature of the application or task is not known in advance — for instance, any application that accepts a ‘prompt’ from a user and does work dynamically: summarize this, convert this to SQL, etc.

In these cases, not all is lost, as we can easily use another LLM to design quick correctness tests on the fly (see below).

This, although counterintuitive, works well, as discrimination is easier than generation (as discussed above).

This is an example of using GPT-4 to generate test cases for the output on the fly. The generated response above needs output parsing to arrive at specific test cases, but that can be handled using a better input prompt.
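A minimal sketch of this on-the-fly test generation, assuming the same pre-1.0 OpenAI client; the generated questions can then be fed to a discriminator such as the check_property sketch above. Prompting the model for JSON keeps the output parsing manageable.

import json
import openai

def generate_tests(user_task: str) -> list:
    prompt = (
        "A user asked an assistant to perform this task:\n"
        f"{user_task}\n\n"
        "Write 3 yes/no questions that check whether a response fulfils the "
        "task. Return only a JSON list of strings."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    try:
        return json.loads(resp["choices"][0]["message"]["content"])
    except json.JSONDecodeError:
        return []  # a better input prompt reduces how often this happens

tests = generate_tests("Summarize this page in under 3 sentences, no emojis.")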

Stage Four: Bug Discovery, Planning, and Iteration

A noteworthy lesson I’ve learned: “You can test for the presence of a bug but never for the absence of one.” At this stage, it is paramount to interact with the LLM in a more inquisitive and explorative manner, prioritizing recall over precision. Here’s how I apply this:

  1. Use an LLM to generate and test use cases: For each LLM application, I employed an LLM to both produce and test use cases. While these might already be defined in some instances, the creative nature of LLMs can prove advantageous.
  2. Iterate on test cases: I’ve discovered that LLMs are exceptional partners in summarizing and categorizing information and ideas. This is extremely useful when iterating on the test cases for each use case.
  3. Repeat or auto-repeat with a low temperature: Drill down and consistently repeat or automate this process, using a low-temperature setting for more reliable outputs.
Same example as above, but focused on the ‘interactive’ part of working with an LLM to generate test cases, discover bugs, and drill down.
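A minimal sketch of this explorative loop, assuming the pre-1.0 OpenAI client and a made-up application description:

import openai

def ask(prompt: str, temperature: float = 0.2) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=temperature,  # low temperature for more reliable outputs
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

# Asking for "every" use case prioritizes recall over precision.
app = "an app that auto-generates late-night-style comedy videos"  # made up
use_cases = ask(f"Enumerate every distinct use case for {app}. "
                "Favor completeness over precision; one per line.")
for use_case in use_cases.splitlines():
    if use_case.strip():
        # Drill down: iterate on test cases for each enumerated use case.
        print(ask(f"List edge-case tests for this use case: {use_case.strip()}"))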

Conclusion

In conclusion, it's still early days, and things are in flux in the ever-evolving landscape of generative AI and LLMs. But one thing is for sure: the testing process seems to be as vital as development. I hope these insights from my personal journey are helpful to you in some way.

This article is far from complete, as I haven’t talked about other forms of testing, especially security testing around prompt injection, jailbreaks, etc. However, that is a topic for another post.

If you like the article, follow me here and on LinkedIn and YouTube for more, and comment your thoughts.

References

  1. Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
  2. Langchain Output Parsers
  3. Pydantic
  4. Guidance — a library for controlling LLMs by interleaving generation, prompting, and logical control
  5. Metamorphic Testing — a property-based software testing framework
  6. Adaptive Testing and Debugging of Language Models
  7. Auditor by Fiddler AI
  8. Metamorphic Testing


Published via Towards AI
