
Unlocking the Black Box: A Comparative Study of Explainable AI Techniques: PDP vs. ALE, and SHAP vs. Breakdown

Last Updated on July 17, 2023 by Editorial Team

Author(s): Varsha Singh

Originally published on Towards AI.

Let's get a grip on the inner workings of a few popular XAI techniques, with easy-to-understand explanations.

Photo by fabio on Unsplash

As a data scientist, understanding the inner workings of our models is crucial. This instills confidence in the analytics solution and gives comfort to the stakeholders. But what happens when our models become too complex to interpret? Enter explainable AI, a field dedicated to interpreting and understanding the decisions made by our models. In this exploration, we’ll be taking a closer look at two of the most popular global explanation methods: Partial Dependence and Accumulated Local Effects, which help us understand the relationship between input and output variables. And we’ll also be comparing two local explanation techniques: SHAP Waterfall Plots and Breakdown Interaction Plots. Each of these techniques offers unique insights that are crucial to understanding our models. Let’s start exploring!

1. Introduction

I recently wrapped up an incredible internship where I was able to delve into cutting-edge research and development projects, one of which involved exploring and comparing various explainable artificial intelligence (XAI) techniques. I've compiled the work into a comprehensive notebook that details each aspect of the project using the Stroke Prediction Dataset, and condensed the insights and lessons learned into the explanations below.

The family of XAI packages out there

Below is a table of the most popular XAI packages and frameworks, including offerings from big names such as IBM, Google, and Microsoft. Please head over to my GitHub repo for quick and easy access to each reference.

https://github.com/singhvarsha0808/Comparative_Study_of_XAI_Techniques

2. Partial Dependence Plot vs. Accumulated Local Effects

This section provides a succinct overview of Partial Dependence and Accumulated Local Effects — two techniques for providing global explanations in machine learning. These methods help clarify how a model’s predictions change as individual feature values shift. A simple demonstration has been created to help illustrate their application and make understanding easier. The aim is to give you a comprehensible understanding of how these methods operate and how they can be utilized to gain deeper insight into a model’s predictions.

2.1 PDP

The Partial Dependence Plot (PDP) is a widely used method for interpreting machine learning models. It provides insight into the relationship between a feature and the target variable by visualizing the marginal effect that a feature has on the predicted outcome. The PDP is model-agnostic, which means that it can be used with any type of machine-learning model.

Image Source: The SAS Data Science Blog

The PDP allows us to determine whether the relationship between the target and a feature is linear, monotonic, or more complex. This information can be used to improve the model or to gain a deeper understanding of the data. It can also be used to validate the model's behavior and make sure it aligns with what business/domain experts expect.

How is partial dependence calculated?

The idea is to observe the model’s output when the feature is varied over a certain range while all other features are held constant. The concept of PDP is explained in great detail by Christoph Molnar in his book Interpretable Machine Learning.

A sample dataset has been created to exhibit the computation of partial dependence for a single feature. The dataset includes four features: age, bmi, heart disease (coded as 0 for patients without heart disease and 1 for patients with heart disease), and the predicted probability of stroke as determined by a machine learning model.

Sample dummy data for PD calculation | Source: Author

Please note that this data is just for example purposes and is not representative of any real-world scenario. The purpose of this exercise is to demonstrate the calculation of partial dependence for a single feature.

Our aim is to calculate the PDP for the feature ‘age’ and its impact on the target variable ‘stroke’. To calculate partial dependence, we need to follow these steps:

1. Define a grid: To calculate PDP, we first define a grid of values for the feature of interest. The grid should have a range that covers the possible values of the feature. In our example, the grid for the feature “age” is [3, 24, 45, 66].

2. Replace feature value with grid value: For each value in the grid, we replace the feature value in the data with the grid value. In this case, we replace the age of all observations with a value in the grid.

3. Take the average of predicted values: For each grid value, we run the model and obtain the predicted value. We then take the average of the predicted values over all observations. This gives us the average predicted value for each grid value.

Average prediction calculated per grid value for the sample data | Source: Author

4. Map out the partial dependence plot: We use the average predicted values obtained in the previous step to create a partial dependence plot. This plot shows the relationship between the feature and the target variable. The PDP for feature age can be represented as a line graph, where the x-axis is the age values defined in the grid, and the y-axis is the average of the predicted probabilities of having a stroke. This plot shows how the prediction of the probability of having a stroke changes as the value of age changes. The plot helps to understand the relationship between the feature age and the target variable, i.e., probability of having a stroke.

PD plot for feature ‘age’ | Source: Author

In the above example, we can see that the probability of having a stroke increases as age increases. This means that older people have a higher probability of having a stroke compared to younger people. This information can be used by medical professionals to identify the risk of stroke in different age groups and develop preventive measures accordingly.

It is important to keep in mind that partial dependence does not capture interactions between features and only gives us information about the effect of a single feature on the target variable. It may also present unrealistic scenarios, such as a 3-year-old having a BMI of 30. This is a key point of difference when we explore ALE in the next section.

It is important to note that partial dependence plots are based on the average of predictions made by the machine learning model and may not reflect the actual probability of having a stroke for each individual. The plots should be used as a tool for understanding the relationship between features and the target variable and not for making decisions about individual cases.
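To make the calculation steps above concrete, here is a minimal sketch of the computation in Python. It assumes a fitted scikit-learn-style classifier named model and a feature DataFrame X with an 'age' column; both names are placeholders, not part of the original notebook.

import pandas as pd

def partial_dependence(model, X, feature, grid):
    """Manually compute the partial dependence of `model` on `feature`."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value  # step 2: replace the feature with the grid value
        preds = model.predict_proba(X_mod)[:, 1]  # predicted probability of stroke
        averages.append(preds.mean())  # step 3: average over all observations
    return pd.Series(averages, index=grid, name=f"PD({feature})")

# step 1: the grid from the example; step 4 is simply plotting this series
pd_age = partial_dependence(model, X, "age", grid=[3, 24, 45, 66])
print(pd_age)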

2.2 ALE

Accumulated Local Effects (ALE) is a novel interpretation method that was introduced by Apley in 2018. Unlike other methods, ALE focuses on the differences in predictions rather than the averages, which makes it more robust to the impact of correlated features.

In order to calculate the local effects, ALE divides the feature into multiple intervals and calculates the differences in predictions for each interval. This provides a more accurate representation of the effect of the feature on the model’s prediction.

Overall, ALE plots are a more efficient and unbiased alternative to partial dependence plots (PDPs), making them an excellent tool for visualizing the impact of features on model predictions. By plotting the accumulated local effects, we can gain a deeper understanding of how features influence the model and make more informed decisions.

How are Accumulated Local Effects calculated?

Accumulated Local Effects (ALE) is a method of measuring the impact of a single feature on a target variable in a machine learning model. The ALE for a feature provides a visual representation of how the feature affects the target variable across different values of the feature. Christoph Molnar’s book “Interpretable Machine Learning” provides a comprehensive explanation of the ALE method.

Let’s calculate the ALE for the feature ‘age’ and its impact on the target variable ‘stroke’ using an example similar to the one in the previous section. To simplify the calculation, I’ve added a few more observations.

Sample dummy data for ALE calculation | Source: Author

Here are the steps involved:

  1. Select the Feature of Interest: In our case, it is ‘age’.
  2. Define Interval/Neighbor Regions: Divide the age feature into intervals; for example, for age 3, the interval could be all ages between 2 and 6.
  3. Calculate ALE per Interval: For each interval, a) Replace the feature with the lower limit value and calculate predictions. b) Replace the feature with the upper limit value and calculate predictions. c) Average the difference in predictions between (a) and (b). d) Accumulate the effects across all intervals so that the effect of interval X3 is the accumulated effect of X1, X2, and X3.
  4. Center the ALE: Finally, center the accumulated feature effects at each interval so that the mean effect is zero.
Estimated calculation of ALE for feature ‘age’ based on sample data | Source: Author
Source: https://christophm.github.io/interpretable-ml-book/ale.html

5. Plot the ALE Curve: Plot the accumulated local effects to visualize the impact of the feature ‘age’ on the target variable ‘stroke’.

ALE plot for feature ‘age’ | Source: Author
Source: https://christophm.github.io/interpretable-ml-book/ale.html
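For intuition, here is a simplified sketch of steps 1 through 4 in Python, again assuming a fitted classifier model and a DataFrame X; the interval edges below are illustrative placeholders, and note that the full method also weights the centering step by bin counts.

import numpy as np

def accumulated_local_effects(model, X, feature, edges):
    """Simplified first-order ALE for one numeric feature."""
    local_effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # step 2: only observations whose feature value falls in this interval
        in_bin = X[(X[feature] >= lo) & (X[feature] < hi)]
        if len(in_bin) == 0:
            local_effects.append(0.0)
            continue
        X_lo, X_hi = in_bin.copy(), in_bin.copy()
        X_lo[feature] = lo  # step 3a: lower interval limit
        X_hi[feature] = hi  # step 3b: upper interval limit
        diff = model.predict_proba(X_hi)[:, 1] - model.predict_proba(X_lo)[:, 1]
        local_effects.append(diff.mean())  # step 3c: average difference
    ale = np.cumsum(local_effects)  # step 3d: accumulate across intervals
    return ale - ale.mean()  # step 4: simple centering so the mean effect is zero

ale_age = accumulated_local_effects(model, X, "age", edges=[2, 6, 30, 50, 70])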

2.3 Summary Comparison

The Shapash package is used to create PD plots, and the Dalex package to generate ALE plots, as demonstrated below.

The model_profile() function of the Dalex explainer object can generate both PD and ALE plots through the "type" argument, which defines the type of model profile. The results of the computation are stored as a data frame in the "result" field, which is used to generate side-by-side PD and ALE plots, facilitating an in-depth comparison.
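For reference, the Dalex calls look roughly like this, where model, X, and y stand in for the fitted model and training data:

import dalex as dx

explainer = dx.Explainer(model, X, y, label="stroke model")

# type="partial" gives PD profiles; type="accumulated" gives ALE profiles
pdp = explainer.model_profile(type="partial", variables=["age"])
ale = explainer.model_profile(type="accumulated", variables=["age"])

# the computed profiles live in the `result` data frame of each object
print(pdp.result.head())
print(ale.result.head())

pdp.plot(ale)  # overlay both profiles for a side-by-side comparison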

ALE vs PDP plot using Dalex package | Source Code: https://github.com/singhvarsha0808/Comparative_Study_of_XAI_Techniques

The following is a brief comparison based on the study conducted.

ALE vs PDP summary comparison | Image Source: Author

Method: https://christophm.github.io/interpretable-ml-book/ale.html

3. SHAP Waterfall vs. Breakdown Interaction Plots

This section begins with a general overview of SHAP and Breakdown, followed by a comparison of SHAP waterfall plots and Breakdown interaction plots based on case studies conducted on two datasets.

3.1 SHAP

SHAP, or SHapley Additive exPlanations, is a powerful method for understanding the decisions made by predictive algorithms. Developed in 2017 by Lundberg and Lee, SHAP is considered to be state-of-the-art in machine learning explainability.

There are three main classes of explainers in SHAP: TreeExplainer, DeepExplainer, and KernelExplainer. The TreeExplainer is model-specific and works best with decision tree-based models, while the DeepExplainer is also model-specific and is designed for deep neural networks. The KernelExplainer, on the other hand, is model-agnostic and can be used with any model.

The idea behind SHAP is based on Shapley values, a solution concept in cooperative game theory. A Shapley value is the average expected marginal contribution of one player after all possible combinations have been considered. This helps to determine a payoff for all of the players when each player might have contributed more or less than the others.

In the context of machine learning, the game refers to reproducing the outcome of the model, and the players are the features included in the model. What Shapley does is quantify each feature’s contribution to the game, or in other words, it quantifies the contribution that each feature makes to the model’s prediction. It is important to note that the game in this context only refers to one observation.

The infographic shows how SHAP helps us understand impact magnitude (bar length) and direction (color).

In summary, SHAP is a powerful tool for understanding the decisions made by predictive algorithms by quantifying the contribution of each feature to the prediction. Its ability to work with any model makes it a versatile option for explainability in machine learning.

How are Shapley values calculated?

For a thorough understanding of the Shapley value calculation, a nice article on the topic is highly recommended. Additionally, a YouTube video that explains the concept is also available. To give a brief overview, the following steps are typically followed in the calculation of Shapley values for a feature of interest F.

  1. Create the set of all possible feature combinations (called coalitions).
  2. Calculate the average model prediction.
  3. For each coalition, calculate the difference between the model’s prediction without F and the average prediction.
  4. For each coalition, calculate the difference between the model’s prediction with F and the average prediction.
  5. For each coalition, calculate how much F changed the model’s prediction from the average (i.e., step 4 minus step 3) — this is the marginal contribution of F.
  6. Shapley value = the average of all the values calculated in step 5 (i.e., the average of F’s marginal contributions).
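For intuition, here is a brute-force sketch of these steps in Python. The value function below is a hypothetical stand-in: in practice it is estimated by averaging model predictions over the features outside the coalition, and the loop is exponential in the number of features, so this is for illustration only.

from itertools import combinations
from math import factorial

def shapley_value(value, features, f):
    """Exact Shapley value of feature `f`, given a coalition value function."""
    others = [g for g in features if g != f]
    n = len(features)
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):  # step 1: all coalitions without f
            s = set(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            phi += weight * (value(s | {f}) - value(s))  # steps 3-5: marginal contribution
    return phi  # step 6: weighted average of f's marginal contributions

# toy additive value function over three features, for illustration only
base = 0.27
contrib = {"age": 0.30, "bmi": 0.05, "heart_disease": 0.10}
value = lambda s: base + sum(contrib[g] for g in s)
print(shapley_value(value, list(contrib), "age"))  # -> 0.30 in an additive game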

Example from the Dataset

SHAP waterfall plot for True Positive case (predicted and actual — stroke case) | Image Source: Author

SHAP waterfall plots show how predictions are made based on variable values. The plot starts at the bottom and shows additions or subtractions of values to reach the final prediction. In the above example, a True Positive case (an individual correctly identified as having a stroke) in the X_test dataset is used to demonstrate how the plot works. The base value of -0.192 is the average over all observations, and the final prediction of 0.14 is reached through additions and subtractions of feature contributions.
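A minimal sketch of producing such a waterfall plot with the shap package, assuming a fitted tree-based model and an X_test DataFrame (the row index 42 is a placeholder):

import shap

explainer = shap.TreeExplainer(model)  # model-specific explainer for tree models
shap_values = explainer(X_test)  # an Explanation object, one row per observation

# waterfall plot for a single observation, e.g., the True Positive case above;
# for some binary classifiers, select the positive class: shap_values[42, :, 1]
shap.plots.waterfall(shap_values[42])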

3.2 Breakdown

Break Down is a model-agnostic, instance-specific explanation method, first published in 2018 by Mateusz Staniak and Przemysław Biecek. It uses a greedy strategy to iteratively identify and fix features based on their influence on the overall average predicted response.

The method starts with the mean expected model response and then successively adds variables in a sequence of their increasing contributions. Consecutive rows present changes in the mean prediction induced by fixing the value of a particular explanatory variable. This means that the order in which the variables are added can influence the contribution values.

Break Down also includes a feature called “Break Down Interaction”, which is based on the notion of interaction (deviation from additivity). This means that the effect of an explanatory variable depends on the value(s) of another variable(s).

In terms of explainability, Break Down is useful for answering the question, "Which variables contribute the most to a single observation result?" By sequentially fixing features based on their influence on the overall average predicted response, Break Down helps identify which variables matter most to the prediction for a specific instance.

How is it calculated?

When it comes to understanding the contributions of individual variables toward a prediction, breakdown plots are a valuable tool. These plots show how the contributions attributed to individual variables move the model's mean prediction toward the prediction for a specific instance. Deriving breakdown plots is a simple process that can be done in a few easy steps.

  1. The starting point is the mean prediction of the model.
  2. Next, fix one explanatory variable (X1) at the value of the current instance, calculate all predictions, and take the mean.
  3. Fix X1 and X2 at their respective values and calculate all predictions, taking the mean once again.
  4. Repeat this process for all features.
  5. The last row represents the model’s prediction for the current instance.

By following these steps, you can create a breakdown plot that provides a clear picture of how each variable contributes to the final prediction. This can be especially useful when trying to understand the impact of specific variables or when dealing with complex models.
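In Dalex, both the plain breakdown and the interaction variant are exposed through the predict_parts() function; a rough sketch, reusing the explainer from Section 2.3 and a single row observation from the test data:

bd = explainer.predict_parts(observation, type="break_down")
bd_inter = explainer.predict_parts(observation, type="break_down_interactions")

bd.plot(bd_inter)  # compare the plain and interaction breakdowns side by side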

Example from the Dataset

Breakdown interaction plot for True Positive case | Image Source: Author

The model’s mean prediction for the stroke prediction dataset is 26.7%. This value represents the average predicted probability of stroke over all individuals in the dataset. It is important to note that this is not the percentage of individuals who actually had a stroke but rather the average model prediction. For a specific individual, the model’s prediction is 83%, which is much higher than the mean prediction. The explanatory variable with the largest impact on this prediction is age: fixing this variable at the individual's value increases the mean prediction by 46 percentage points. The effect of all other explanatory variables is smaller in comparison.

Break-down Plots for Interactions

The image below provides a clear understanding of the interaction plot. The table displays the stroke prediction results for individuals in the hypothetical stroke prediction dataset, split by heart disease status and age. The overall proportion of stroke cases is 15.4%, but it rises to 34.3% for individuals over 30 years old who have a history of heart disease. The figure demonstrates the impact of considering heart disease and age in a specific order, revealing an interaction: considered on its own, the presence of heart disease elevates the stroke probability from 15.4% to 35.4%, yet adding the age condition brings it to 34.3%. This highlights the complexity of evaluating the role of explanatory variables in model predictions when interactions are present.

Image Source: Author | Referenced: https://ema.drwhy.ai/iBreakDown.html

3.3 Summary comparison

The following is a brief comparison based on the study.

SHAP waterfall vs i-Breakdown plot summary comparison | Image Source: Author

4. Conclusion

Both PDP and ALE have their strengths and limitations. ALE has an advantage over PDP in addressing bias from correlated features, while both have similar runtime; given these advantages, ALE may be worth exploring further. When comparing the SHAP waterfall plot and the Breakdown plot, SHAP is faster in terms of computation time, while the Breakdown plot is more intuitive and easier to understand. Both methods have limitations, and the specific use case and requirements should be considered before choosing one over the other. It is also important to consider the additive or non-additive nature of the model being explained. Currently, Breakdown may be more suitable for smaller, less sparse datasets due to its heavy computational runtime.

I extend my heartfelt thanks to my colleague Prateek for his guidance throughout the project. Working with XAI techniques was an enriching experience, and I am excited to continue exploring this field. The advancements in this field and the growing contributions from the open-source community are a testament to the increasing importance of understanding AI models, especially as they become more prevalent and impactful in our daily lives.

References

  1. https://christophm.github.io/interpretable-ml-book/
  2. https://www.darpa.mil/program/explainable-artificial-intelligence
  3. https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/?sh=3c062d987c9e
  4. https://towardsdatascience.com/shap-explain-any-machine-learning-model-in-python-24207127cad7
  5. https://www.youtube.com/watch?v=u7Om2joZWYs
  6. https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30
  7. https://www.sciencedirect.com/science/article/pii/S0740624X21001027
  8. https://christophm.github.io/interpretable-ml-book/ale.html
  9. https://towardsdatascience.com/explainable-ai-xai-methods-part-3-accumulated-local-effects-ale-cf6ba3387fde
  10. https://docs.oracle.com/en-us/iaas/tools/ads-sdk/latest/user_guide/mlx/accumulated_local_effects.html
  11. https://towardsdatascience.com/partial-dependence-plots-with-scikit-learn-966ace4864fc
  12. https://towardsdatascience.com/explainable-ai-xai-methods-part-1-partial-dependence-plot-pdp-349441901a3d
  13. https://shap.readthedocs.io/en/latest/index.html
  14. https://ema.drwhy.ai/breakDown.html
  15. https://medium.com/responsibleml/basic-xai-with-dalex-part-4-break-down-method-2cd4de43abdd
  16. https://uc-r.github.io/dalex
  17. https://arxiv.org/abs/1903.11420


Published via Towards AI
