
Explainable and Interpretable Models are Important in Machine Learning

Last Updated on August 1, 2023 by Editorial Team

Author(s): Suhas Maddali

Originally published on Towards AI.

Learn to use libraries such as LIME and SHAP to understand the workings of black-box machine learning models, with examples and illustrations, helping ensure they are safe, fair, and less biased

Photo by Alexander Grey on Unsplash

Artificial intelligence and machine learning models have garnered significant attention due to their incredible capabilities in generating texts, predicting sentiments, and making accurate forecasts. However, a growing focus within this field is on developing explainable and interpretable models.

Consider the scenario of building a loan default classifier based on factors like age, gender, and race. While the classifier may accurately predict defaults for a specific subset of the population, questions arise regarding its accuracy for other groups. Unfair advantages and biases can emerge, granting certain groups easier access to loans while disadvantaging others. Therefore, it is crucial to prioritize the interpretability and explainability of models before deploying them in real-world applications. This example highlights just one way in which biases can manifest, emphasizing the need for careful consideration.

In this article, we delve into libraries that facilitate interpretability in machine learning models, such as LIME, SHAP, and others. By employing these libraries, we gain a comprehensive understanding of their workings and their ability to elucidate the importance of various features in driving model outcomes. Additionally, we explore the interpretability of ML models without relying on LIME and SHAP, utilizing alternative approaches.

By emphasizing interpretability, we aim to address biases and promote fairness in machine learning models. Understanding the inner workings and impacts of these models is crucial for responsible deployment and the creation of equitable solutions.

Import the Libraries

We will import a set of libraries needed for the task of predicting whether a person earns more than $50k per annum based on features such as age, workclass, and education. We will also perform exploratory data analysis on this data to understand it fully and to surface insights for the business and stakeholders.

We import numpy, which is used for processing data in the form of arrays.

pandas is used to convert the categorical features into one-hot encoded features, which makes it easier for the ML model to make predictions.

sklearn offers a wide range of options for machine learning. We have access to a large number of models, such as random forests and decision trees. It also provides train_test_split, which divides the data into train and test sets.

lime is used to obtain the feature importance of values in our data. This can help us determine which features are really important in deciding whether a person makes above or below $50k in annual income.
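
A minimal set of imports covering everything below might look like this (matplotlib and seaborn are assumptions for the plots):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # assumed for plotting
import seaborn as sns            # assumed for plotting

from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report

import lime
import lime.lime_tabular
import shap
```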

Read the Data

To facilitate data analysis, we utilize the fetch_openml function, which retrieves the dataset over the network, so the data does not need to be stored locally on our device. Let's examine a code snippet that demonstrates how to load the data and access the first five rows:
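
A sketch of that snippet, assuming the OpenML adult census income dataset (the dataset name and version are assumptions):

```python
# Fetch the adult census income dataset directly from OpenML;
# as_frame=True returns the data as a pandas DataFrame.
data = fetch_openml("adult", version=2, as_frame=True)
df = data.frame

# Inspect the first five rows
print(df.head())
```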

As we can see, features such as age, workclass, and education help determine whether a person makes above or below $50k per annum.

Exploratory Data Analysis (EDA)

This is an important step in machine learning, where we tend to find patterns and trends in the data. In addition, this can also help in determining the presence of outliers, missing values, or errors in the data. In this way, we can make modifications to the data or remove features that contain these values. As a result, this leads to a more robust machine learning model that is capable of capturing a general view and making future predictions.
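
A minimal sketch of the first checks (shape, missing values, and summary statistics):

```python
# Shape of the data, missing values per column,
# and a high-level statistical description
print(df.shape)
print(df.isnull().sum())
print(df.describe())
```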

In the code cell above, we retrieve the shape of the data, count the null values, and generate a high-level overview or description of the data.

Description of Data (Image by Author)

There are a few missing values in workclass, occupation, and native-country. We can treat these missing values either by imputing them or by removing the affected rows so that our models perform well on predictions.

The average age of participants in our data is about 38 years. The youngest individual in our data is 17 years old, and the oldest is 90. Individuals work about 40 hours a week on average, with some participants working as many as 99 hours per week.
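
A sketch of the next cell, which counts and plots the target classes (the target column name class is an assumption based on the OpenML version of the dataset):

```python
# Count how many individuals fall into each income category
print(df["class"].value_counts())

# Visualize the class balance
sns.countplot(x="class", data=df)
plt.title("Income Distribution")
plt.show()
```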

In the code cell above, we explore the distribution of the target variable: whether individuals earn above or below $50k per annum.

Income Distribution (Image by Author)

The dataset we are working with exhibits a higher proportion of individuals with an income below $50k per year, which is reflective of real-world data, where higher salaries are relatively less common. This skew in the distribution can impact model performance: with more data points in the lower income category, the model may perform better for that group and worse for higher earners.

Having explored the income category, we will now proceed to examine additional descriptive plots that provide us with a comprehensive understanding of the dataset. These visualizations will help us gain insights into various aspects of the data beyond just income, enabling us to make informed decisions and develop a more accurate model.
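
A sketch of the age histogram cell:

```python
# Histogram of participant ages
df["age"].plot(kind="hist", bins=30, edgecolor="black")
plt.xlabel("Age")
plt.title("Age Histogram")
plt.show()
```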

The code cell above plots the age histogram. This helps us understand the age distribution of individuals in our dataset before we make predictions and work on the interpretability of the models.

Age Histogram (Image by Author)

The plotted data reveals the presence of outliers in the age variable, with a few individuals appearing to be around 90 years old. However, the majority of participants fall within the age range of 30 to 50 years, which aligns with the expectation that working individuals typically belong to this age group. Additionally, relatively few participants are younger than their early twenties. These observations provide valuable insight into the age distribution and demographic composition of the participants.

Correlation plots give us a good understanding of the relationships between features, showing how strongly one feature is correlated with another. Heatmaps are a great way to visualize these pairwise correlations.
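
A sketch of how such a heatmap might be produced over the numeric columns:

```python
# Correlation heatmap over the numerical features
numeric_df = df.select_dtypes(include="number")
sns.heatmap(numeric_df.corr(), annot=True, cmap="coolwarm")
plt.title("Correlation Heatmap")
plt.show()
```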

Correlation Heatmap (Image by Author)

It is expected and intuitive to observe high correlation values (indicated by the red color) when a feature is compared to itself. On the other hand, the lack of high correlations between other features is a positive sign. When features are not highly correlated with each other, it suggests that they provide unique and independent information, which can enhance the performance of machine learning models by reducing redundancy and the risk of overfitting.

Given the observed correlation patterns, we can confidently utilize this dataset with various machine learning classifiers to predict income levels for individuals. The relatively low inter-feature correlations indicate that each feature contributes distinct and valuable information, making the dataset suitable for building robust and reliable models.

Boxplots give a good representation of the outliers present in the data, along with the spread and the median of various categories. Here we focus on the age spread of people making above or below $50k in annual income.
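
A sketch of the boxplot cell (reusing the assumed class column):

```python
# Age spread for each income group
sns.boxplot(x="class", y="age", data=df)
plt.title("Income based on Age")
plt.show()
```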

Income based on Age (Image by Author)

It can be seen that people earning more than $50k tend to be older than those earning less. This highlights how age varies across the income classes.

We will now plot the count of work classes by income level. In this way, we get a good estimate of the number of samples at each income level for every workclass category.
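
A sketch of that plot:

```python
# Number of samples per workclass, split by income level
sns.countplot(y="workclass", hue="class", data=df)
plt.title("Count of Workclass based on Income")
plt.show()
```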

Count of Workclass based on Income (Image by Author)

Based on the plot above, the Self-emp-inc category has a relatively high proportion of people earning above $50k per annum. The Federal-gov category also contains many individuals with a relatively higher salary range within their group.

Machine Learning Predictions

Now that we have explored the data and understand some important trends and patterns, it is time to turn our attention to machine learning predictions. Feature engineering techniques are not covered in this article, but they are a necessary step before handing data to ML models for predictions. If you want a good understanding of the feature engineering strategies that should be followed in any machine learning project, I encourage you to read the article below.

Which Feature Engineering Techniques Improve Machine Learning Predictions? | by Suhas Maddali | Towards Data Science (medium.com)

We will use a random forest to predict the chance of a person making above or below $50k per annum. First, we must convert the categorical features to numerical features. After this step, we perform train_test_split, which divides the data into train and test sets. We then train the random forest classifier on the training data and make predictions on the test set. Below is the code cell that showcases this approach.
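
A sketch of that cell (the dropna treatment, split ratio, random seed, and hyperparameters are assumptions):

```python
# Simplest treatment of the missing values noted earlier: drop them
df = df.dropna()

# One-hot encode the categorical features; cast to float for the
# model and the explainers used below
X = pd.get_dummies(df.drop(columns=["class"])).astype(float)
y = df["class"]

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a random forest classifier and evaluate on the test set
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```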

After training the model, we achieve an accuracy of about 86 percent on the test set. Performance is quite consistent across the two target classes. Overall, the algorithm does a decent job of predicting income status.

Random Forest Predictions (Image by Author)

Interpretability and Explainability

This article revolves around the central themes of interpretability and explainability. We emphasize their significance within the discussed dataset, exploring various methods to achieve these objectives. By unraveling the inner workings of the models and understanding their decision-making process, we ensure transparency, address biases, and uphold ethical considerations. Our focus on interpretability and explainability fosters trust, accountability, and transparency in the field of machine learning.

Random forest models are relatively interpretable on their own compared to more complex models such as deep neural networks. Therefore, we will use the random forest model to generate interpretations of its predictions.

We extract the top 10 features according to the random forest model's built-in importances for predicting income levels. In this way, we learn why the model makes its predictions in the first place, which lets us place more trust in them.
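
A sketch of how these importances might be extracted and plotted:

```python
# Built-in (global) feature importances from the random forest
importances = pd.Series(model.feature_importances_, index=X_train.columns)
importances.nlargest(10).sort_values().plot(kind="barh")
plt.title("Random Forest Feature Importance")
plt.show()
```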

Random Forest Feature Importance (Image by Author)

We are looking at the top 10 features our random forest model uses to make predictions. We will ignore the feature fnlwgt for now, as it mostly represents the sampling weight given to each training example. We see that features such as age, capital-gain, and hours-per-week are important in determining the earning potential of an individual. This matches what we already observed about these features during exploratory data analysis. Hence, we can place more trust in the model, as it can explain why it made its predictions in the first place.

We will now use another library called LIME, which offers a more advanced set of tools for improving the interpretability of machine learning models.
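
A sketch of the setup described next (the instance index is arbitrary):

```python
# Build a LIME explainer over the training data
explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=model.classes_.tolist(),
    mode="classification",
)

# Explain one specific training instance
i = 0  # arbitrary choice of instance
exp = explainer.explain_instance(
    X_train.values[i], model.predict_proba, num_features=10
)
print(exp.as_list())
```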

We initialize the lime library and create a LimeTabularExplainer from the features and columns of the dataset. A specific instance of the training data is chosen for reference. The explain_instance method provides local interpretability for the random forest model; in other words, it explains why the model produced a particular prediction. Finally, the feature importances it generates can be used to determine how the model arrived at that prediction.

LIME Feature Importance (Image by Author)

The feature importance values generated by the lime library differ from the default random forest feature importances. Here, we get negative values for features that push the prediction away from the target class and positive values for those that push toward it. In addition, lime gives a local interpretation of an individual prediction, as opposed to the global interpretation given by the random forest's built-in importances.
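
A sketch of the plotting step referenced below:

```python
# Bar chart of the top 10 LIME feature contributions for this instance
fig = exp.as_pyplot_figure()
plt.tight_layout()
plt.show()
```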

It is also a good idea to visualize the top 10 features from lime with the use of the above code. In this way, we can get a pictorial representation of the feature importance of the model predictions for a particular instance or training example.

LIME Feature Importance (Image by Author)

In this case, capital-gain is an important feature, along with marital-status, in determining income. We will also look at the default visualization lime provides for a specific instance below.
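
A sketch of that cell:

```python
# lime's built-in notebook visualization: predicted probabilities
# alongside per-feature contributions for this instance
exp.show_in_notebook(show_table=True)
```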

The code cell above uses lime's default visualization, which generates plots showing how useful each feature was in determining the outcome.

LIME Feature Importance (Image by Author)

For this specific instance, our model outputs the probability that the candidate makes above or below $50k per annum, which also shows us the model's confidence in its prediction. Furthermore, we are shown the features that were most influential in determining that this candidate makes below $50k per annum.
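
A sketch of the shap setup (the sample size is an assumption, and the return shape of shap_values varies across shap versions, hence the normalization step):

```python
# Tree-based SHAP explainer suited to our random forest
shap_explainer = shap.TreeExplainer(model)

# Compute SHAP values on a sample of the test set to keep it fast
X_sample = X_test.iloc[:100]
shap_values = shap_explainer.shap_values(X_sample)

# Older shap versions return one array per class for classifiers,
# newer ones return a single stacked array; select the positive class.
values = shap_values[1] if isinstance(shap_values, list) else shap_values
if getattr(values, "ndim", 2) == 3:
    values = values[:, :, 1]

shap.summary_plot(values, X_sample)
```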

Similarly, the shap library also provides good interpretable results for both local and global interpretability. The code above shows how we initialize the shap library and use summary_plot to understand the model well.

SHAP Feature Importance (Image by Author)

We again consider a specific instance and examine why the model made its prediction for this individual. education and workclass are important features for this model's predictions. In this way, we get a good idea of the model's strengths and weaknesses during the prediction process.

Conclusion

In conclusion, this article highlights the importance of exploratory data analysis (EDA) in data science. EDA provides valuable insights that inform decision-making during the modeling process. Additionally, libraries like lime and shap enhance the interpretability of machine learning models by identifying influential factors for predictions.

Through EDA and interpretability libraries, we gain a deeper understanding of our models’ inner workings. This knowledge allows us to explain model behavior, build trust, and ensure transparency. By combining EDA with interpretability techniques, we create more robust and reliable machine learning models.

Thorough EDA and model interpretability improve understanding, decision-making, and the overall impact of our work as data scientists. Thank you for taking the time to read this article.

Below are ways to contact me or take a look at my work.

GitHub: suhasmaddali (Suhas Maddali) (github.com)

YouTube: https://www.youtube.com/channel/UCymdyoyJBC_i7QVfbrIs-4Q

LinkedIn: Suhas Maddali, Northeastern University, Data Science | LinkedIn

Medium: Suhas Maddali — Medium

Kaggle: Suhas Maddali | Contributor | Kaggle


Published via Towards AI
