
Building Trustworthy AI: Interpretability in Vision and Linguistic Models

Last Updated on October 31, 2024 by Editorial Team

Author(s): Rohan Vij

Originally published on Towards AI.


Photo by Arteum.ro on Unsplash | What thoughts lie behind that eye?

The rise of large artificial intelligence (AI) models trained with self-supervised deep learning presents a dangerous situation known as the AI “black box” problem: it is impossible to see what a neural network has learned or how it arrived at what it knows. The exponential growth of computational power, the availability of massive datasets, and advances in deep learning algorithms have enabled AI models of enormous scale and capability. The problem is not new to the cognitive sciences; the human brain is also considered a black box, because we cannot explain how it learns at a fundamental level. Deploying models we cannot understand for crucial tasks in business or other high-impact applications is potentially dangerous: there is no way to tell that a model’s decision-making or content-generating capabilities are compromised until it eventually generates false content or makes a bad decision. Model users should be able to understand how their data is being used to produce a result. This paper explores solutions that attempt to create “interpretable machine learning” in the fields of computer vision and large language models, and assesses how effective these approaches are at improving the transparency and accountability of AI systems in real-world applications.

Interpretability in Computer Vision (CV) Models

“If AI enables computers to think, computer vision enables them to see, observe and understand” (IBM, n.d.). Computer vision uses deep learning to analyze image data, find patterns, and distinguish one image from another. Most computer vision models are based on convolutional neural networks (CNNs), which consist of layers that detect different features of an input image. A CNN slides small matrix windows (kernels) across the pixels of an image to capture spatial information; this is known as a convolutional operation. Each layer in a CNN is intended to detect certain features of the input image, and because each successive layer receives the output of the previous one, the model builds a feature map that combines the important features of the image. Layers at earlier stages of the CNN typically identify low-level features such as edges or colors, while deeper layers use the results of the prior ones to detect more complex patterns (Craig, 2024).

The increasing complexity and widening application of CNNs raise concerns about how interpretable they are: as more layers are added, the ability to understand which patterns actually lead the network to a decision is lost. Kevin Armstrong (2023), columnist at “Not a Tesla App,” noted that Tesla’s Full Self-Driving v12:

is eliminating over 300,000 lines of code previously governing FSD functions that controlled the vehicle, replaced by further reliance on neural networks. This transition means the system reduces its dependency on hard-coded programming. Instead, FSD v12 is using neural networks to control steering, acceleration, and braking for the first time. Up until now, neural networks have been limited to detecting objects and determining their attributes, but v12 will be the first time Tesla starts using neural networks for vehicle control.

Tesla’s dramatic shift away from hard-coded rules to a self-driving algorithm that relies almost entirely on neural networks is concerning with regard to the interpretability and accountability of the system. If an accident were to occur with FSD v12, it would be harder for Tesla to determine which part of the system was responsible for the erroneous decision. Without being able to understand how these models reason to arrive at their final decision, they are harder to trust, especially in high-stakes environments such as driving a heavy electric vehicle.
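To make the convolutional operation described above concrete, here is a minimal sketch of a toy CNN. It is illustrative only: the use of PyTorch, the layer sizes, and the input resolution are assumptions for the example, not a description of Tesla’s or any production system.

```python
# A toy CNN illustrating the sliding-window convolution and stacked feature maps
# described above. Framework (PyTorch) and dimensions are assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Early layers tend to respond to low-level features (edges, colors).
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Deeper layers combine earlier feature maps into more complex patterns.
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))   # sliding-window convolution + downsample
        x = self.pool(torch.relu(self.conv2(x)))   # feature maps built on earlier feature maps
        return self.head(x.flatten(1))

# A single 224x224 RGB image produces a vector of class scores.
scores = TinyCNN()(torch.randn(1, 3, 224, 224))
```

Each convolutional layer slides its kernels across the previous layer’s output, so the features that reach the classification head are compositions of simpler features detected earlier; the interpretability question is precisely which of those compositions drive the final score.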

LIME

LIME, short for Local Interpretable Model-agnostic Explanations, is a generalized technique that can be used to understand the reasoning behind any classifier. LIME is best described as a probe: it introduces slight variations into the original input to understand the relationship between those changes and the model’s final output. LIME lets its users perturb specific features of the input, so humans can decide which features are the most important, or the most likely to cause overfitting, and test their impact on the model. LIME outputs a list of explanations representing each input feature’s contribution to the classifier’s final output (Ribeiro et al., 2016).
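To make the perturb-and-probe idea concrete, here is a minimal, from-scratch sketch of LIME’s core loop. It is a simplification under stated assumptions: the stand-in classifier, feature dimensions, noise scale, and kernel width are invented for illustration, and the authors’ actual open-source implementation (the lime Python package) is considerably more sophisticated.

```python
# A minimal, from-scratch sketch of LIME's core idea for a tabular classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A stand-in "black box": predicts a binary class from 3 made-up features.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_explain(instance, n_samples=2000, kernel_width=0.75):
    # 1. Perturb the instance with small random noise.
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Ask the black box for its predictions on the perturbations.
    preds = black_box.predict_proba(perturbed)[:, 1]
    # 3. Weight perturbations by proximity to the original instance.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a simple, interpretable local model; its coefficients are the explanation.
    local_model = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return local_model.coef_  # per-feature contribution near this instance

print(lime_explain(np.array([1.0, -0.2, 0.3])))
```

The coefficients of the local linear model play the role of LIME’s explanation: large positive values mark features that pushed the prediction up near this instance, and negative values mark evidence against it.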

“Explaining individual predictions. A model predicts that a patient has the flu, and LIME highlights the symptoms in the patient’s history that led to the prediction. Sneeze and headache are portrayed as contributing to the “flu” prediction, while “no fatigue” is evidence against it. With these, a doctor can make an informed decision about whether to trust the model’s prediction” (Ribeiro et al., 2016).

A good example of using LIME in CV is to understand the reasoning behind a model’s prediction:

“Raw data and explanation of a bad model’s prediction in the ‘Husky vs Wolf’ task” (Ribeiro et al., 2016).

The creators of LIME ran an experiment with 27 graduate students who had taken an ML course at some point in their academic careers. In the first trial, they showed each of the 27 students 10 predictions from a wolf-vs-husky classification model. Eight of the images were classified correctly, while the other two were misclassified: a husky (a dog) with snow in the background was classified as a wolf, and a wolf with no snow in the background was classified as a husky. 10 out of 27 students trusted the model, and 12 out of 27 named the presence of snow as a feature the model might be using. In a second trial with the same 27 participants, an explanation (as in the figure above) was provided for each prediction. After seeing the explanations, only 3 students trusted the model, and 25 cited the presence of snow as a likely feature (Ribeiro et al., 2016).

Grad-CAM

Grad-CAM, or Gradient-weighted Class Activation Mapping, analyzes the last convolutional layer of a CNN to determine which pixels carried the most weight in the model’s final result. It works through a five-step process (Ahmed, 2022), sketched in code after the list:

  1. The model is trained as usual on a set of images, so that its predictions and the activations of its last convolutional layer are available.
  2. For the model’s best classification guess (“dog,” “cat,” etc., whichever class is assigned the highest probability by the network), Grad-CAM computes the gradient of that score with respect to the activations of the last convolutional layer. For instance, if the model predicts that the image contains a dog, Grad-CAM computes how minute changes in the model’s activations (features ranging from simple edges and textures to the patterns that make up a dog’s nose) would affect that classification. Like LIME, this allows Grad-CAM to identify which features in the image were the most important in leading the model to its classification. Unlike LIME, however, Grad-CAM probes the model by looking at the last convolutional layer and measuring how changes there affect the final result, while LIME perturbs the input image to see how macro-level changes affect the final result.
  3. From the calculations in the previous step, Grad-CAM identifies which parts of the last convolutional layer were important to the model’s classification.
  4. The gradient of each channel in the final convolutional layer (i.e., what was calculated in step 2: if this activation increases by some amount, how much does the classification score change? The larger the change, the larger the gradient, and the more important that channel is to the final classification) is multiplied with every pixel of that channel’s activation map. As a result, pixels that contribute most to the final classification are highlighted most strongly, while pixels that contribute negatively are suppressed. This creates a heatmap, allowing human users to see which parts of the image were most critical to the model’s classification decision.
  5. This “importance value” of each pixel is normalized to lie between 0 and 1, allowing for better visualization when the heatmap is overlaid on the original image.
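The five steps above can be sketched in a few lines of PyTorch. This is a minimal illustration that mirrors the ResNet50 example below; the hook mechanics, the choice of layer4 as the last convolutional block, the pretrained-weights string, and the random stand-in image are assumptions, not code from Ahmed (2022).

```python
# Minimal Grad-CAM sketch following the five steps above (assumes torchvision >= 0.13).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()   # pretrained ImageNet ResNet50

activations, gradients = {}, {}

def save_activation(_module, _input, output):
    # Steps 1-2: keep the last conv block's activations and, during backward, their gradients.
    activations["value"] = output
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4.register_forward_hook(save_activation)       # last convolutional block

image = torch.randn(1, 3, 224, 224)                        # stand-in for a preprocessed photo
scores = model(image)
top_class = int(scores.argmax(dim=1))                      # e.g. 'sports_car' or 'racer'
scores[0, top_class].backward()                            # step 2: gradient of that class score

acts, grads = activations["value"], gradients["value"]
channel_weights = grads.mean(dim=(2, 3), keepdim=True)     # step 3: per-channel importance
cam = F.relu((channel_weights * acts).sum(dim=1))          # step 4: weighted sum, keep positives
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # step 5: normalize to [0, 1]
heatmap = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear")
```

Overlaying this heatmap on the input image produces the kind of visualization shown in the figures that follow.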

Using the following image:

The image fed to the ResNet50 model (a 50-layer CNN) with Grad-CAM applied to understand the reasoning behind its classification (Ahmed, 2022).

The ResNet50 model (a 50-layer CNN) classifies the image into two categories: ‘sports_car’ and ‘racer.’

Visualizing the activations of the last layer in relation to the ‘sports_car’ classification:

The Grad-CAM heatmap for the ‘sports_car’ class (Ahmed, 2022).

The neurons of the last layer are clearly activated by the front ends of the two cars. For further exploration, feeding in an image of a non-sporty car (e.g., a Honda Civic) could show how the model differentiates between typical vehicles and high-performance vehicles.

Visualizing the activations of the last layer in relation to the ‘racer’ classification:

The Grad-CAM heatmap for the ‘racer’ class (Ahmed, 2022).

The same pixels around the cars are highlighted for the ‘racer’ classification, even though an individual can be a racer without being near cars. While it is possible (and even desirable) that the model uses the context around an object to determine its classification, the fact that it does not strongly highlight any pixels on the person standing between the cars creates distrust in some of its classifications. If the person were not present, would the model still place the image in the ‘racer’ class? If the cars were not present, would it? In a nutshell, Grad-CAM provides a window into the decision-making process of CV models by showing human users which pixels in an image influence its decisions.

Conclusion & Interpretability with Large Language Models (LLMs)

A common argument against explainable AI (techniques like LIME, Grad-CAM, and SHAP) is that they explain which inputs affect the output and by how much (through input perturbation, as in LIME, or by analyzing the last convolutional layer, as in Grad-CAM), but not the underlying reasoning, the why, behind a classification. According to Tim Kellogg (2023), ML Engineering Director at Tegria, when a model’s explanation “doesn’t match your mental model, the human urge is to force the model to think ‘more like you.’” This paper explores AI interpretability as a way of helping humans trust AI; yet humans may distrust AI even more when they see it making decisions through a process that they themselves would not follow:

Jaspars and Hilton both argue that such results demonstrate that, as well as being true or likely, a good explanation must be relevant to both the question and to the mental model of the explainee. Byrne offers a similar argument in her computational model of explanation selection, noting that humans are model-based, not proof-based, so explanations must be relevant to a model (Miller, 2019).

People are far more likely to trust explanations that match their current way of thinking than ones that introduce an unfamiliar thought process (even if that process is still correct). Kellogg (2023) remarks:

I had seen this phenomenon a lot in the medical world. Experienced nurses would quickly lose trust in an ML prediction about their patient if the explanation didn’t match their hard-earned experience. Even if it made the same prediction. Even if the model was shown to have high performance. The realization that the model didn’t think like them was often enough to trigger strong distrust.

Viewing trust in AI through the lens of sociology, it can be observed that humans want to trust AI the way they trust other humans: they want to be able to probe it, find out more, and understand how it reasons. Large language models (LLMs) like ChatGPT or Claude act more human than any other type of model so far. They can be asked to explain their thought process, prompted for more information, and instructed to fact-check themselves.

A common argument against LLMs is that they cannot always be trusted. This becomes less of an issue if society treats its interactions with LLMs the way an individual treats interactions with other people: it would be naive to believe whatever someone tells you without doing any internal fact- or logic-checking. The same constant questioning that people apply to information from the media or from other individuals can and should be applied to information received from LLMs. In the quest to make AI as trustworthy as possible by making it as human as possible, users must acknowledge that this also makes AI susceptible to the same “hallucinated” or made-up information that humans can propagate.

To increase society’s trust in AI, it must be designed to act more human: not a human that spreads rumors or makes up facts, but one that is consistent in its thoughts, viewpoints, and presentation of information, and one that is able to cite its sources.

  1. Consistency in AI is an issue that has largely been addressed by the temperature parameter, which controls the “randomness” of an LLM’s response. An LLM is a fixed algorithm with fixed weights that, mathematically, maps the same input to the same output distribution. However, commonly used models like GPT often run with a temperature above 0, which lets the model occasionally pick a word that is not the most probable one, introducing randomness and “creativity” into the writing (Prompt Engineering Guide, 2024). If LLMs were configured to be more deterministic (giving the same response for every identical input), it would be far easier for humans to trust them because their behavior would be far more reliable (see the sketch after this list).
  2. It is possible to use Retrieval-Augmented Generation (RAG), which expands the knowledge available to an LLM for a specific response. Microsoft’s Copilot, for example, can actively search Bing while generating a response and cite the websites it retrieves information from (Microsoft, n.d.). While the technique is still in its infancy, RAG gives LLMs a reliable way to cite the external sources of the information they provide. LLMs are, at their core, language algorithms that can be fed additional information and weave it together; they do not need to fall back on their training data when relevant information can be supplied at response time.
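As a concrete illustration of the temperature point above, here is a small, self-contained sketch of temperature-based next-token sampling. The vocabulary and scores are invented for the example; production LLMs expose the same idea as a temperature API setting.

```python
# Temperature-scaled sampling over made-up next-token scores.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["car", "vehicle", "banana"]
logits = np.array([2.0, 1.5, -1.0])   # model scores for the next token (illustrative)

def sample_next_token(logits, temperature):
    if temperature == 0:
        # Deterministic: always take the most probable token.
        return vocab[int(np.argmax(logits))]
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

print([sample_next_token(logits, 0) for _ in range(3)])     # always 'car'
print([sample_next_token(logits, 1.2) for _ in range(3)])   # sometimes 'vehicle', rarely 'banana'
```

At temperature 0 the choice is fully deterministic, which is the behavior the consistency argument above calls for; raising the temperature spreads probability onto less likely tokens and reintroduces variability.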

Interpretability alone might not be what society is looking for in AI; human-like characteristics may be far more important than raw explainability if society is to truly adopt and trust AI with important decisions.

Thank you for reading!

References

Ahmed, I. (2022, April 5). Interpreting Computer Vision Models. Paperspace Blog. https://blog.paperspace.com/interpreting-computer-vision-models/

Armstrong, K. (2023, November 24). Tesla FSD v12 Rolls Out to Employees With Update 2023.38.10 (Update: Elon Confirms). Not a Tesla App. https://www.notateslaapp.com/news/1713/tesla-fsd-v12-rolls-out-to-employees-with-update-2023-38-10

Awati, R. (2022, September). What is convolutional neural network? SearchEnterpriseAI. https://www.techtarget.com/searchenterpriseai/definition/convolutional-neural-network

Computer Vision. (2019). IBM. https://www.ibm.com/topics/computer-vision

Kellogg, T. (2023, October 1). LLMs are Interpretable — Tim Kellogg. Timkellogg.me. https://timkellogg.me/blog/2023/10/01/interpretability

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, February 16). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. ArXiv.org. https://arxiv.org/abs/1602.04938

Saravia, E. (2024). LLM Settings — Nextra. promptingguide.ai. https://www.promptingguide.ai/introduction/settings

Your AI-Powered Copilot for the Web. (n.d.). microsoft.com. https://www.microsoft.com/en-us/bing
