AI Systems: Unearthed Bias and the Compelling Quest for True Fairness

Last Updated on August 7, 2023 by Editorial Team

Author(s): João Areias

Originally published on Towards AI.

And how we can prevent the automation of prejudice

Source: Bing Image Creator

Artificial Intelligence (AI) is no longer a futuristic concept; it has become an intrinsic part of our lives. It is hard to imagine how Visa could validate 1,700 transactions per second and screen them for fraud without AI assistance, or how YouTube could surface the one video that is just right out of nearly a billion uploads. With AI's pervasive influence, it is crucial to establish ethical guidelines that ensure its responsible use. That requires strict criteria of fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability in AI systems. In this article, we will delve deeper into one of these principles: fairness.

Fairness in AI Solutions

Fairness is at the forefront of responsible AI, implying that AI systems must treat all individuals impartially, regardless of their demographics or backgrounds. Data Scientists and ML Engineers must design AI solutions to avoid biases based on age, gender, race, or any other characteristic. Data used to train these models should represent the population’s diversity, preventing inadvertent discrimination or marginalization. Preventing bias seems like an easy job; after all, we are dealing with a computer, and how the heck can a machine be racist?

Algorithmic Bias

AI fairness problems arise from algorithmic bias: systematic errors in a model's output that unfairly favor or disadvantage particular groups of people. Traditional software consists of algorithms alone, while machine learning models combine algorithms, data, and learned parameters. It doesn't matter how good an algorithm is: a model trained on bad data is a bad model, and if the data is biased, the model will be biased. Bias can enter a model in several ways:

Hidden biases

We have biases; there is no question about that. Stereotypes shape our view of the world, and if they leak into the data, they will shape the model's output. One example of this occurs through language. While English is largely gender-neutral and the determiner "the" does not indicate gender, it feels natural to infer gender from "the doctor" or "the nurse." Natural language models, such as translation models or large language models, are particularly vulnerable to this and can produce skewed results if not treated appropriately.

A few years ago, I heard a riddle that went like this. A boy was playing on the playground when he fell and was severely injured; the father took the child to the hospital, but upon arriving, the doctor said, “I cannot operate on this child; he is my son!” How can this be? The riddle was that the doctor was a woman, the child’s mother. Now picture a nurse, a secretary, a teacher, a florist, and a receptionist; were they all women? Surely we know there are male nurses out there, and nothing keeps a man from being a florist, but it is not the first thing we think about. Just as our mind is affected by this bias, so is the machine’s mind.

On July 17, 2023, I asked Google Translate to translate several professions from English to Portuguese. Google translated occupations such as teacher, nurse, and seamstress with the Portuguese feminine article "A," indicating the professional is a woman ("A" professora, "A" enfermeira, "A" costureira, "A" secretária). In contrast, occupations such as professor, doctor, programmer, mathematician, and engineer received the Portuguese masculine article "O," indicating the professional is a man ("O" professor, "O" médico, "O" programador, "O" matemático, "O" engenheiro).

Bias in Google’s translation of professions from English to Portuguese, as evidenced by gender-specific pronouns (Source: Image by the author)

While GPT-4 has made some improvements, and I could not replicate the same behavior in my quick tests, I did reproduce it with GPT-3.5.

Bias in Chat GPT using GPT 3.5 (Source: Image by the author)

While the examples presented don't pose much of a threat in themselves, it is easy to imagine dire consequences from models built on the same technology. Consider a CV analyzer that reads a resume and uses AI to decide whether the applicant is suitable for the job. It would be irrational and immoral, and in some places illegal, to disregard an applicant for a programmer position because her name is Jennifer.

Unbalanced classes in training data

Is 90% accuracy good? How about 99% accuracy? If we predict a rare disease that only occurs in 1% of people, a model with 99% accuracy is no better than giving a negative prediction to everyone, completely ignoring the features.
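The arithmetic above can be checked with a minimal sketch (entirely synthetic labels, assuming the hypothetical 1% disease prevalence from the text): a trivial classifier that predicts "healthy" for everyone scores 99% accuracy while detecting no one.

```python
# Accuracy of a classifier that always predicts "negative"
# on a dataset where only 1% of people have the disease.
n = 10_000
labels = [1] * (n // 100) + [0] * (n - n // 100)  # 1% positives

predictions = [0] * n  # trivial model: negative for everyone

accuracy = sum(p == y for p, y in zip(predictions, labels)) / n
print(f"Accuracy: {accuracy:.2%}")  # 99.00%, yet the model detects no one

# Recall on the positive class exposes the failure:
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)
print(f"Recall: {recall:.2%}")  # 0.00%
```

This is why accuracy alone is a poor yardstick on imbalanced data; per-class metrics such as recall reveal what the headline number hides.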

Now, imagine if our model is not detecting diseases but people. By skewing the data toward a group, a model may have issues detecting a misrepresented group or even ignore it entirely. This is what happened to Joy Buolamwini.

In the documentary Coded Bias, MIT computer scientist Joy Buolamwini showed that many facial recognition systems would not detect her face unless she wore a white mask. The models' struggles are a clear symptom of a dataset that heavily underrepresents some ethnic groups, which is unsurprising: as FairFace [1] demonstrated, the datasets used to train these models are highly skewed. Misrepresenting group proportions can lead a model to ignore essential features of the underrepresented classes.

Racial compositions in face datasets. (Source: FairFace)

While FairFace [1] balanced its dataset across ethnicities, important industry datasets such as LFWA+, CelebA, COCO, IMDB-Wiki, and VGG2 are composed of roughly 80% to 90% white faces. That distribution is hard to find even in the whitest of countries [2] and, as FairFace [1] demonstrated, can significantly degrade a model's performance and generalization.

While a facial recognition failure may merely let your friend unlock your iPhone [3], other datasets carry worse consequences. The US judicial system systematically targets African Americans [4]. Suppose we build a dataset of people arrested in the US. That data will be skewed toward African Americans, and a model trained on it may reproduce the bias and classify Black Americans as dangerous. This happened with COMPAS, an AI system that scores a defendant's risk of recidivism, which ProPublica exposed in 2016 for systematically targeting Black people [5].

Data leakage

In 1896, in Plessy v. Ferguson, the US Supreme Court solidified racial segregation. Under the National Housing Act of 1934, the federal government would only back neighborhood building projects that were explicitly segregated [6]. This is one of the many reasons race and address are so highly correlated in the US.

Distribution of residency by race of Milwaukee (Source: US Census Bureau)

Consider now an electricity company building a model to aid bad-debt collection. As a data-conscious company, they decide not to include names, gender, or other personally identifiable information in the training data, and to balance the dataset to avoid bias. Instead, they aggregate clients by neighborhood. Despite these efforts, the company has still introduced bias.

By using a variable as correlated with race as address is, the model will learn to discriminate by race, since the two variables are nearly interchangeable. This is an example of data leakage, where a model indirectly learns to discriminate on a feature that was deliberately excluded. Navigating a world of systemic prejudice is challenging; bias will sneak into the data in the most unexpected ways, and we should be critical of every variable we include in our models.
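A toy illustration of this leakage (entirely synthetic data; the neighborhood names and the 90% segregation rate are made-up assumptions): even after the race column is dropped, simply reading the neighborhood recovers it almost perfectly in a segregated city.

```python
import random

random.seed(0)

# Synthetic, segregated city: each neighborhood is ~90% one group.
def sample_resident(neighborhood):
    majority = "group_a" if neighborhood in ("north", "east") else "group_b"
    minority = "group_b" if majority == "group_a" else "group_a"
    return majority if random.random() < 0.9 else minority

residents = []
for _ in range(10_000):
    hood = random.choice(["north", "east", "south", "west"])
    residents.append((hood, sample_resident(hood)))

# A "fair" feature set drops race and keeps only the neighborhood,
# but guessing the majority group per neighborhood recovers race ~90% of the time.
majority_of = {"north": "group_a", "east": "group_a",
               "south": "group_b", "west": "group_b"}
correct = sum(majority_of[hood] == race for hood, race in residents)
print(f"Race recovered from address alone: {correct / len(residents):.0%}")
```

Any model with access to the neighborhood feature has, in effect, access to race; dropping the sensitive column is not enough when a strong proxy remains.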

Detecting Fairness problems

There is no clear consensus on what fairness means, but a few metrics can help. When designing an ML model, the team must agree on fairness criteria based on the fairness-related issues the model may plausibly cause. Once the criteria are defined, the team should track the corresponding fairness metrics during training, testing, and validation, and after deployment, to detect and address fairness-related problems. Microsoft offers a helpful checklist for keeping fairness a priority throughout a project [7]. Consider a sensitive attribute A that divides people into two groups: group a, with some protected characteristic, and group b, without it. Writing R for the model's predicted outcome and Y for the realized outcome, we may define some common fairness metrics as follows:

  • Demographic Parity: This metric asks whether the probability of a positive prediction is the same for someone from the protected group as for someone outside it. For example, the likelihood of classifying an insurance claim as fraudulent should be the same regardless of the person's race, gender, or religion. For a given predicted outcome R, this metric is defined by: P(R = 1 | A = a) = P(R = 1 | A = b)
  • Predictive Parity: This metric is about the accuracy of positive predictions. In other words, when our AI system says something will happen, how often does it actually happen for each group? For example, if a hiring algorithm predicts a candidate will perform well in a job, the proportion of predicted candidates who actually do well should be the same across all demographic groups. If the system is less accurate for one group, it could be unfairly advantaging or disadvantaging them. For a given realized outcome Y, we can define this metric as: P(Y = 1 | R = 1, A = a) = P(Y = 1 | R = 1, A = b)
  • False Positive Error Rate Balance: Also known as predictive equality, this metric is about the balance of false alarms: how often does the system wrongly predict a positive outcome for each group? For instance, in fraud detection, how often are legitimate transactions wrongly flagged as fraudulent in each group? We can define this metric as: P(R = 1 | Y = 0, A = a) = P(R = 1 | Y = 0, A = b)
  • Equalized Odds: This metric balances both true positive and false positive rates across all groups. For a medical diagnostic tool, for example, the rate of correct diagnoses (true positives) and of false alarms (false positives) should be the same regardless of the patient's gender, race, or other demographic characteristics. In essence, it demands that the prediction be equally distributed for each true outcome, and can be defined as: P(R = 1 | Y = y, A = a) = P(R = 1 | Y = y, A = b), for y ∈ {0, 1}
  • Treatment Equality: This metric examines how the costs of mistakes are distributed across groups by comparing the ratio of false negatives FN to false positives FP. For instance, in a predictive policing context, the errors made against one group should not skew toward false accusations while the errors made against another skew toward missed detections. It can be defined as: FN_a / FP_a = FN_b / FP_b

For classification problems at least, these fairness criteria can be computed directly from a confusion matrix. Beyond that, Microsoft's Fairlearn provides a suite of tools [8] to compute these metrics, preprocess data, and post-process predictions to satisfy a fairness constraint.
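The confusion-matrix route can be sketched in a few lines (this is not Fairlearn's API; the helper name and the toy arrays are illustrative assumptions): compute per-group selection rates for demographic parity and per-group false positive rates for FPR balance, then compare.

```python
def group_metrics(y_true, y_pred, groups):
    """Per-group selection rate and false positive rate from raw predictions."""
    out = {}
    for g in set(groups):
        yt = [t for t, gg in zip(y_true, groups) if gg == g]
        yp = [p for p, gg in zip(y_pred, groups) if gg == g]
        fp = sum(p == 1 and t == 0 for p, t in zip(yp, yt))
        negatives = sum(t == 0 for t in yt)
        out[g] = {
            "selection_rate": sum(yp) / len(yp),          # demographic parity
            "fpr": fp / negatives if negatives else 0.0,  # FPR balance
        }
    return out

# Toy example: group "b" is selected far more often than group "a".
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
m = group_metrics(y_true, y_pred, groups)
print(m)  # selection rate 0.25 vs. 1.0; FPR 0.0 vs. 1.0
```

Large gaps between the groups' numbers, like the ones this toy data produces, are exactly what the fairness criteria above flag.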

Addressing fairness

While fairness must be on the mind of every data scientist throughout a project, the following practices help avoid problems:

  • Data collection and preparation: Ensure your dataset is representative of the diverse demographics you wish to serve. Various techniques can address bias at this stage, such as oversampling, undersampling, or generating synthetic data for underrepresented groups.
  • Model design and testing: It is crucial to test the model with various demographic groups to uncover any biases in its predictions. Tools like Microsoft’s Fairlearn can help quantify and mitigate fairness-related harms.
  • Post-deployment monitoring: Even after deployment, we should continuously evaluate our model to ensure it remains fair as it encounters new data and establishes feedback loops to allow users to report instances of perceived bias.
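As a minimal sketch of the oversampling idea from the data-preparation bullet (toy records and group labels are made up for illustration; real pipelines would typically use a library such as imbalanced-learn), the underrepresented group can be resampled with replacement until the groups balance:

```python
import random
from collections import Counter

random.seed(42)

# Toy training set: group "b" is heavily underrepresented.
data = [{"group": "a", "x": i} for i in range(900)] + \
       [{"group": "b", "x": i} for i in range(100)]

counts = Counter(row["group"] for row in data)
minority = min(counts, key=counts.get)
deficit = max(counts.values()) - counts[minority]

# Naive random oversampling: resample minority rows with replacement
# until both groups are equally represented.
minority_rows = [row for row in data if row["group"] == minority]
balanced = data + random.choices(minority_rows, k=deficit)

new_counts = Counter(row["group"] for row in balanced)
print(new_counts)  # both groups now at 900
```

Duplicating samples is the bluntest option; undersampling the majority or generating synthetic minority samples are the alternatives the bullet mentions, each with its own trade-offs (information loss versus risk of overfitting to duplicates).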

For a more complete set of practices, one can refer to the previously mentioned checklist [7].

In conclusion

Making AI fair isn’t easy, but it’s important. It is even harder when we can’t agree on what fair even is. We should ensure everyone is treated equally and no one is discriminated against by our model. This will become harder as AI grows in complexity and presence in daily life.

Our job is to ensure the data is balanced, biases are questioned, and every variable in our model is scrutinized. We must define a fairness criterion and adhere closely to it, being always vigilant, especially after deployment.

AI is a great technology at the foundation of our modern data-driven world, but let’s ensure it is great for all.

References

[1] FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age

[2] https://en.wikipedia.org/wiki/White_people

[3] https://www.mirror.co.uk/tech/apple-accused-racism-after-face-11735152

[4] https://www.healthaffairs.org/doi/10.1377/hlthaff.2021.01394

[5] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[6] https://en.wikipedia.org/wiki/Racial_segregation_in_the_United_States

[7] AI Fairness Checklist — Microsoft Research

[8] https://fairlearn.org/


Published via Towards AI
