Tackle Imbalanced Learning

Last Updated on July 24, 2023 by Editorial Team

Author(s): Satsawat Natakarnkitkul

Originally published on Towards AI.

Photo by Rupert Britton on Unsplash


All you need to know about tackling imbalanced learning issues

Introduction

If you have worked in data science, either as part of a team or leading one, you have probably come across the issue of data imbalance. In finance, you may need to build a fraud detection model (identifying the rare fraudulent transactions among the far more common legitimate ones). In industrial settings, you may want to identify which equipment is about to fail rather than which will continue to operate.

In fact, most of the problems we try to solve involve some data imbalance. A true 50–50 distribution of positive and negative classes in a classification problem is rare, or at least very hard to encounter.

The issues

The problems with imbalanced learning fall into three main categories.

  1. A problem-definition-level issue occurs when we do not have enough information to define the learning problem, including an understanding of how to properly judge and measure the classifier.
  2. A data-level issue occurs when we do not have enough training data. An example is “absolute rarity,” where we do not have a sufficient number of minority-class examples to learn from.
  3. An algorithm-level issue is the inability of an algorithm to optimize learning for the target evaluation criteria. Algorithms that use greedy search methods have trouble finding rare patterns.

Evaluation Metrics for Imbalanced Problems

In this first section, it is important to understand how to choose meaningful and appropriate evaluation metrics for imbalanced data; this ultimately translates into providing accurate cost information to the learning algorithm. Choosing the right evaluation metric is one way to solve the problem-definition-level issue.

Confusion matrix, Precision, Recall, and F-measure

Let’s recap the confusion matrix. Normally, the class we want to predict/classify is labeled ‘positive’, and precision and recall, whenever mentioned, are computed with respect to the positive class.

Confusion matrix and the metrics

The precision of a classifier is essentially the accuracy of its positive predictions, while recall is the percentage of examples of the designated class that are correctly predicted. Recall can also be read as the coverage of the minority class.
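As a concrete illustration (a toy sketch with made-up labels, assuming scikit-learn is available), both metrics can be read straight off the confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical labels: 1 = the positive (minority) class
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 0, 1, 0, 0, 0, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)  # accuracy of the positive predictions
recall = tp / (tp + fn)     # coverage of the minority class

# Matches scikit-learn's built-in metrics
assert abs(precision - precision_score(y_true, y_pred)) < 1e-12
assert abs(recall - recall_score(y_true, y_pred)) < 1e-12
```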

Another popular measure is the F-measure. It is parameterized and can be adjusted to specify the relative importance of precision vs. recall; the F1-measure, which weights precision and recall equally, is the one most often used when learning from imbalanced data. Let’s observe its formula below.

F-measure formula

The F2-measure sets β equal to 2. The intuition behind it is that it weights recall higher than precision, making F2 more suitable in applications where we want to emphasize correctly classifying as many positive samples as possible.
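Both variants are available in scikit-learn via `f1_score` and `fbeta_score` (the labels below are purely illustrative):

```python
from sklearn.metrics import f1_score, fbeta_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]  # precision 2/3, recall 1/2

f1 = f1_score(y_true, y_pred)             # beta = 1: equal weighting
f2 = fbeta_score(y_true, y_pred, beta=2)  # beta = 2: recall weighted higher
# Here f2 < f1 because recall (0.5) is lower than precision (0.667)
```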

ROC and AUROC

ROC analysis can identify optimal models and discard suboptimal ones independently of the cost context or the class distribution. Put more simply, ROC analysis has no bias toward models that perform well on the majority class at the expense of the minority class. AUROC summarizes the curve into a single number, which facilitates model comparison when no ROC curve dominates.

An ROC curve plots the false positive rate (x-axis) against the true positive rate (y-axis) for a range of candidate threshold values between 0.0 and 1.0. Put another way, it plots the false alarm rate versus the hit rate.

Example of ROC curve of High (left) vs. Low (right) performance models
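A minimal sketch of computing the ROC curve and AUROC on a synthetic imbalanced dataset, assuming scikit-learn (all dataset parameters below are illustrative choices, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic 95:5 imbalanced dataset
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # probability of the positive class

fpr, tpr, thresholds = roc_curve(y_te, scores)  # one (FPR, TPR) point per threshold
auroc = roc_auc_score(y_te, scores)             # single-number summary of the curve
```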

Precision-Recall Curve and AUCPR

As discussed in the previous section, precision and recall are good metrics for evaluating binary classification, especially with imbalanced classes.

Both metrics are heavily concerned with correct prediction of the positive class, which is the minority class in this problem.

Example of the precision-recall curve (AUCPR can be computed using AUC(recall, precision))

A precision-recall curve is a plot of the precision (y-axis) and the recall (x-axis) for different thresholds, much like the ROC curve.

There is one difference from the ROC curve: the baseline is no longer fixed (in ROC, it is the diagonal line). The baseline of a precision-recall curve is a horizontal line determined by the ratio of positive to negative examples.
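The curve, its baseline, and AUCPR can be sketched the same way (synthetic data and illustrative parameters, assuming scikit-learn):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

precision, recall, _ = precision_recall_curve(y_te, scores)
aucpr = auc(recall, precision)  # AUCPR = AUC(recall, precision)
baseline = y_te.mean()          # horizontal baseline = positive-class ratio
```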

So, when should we choose between ROC and precision-recall curves?

In most cases:

  • ROC curves should be used when there are roughly equal numbers of observations for each class.
  • Precision-Recall curves should be used when there is a moderate to large class imbalance.

ROC can present an overly optimistic picture of a model on imbalanced datasets, which can be deceptive and lead to incorrect interpretations of the model’s ability.

The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets, 2015.

Fixing the data

In this section, we will focus on how to solve issues that occur at the data level.

In fact, one of the best ways to tackle this is to enrich the data, either by getting more positive samples or by adding more features to the existing data.

However, getting more positive samples may be difficult; otherwise, it would not be an imbalanced data problem in the first place. There are several methods to mitigate the effect of imbalanced data.

Oversampling, undersampling and augmenting the data

There are two methods that deal directly with the dataset, and a third that uses a synthetic (augmentation) technique on the minority class.

1) Oversampling — replicating some observations of the minority class to increase its cardinality. The main advantage is that there is no information loss, since all data from both the majority and minority classes are kept. However, the process is prone to over-fitting.

2) Undersampling — sampling the majority-class data down to balance with the minority class. Since it involves removing observations, we may lose useful information from the training dataset.
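Both methods can be sketched with plain `sklearn.utils.resample` (toy data; the class sizes are illustrative):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = np.array([0] * 90 + [1] * 10)  # 90:10 imbalance

X_min, X_maj = X[y == 1], X[y == 0]

# Oversampling: replicate minority observations (with replacement) up to majority size
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)

# Undersampling: draw the majority class down to minority size (risks losing information)
X_maj_down = resample(X_maj, replace=False, n_samples=len(X_min), random_state=0)
```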

Visualization of over- and under-sampling methods

3) Augmented / synthetic data — creating new synthetic points from the minority class.

The popular methods are SMOTE (Synthetic Minority Oversampling Technique) and ADASYN (Adaptive Synthetic).

SMOTE works by looking at the existing minority data and synthesizing new data points at a random location on the line between an observation and one of its k nearest neighbors (this k is a required parameter).

ADASYN builds on SMOTE by using a weighted distribution over minority instances according to their level of learning difficulty: more synthetic data is generated for minority instances that are harder to learn, which helps shift the classification boundary.

Example of how synthetic data is created using SMOTE and ADASYN with different neighbors setting
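In practice, the imbalanced-learn package provides ready-made `SMOTE` and `ADASYN` implementations. The following is only a minimal NumPy sketch of the core SMOTE idea (interpolating between a minority point and one of its k nearest minority-class neighbors); the function name and parameters are illustrative:

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=20, seed=0):
    """Generate n_new synthetic minority points: each lies on the line
    between a random minority observation and one of its k nearest
    minority-class neighbors (the k that SMOTE requires)."""
    rng = np.random.RandomState(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.randint(len(X_min))
        x = X_min[i]
        d = np.linalg.norm(X_min - x, axis=1)  # distances to all minority points
        neighbors = np.argsort(d)[1:k + 1]     # skip the point itself
        nn = X_min[rng.choice(neighbors)]
        gap = rng.rand()                       # random location on the segment
        synthetic.append(x + gap * (nn - x))
    return np.array(synthetic)

X_min = np.random.RandomState(1).randn(30, 2) + 5  # toy minority cluster
X_new = smote_sample(X_min, k=5, n_new=20)
```

Each synthetic point is a convex combination of two real minority points, so it always stays inside the minority cloud.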

To end this section: all sampling methods should be used with caution. Here are two points to take into account.

  • Sampling should be applied only to the training data.
  • Any sampling method can be viewed as changing the reality that the data represents, so be careful and keep in mind what this means for the outputs of the classifier.

Optimize the learning algorithm

When all these methods have failed, sometimes we need to step back and look at the problem again. Maybe we need to rethink and optimize how we use and tune the classifier algorithm to deal with the minority class specifically.

Avoid Divide-and-Conquer Search Approaches

Algorithms that use a divide-and-conquer search method to recursively partition the search space can have difficulty finding rare patterns, as they lean toward separating out the majority class. Learning methods that avoid or minimize this approach tend to perform better on imbalanced data. An example is genetic algorithms: global search techniques that work with populations of candidate solutions rather than a single solution and employ stochastic operators to guide the search.

Optimize the Search Method Using Metrics Designed for Imbalanced Data

We explored several evaluation metrics in the earlier section to address the problem-definition-level issue. They can also play a role at the algorithm level, guiding the search process.

One approach uses a genetic-algorithm-based classification system with an F-measure fitness function that controls the relative importance of precision and recall, so that a diverse set of classification rules evolves, some with high precision and others with high recall. The expectation is that this eventually leads to rules with both high precision and high recall.

Another method is to separate learning into two phases: first optimize for recall to maximize coverage, then optimize for precision to remove false positives.

Algorithms that Implicitly Favor Minority Class

Cost-sensitive learning algorithms are among the most popular for handling imbalanced data. To understand why, think about how important it is to distinguish fraudulent from legitimate transactions. A missed fraudulent transaction costs the company more (both financially and reputationally) than incorrectly flagging a legitimate transaction as fraud (a delay on the customer’s end while the payment is reviewed). In this example, the costs of the two misclassification errors are no longer equal.

There are several implementation methods, including weighting the training observations in a cost-proportionate manner and building cost sensitivity directly into the learning algorithm.
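A common shortcut for the first method is the `class_weight` parameter of scikit-learn estimators. In this sketch, the 10:1 cost ratio and dataset parameters are illustrative assumptions, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Treat an error on the positive (e.g., fraud) class as 10x more costly
costed = LogisticRegression(max_iter=1000, class_weight={0: 1, 1: 10}).fit(X_tr, y_tr)

recall_plain = recall_score(y_te, plain.predict(X_te))
recall_costed = recall_score(y_te, costed.predict(X_te))  # typically higher
```

Pushing recall up this way usually trades away some precision, which is exactly the cost trade-off the business context should decide.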

Learn only the Minority Class

There are several approaches that learn classification rules for the minority class only. One is the recognition-based approach: it learns only from training observations of the minority class, identifying the hidden patterns they share.
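One way to sketch the recognition-based idea is a one-class SVM trained on minority observations only (toy data; `nu=0.1` is an illustrative choice):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_min = rng.randn(50, 2)  # training set: minority-class observations only

# nu bounds the fraction of training points treated as outliers
occ = OneClassSVM(nu=0.1, gamma="scale").fit(X_min)

X_new = np.vstack([rng.randn(5, 2),         # drawn from the same pattern
                   rng.randn(5, 2) + 6.0])  # far from the learned region
preds = occ.predict(X_new)  # +1 = fits the minority pattern, -1 = does not
```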

Another, more common approach learns from training observations of all classes but learns rules to cover the minority class first. The most popular algorithm is RIPPER, which builds rules using a separate-and-conquer approach. RIPPER generates rules for each class from the rarest to the most common. At each stage, it grows rules for one target class by adding conditions until no observations belonging to the other classes are covered (see illustration below).

Example of how the RIPPER algorithm works sequentially by creating rules and removing the data points that are already covered by those rules

Probability cut-off

This approach does not try to solve any of the issues stated above but rather deals with how we use the classifier’s result. Here, we use the prediction outputs as the probability of each class for an observation. We can then rank the predictions and produce a gain/lift table to evaluate the cut-off point.

Using the sample gain table below as an example, we can combine this information (number of observations, hit rate, % of default, cumulative and lift figures) with the business/problem objective to select which score / predicted probability cut-off to use.

Example of Gain / Lift table
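A gain/lift table of this kind can be sketched with pandas; the scores and labels below are simulated (correlated by construction), not from a real model:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
scores = rng.rand(1000)                          # simulated predicted probabilities
y = (rng.rand(1000) < scores * 0.3).astype(int)  # simulated outcomes, correlated with score

df = pd.DataFrame({"score": scores, "y": y})
df["decile"] = pd.qcut(df["score"], 10, labels=False)  # 0 = lowest scores, 9 = highest

gain = df.groupby("decile")["y"].agg(["count", "sum"]).iloc[::-1]  # top decile first
gain["hit_rate"] = gain["sum"] / gain["count"]
gain["cum_capture"] = gain["sum"].cumsum() / gain["sum"].sum()  # cumulative % of positives
gain["lift"] = gain["hit_rate"] / df["y"].mean()                # vs. random targeting
```

The cut-off is then chosen by reading down the table: stop at the decile where the cumulative capture or lift no longer justifies acting on more observations.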

End Notes / Takeaways

If you have read up to this point, congratulations! Hopefully, you have learned some of these concepts and can use them in a real problem.

  • Sometimes, it is better to think and rethink the problem. Have a clear goal and, if possible, break it down into achievable pieces.
  • Use appropriate evaluation metrics when dealing with imbalanced learning; these metrics should be based on the goal you want to reach.
  • Optimize the machine learning algorithm to handle imbalanced learning; cost-sensitive learning, one-class learning, and class re-weighting are some examples.
  • Re-sampling techniques (oversampling, undersampling) can be used, but with caution: by doing this, we change the reality of the data presented to the learning algorithm.

Thanks for reading, and happy learning!

Satsawat Natakarnkitkul — AVP, Senior Data Scientist — SCB — Siam Commercial Bank | LinkedIn



Published via Towards AI
