
MoneyBalling Cricket: Predicting Centuries — Base Model

Last Updated on July 26, 2023 by Editorial Team

Author(s): Arslan Shahid

Originally published on Towards AI.

Data preparation & binary classification modeling of centuries in cricket

Image from Pexels, by Mahafuzur Rehman

Centuries are a celebrated event in cricket, usually the result of a match-winning innings by the batsman. As a statistics enthusiast, I found this a great problem to model: it is not only immensely interesting, but its novelty also makes it challenging. This piece explains the reasoning behind how I prepared the data, which model I used, and the evaluation criteria.

Data

Before starting, a bit of information about the data source and verification.

  1. Data Source: All the data has been sourced from cricsheet.org, which offers ball-by-ball data for ODIs, T20s, and Test matches. I do not own the data, but cricsheet data is available under the Open Data Commons Attribution License. Everyone is free to use, build on, and redistribute the data with proper attribution under this license. Read about the license here.
  2. Data Verification: The founder of cricsheet does a good job of verifying the data, so it contains minimal errors. I verified the data by computing aggregates and comparing them with the aggregates available at major cricketing sites such as ESPNcricinfo.
  3. Data Dimensions & Time: The dataset contains 2,050 ODI matches, from 2004-01-03 to 2022-07-07, covering almost all major men's ODIs played during the period. It comprises 1,087,793 balls, 35,357 batsman knocks, and 4,002 innings.

In a previous post, I did a probabilistic analysis of centuries. A key finding was that, unconditioned on anything else, the empirically estimated probability of a batsman knock resulting in a century is only 3.16%. This matters because, when modeling a classification problem, class prevalence is probably the most crucial factor in determining the efficacy of your model(s). Low class prevalence usually means that performance on metrics like accuracy, precision, recall, and F1 score will be low. In simple words, centuries will be hard to predict if you try to predict them at the start of the match; you need to overcome this somehow to get any meaningful results.
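To make the class-imbalance point concrete, here is a minimal sketch of how such a prevalence figure can be computed from a knocks-level table. The file and column names (`odi_batsman_knocks.csv`, `runs_scored`) are assumptions for illustration, not the author's actual pipeline.

```python
import pandas as pd

# Hypothetical knocks-level table: one row per batsman innings ("knock"),
# with the runs scored in that knock. File and column names are illustrative.
knocks = pd.read_csv("odi_batsman_knocks.csv")

# A century is 100 or more runs in a single knock.
knocks["century"] = (knocks["runs_scored"] >= 100).astype(int)

# Unconditional class prevalence: the fraction of knocks that end in a century.
prevalence = knocks["century"].mean()
print(f"Century prevalence: {prevalence:.2%}")  # ~3.16% in the article's data
```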

Money Balling Cricket — Probability of 100 using repeated conditioning

Probabilistic analysis of Scoring ≥ 100 runs in cricket

towardsdatascience.com

Simplifying the problem

Any model trained on data sampled at the start of the match is unlikely to have predictive power, so I needed to simplify the problem. Predicting centuries at a point where models can be predictive, such as the moment the batsman reaches a threshold of 50-55 runs, gives much better results.

Another simplification is to exclude data points where a century is no longer 'possible': when the balls remaining don't permit a century (barring free hits), and, in the second innings, when the runs still required to win are fewer than the runs the batsman needs for a century. This reduces noise in the data. The last simplification is to exclude teams that do not play Test cricket, because these teams have very small samples compared to the major teams.

All of this simplification might seem like cheating, but when you are modeling, do the simplest thing first, then remove some of these restrictions to get a series of more 'complete' models.

Data Preparation

Data Preparation explained. Image by Author

The original dataset extracted from cricsheet is ball by ball. The way I designed the model, it takes in snapshots of the batsman's innings at the moment they cross 50 runs. The following steps were taken to prepare the data for modeling (a code sketch of these steps follows the list):

  1. Identify rows, or instances within matches, where a batsman crossed the 50-run threshold for the first time in an innings.
  2. These snapshots were then passed through a series of filters.
  3. The first filter checks that both the batting and bowling teams are Test-playing teams.
  4. The second filter ensures that a century is still possible, given the number of balls remaining.
  5. The third filter, which applies only to the second innings, deletes rows where the target score doesn't leave room for the batsman to complete their century.
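Here is a minimal pandas sketch of the snapshot extraction and the three filters, assuming a ball-by-ball table with running scores. Every file and column name (`batsman_cumulative_runs`, `balls_remaining`, `target`, and so on) is an assumption for illustration, not the author's schema.

```python
import pandas as pd

# Ball-by-ball data with a running score per batsman per innings.
balls = pd.read_csv("odi_ball_by_ball.csv")

TEST_TEAMS = {
    "Afghanistan", "Australia", "Bangladesh", "England", "India", "Ireland",
    "New Zealand", "Pakistan", "South Africa", "Sri Lanka", "West Indies",
    "Zimbabwe",
}

# Step 1: the first ball on which each batsman crosses 50 in an innings.
crossed = balls[balls["batsman_cumulative_runs"] >= 50]
snapshots = (
    crossed.sort_values(["match_id", "innings", "batsman", "ball_number"])
           .groupby(["match_id", "innings", "batsman"], as_index=False)
           .first()
)

# Filter 1: both teams must be Test-playing teams.
snapshots = snapshots[
    snapshots["batting_team"].isin(TEST_TEAMS)
    & snapshots["bowling_team"].isin(TEST_TEAMS)
].copy()

# Filter 2: a century must still be arithmetically possible,
# at most 6 runs per remaining legal ball (ignoring extras and free hits).
snapshots["runs_needed"] = 100 - snapshots["batsman_cumulative_runs"]
snapshots = snapshots[
    snapshots["balls_remaining"] * 6 >= snapshots["runs_needed"]
]

# Filter 3 (second innings only): the chase target must leave enough
# runs for the batsman to reach 100 before the match is won.
is_second = snapshots["innings"] == 2
enough_room = (
    snapshots["target"] - snapshots["team_score"] >= snapshots["runs_needed"]
)
snapshots = snapshots[~is_second | enough_room]
```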

At these snapshots, all historical data of the batsman (up to the current ball of the current match) was aggregated. For the batsman, their historic average against the team they are batting against was computed; for the bowling team, historical KPIs such as economy (runs per ball) and runs per wicket against the same batting team were computed. Where no head-to-head history existed, values were imputed with the overall team-level historical KPIs prior to the innings in question. Partnership statistics, such as the total runs of the current partnership and the partner's score, were also added to the dataset.
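Below is one possible sketch of the head-to-head aggregation with the fallback imputation described above. The `history` table and its columns (`batsman`, `bowling_team`, `match_date`, `runs`, `dismissals`) are hypothetical stand-ins for whatever pre-match summary one builds.

```python
def batsman_hist_avg(history, batsman, bowling_team, cutoff_date):
    """Historic batting average of `batsman` against `bowling_team`, using only
    matches strictly before `cutoff_date`; falls back to an overall average
    when there is no head-to-head history (one variant of the imputation above)."""
    past = history[
        (history["batsman"] == batsman) & (history["match_date"] < cutoff_date)
    ]
    head_to_head = past[past["bowling_team"] == bowling_team]
    if len(head_to_head) > 0:
        return head_to_head["runs"].sum() / max(head_to_head["dismissals"].sum(), 1)
    # Imputation: no history against this opponent, use the overall figure instead.
    return past["runs"].sum() / max(past["dismissals"].sum(), 1)
```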

These historic KPIs were important for making an informative model, but including them poses the risk of target leakage: training your algorithm on information that would not be available at prediction time. In our case, this can happen if the training dataset includes the historic KPIs of a match that occurs after a match in the test dataset. To prevent this, the data was sorted by time and by balls played within the match, with only the first 80% of the data used for training and the remaining 20% held out as the test dataset.

Test-Train Split. Image by Author
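The time-ordered 80/20 split can be written in a few lines of pandas. This sketch assumes the `snapshots` frame from earlier and sort keys named `match_date`, `match_id`, and `ball_number`.

```python
# Sort chronologically (and by ball within a match) so that no match from the
# test period leaks its historic KPIs into the training period.
snapshots = snapshots.sort_values(["match_date", "match_id", "ball_number"])

split_idx = int(len(snapshots) * 0.8)
train = snapshots.iloc[:split_idx]   # earliest 80% of snapshots
test = snapshots.iloc[split_idx:]    # most recent 20%, held out for evaluation
```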

Base Model

For my initial model, I chose a Binary Logit model, a.k.a. Logistic Regression, for the following reasons:

  1. Interpretability: Complicated modeling techniques such as neural networks would likely score better on performance metrics, but that comes at the cost of interpretability. Logistic Regression is easy to interpret, which often gives insight into how the independent variables impact the dependent variable.
  2. Debugging: In most modeling exercises, you have to debug your data and your model. Outliers and confounding effects are easier to identify in a simple model, which helps you clean your data or do better feature engineering.
Model equation. C() means that the variable is categorical. Image by Author

Note: hist economy is the historic economy of the bowling team against the batting team overall, up to that match, and hist Avg is the historic average of the batsman against the same bowling team. Both were imputed with the historic overall average/economy when there was no history between the batting team and the bowling team.
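The C() notation in the equation image matches statsmodels' formula syntax, so a binary logit like this one can be fitted roughly as below. The regressor names are assumptions that mirror the note above, not the author's exact formula.

```python
import statsmodels.formula.api as smf

# Illustrative formula; C() marks categorical variables, as in the equation image.
formula = (
    "century ~ current_runs + balls_remaining + wickets_lost"
    " + hist_avg + hist_economy + partnership_runs + partner_score"
    " + C(batting_team) + C(bowling_team) + C(innings)"
)

logit_model = smf.logit(formula, data=train).fit()
print(logit_model.summary())            # coefficients are easy to interpret

# Predicted century probabilities for the held-out snapshots.
test_probs = logit_model.predict(test)
```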

Have a data science problem and need an expert to solve it? Consider hiring me!

Model Evaluation

The model was evaluated on the test dataset using a series of metrics, which change as you change the decision boundary of the model. Logit models predict the probability of the event happening; it is up to the modeler to pick the cut-off threshold used to classify a prediction as a century or not a century.

By default, that decision boundary is set to 0.5 (a predicted probability above 0.5 is classified as 1, otherwise 0), but this fails in most cases with a large class imbalance, where one class is far more prevalent than the other. To overcome this, you change the decision boundary.
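In code, moving the decision boundary is just a matter of thresholding the predicted probabilities. A sketch, assuming the `test_probs` from the fitted model above:

```python
# Default boundary: predict a century whenever the predicted probability > 0.5.
pred_default = (test_probs > 0.5).astype(int)

# With a strong class imbalance this is usually too strict, so the boundary is
# lowered; 0.18 is the F1-maximising threshold reported later in the article.
pred_tuned = (test_probs > 0.18).astype(int)
```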

Below are the intuitive explanations of all the metrics used to evaluate our model:

Precision:

Formula — (True_positive)/(True_positive + False_positive).

Intuition: Maximizing precision means that you avoid including false positives in your predictions; you only include predictions that are extremely likely to be true positives. Intuitively, this means your predictions will contain many false negatives but few or no false positives.

Recall:

Formula — (True_positive)/(True_positive + False_negative).

Intuition: Maximizing recall means that you capture all the instances that are true positives and do not allow any false negatives. This means you will tolerate false positives but not false negatives. There is a tradeoff between precision and recall.

F1 score:

Formula — 2*(precision*recall)/(precision + recall).

Intuition: The F1 score is the harmonic mean of precision and recall. Maximizing the F1 score brings you to a 'mid-point' between precision and recall, where you scrutinize false negatives and false positives equally.

F-beta score :

Formula — (1 + Beta²) * (precision * recall) / ((Beta²) * precision + recall).

Intuition: Similar to the F1 score, the F-beta score also tries to reach a 'consensus point' between precision and recall, but the beta value skews the consensus in favor of either precision or recall. A beta value greater than 1 skews toward recall, and a beta value lower than 1 skews toward precision.
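The four formulas above translate directly into code. Here is a self-contained sketch that computes them from the confusion-matrix counts, assuming binary 0/1 arrays:

```python
import numpy as np

def classification_metrics(y_true, y_pred, beta=1.0):
    """Precision, recall, F1, and F-beta computed straight from the formulas above."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    fbeta = (
        (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
        if (precision + recall) else 0.0
    )
    return precision, recall, f1, fbeta
```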

Model Metrics Plot. The dashed line is the decision boundary where the f1 score is maximized. Image by the Author.

The curve above shows how the model metrics change as the decision boundary is changed. Which metric to maximize is purely a question of how you want to use the model. For example, in sports betting, if you want to bet big on a player making a century, you would like to be very sure that your prediction is a true positive, so you might optimize the model for precision or for an F-beta score with a beta value lower than 1.

In most cases, one would want to choose a point that penalizes false positives and false negatives equally, so the F1 score makes the most sense. The F1 score is maximized at an 18% threshold, where the model has an F1 score of 48%, an accuracy of 60%, and a recall of 70%. This means the model captured 70% of all true positives, but the precision of 38% is low. Any improvement on this base model should be more precise in its predictions!
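Reproducing a metrics-versus-threshold curve like the plot above is a simple sweep. This sketch reuses `test_probs`, the assumed `century` label on the test set, and the `classification_metrics` helper from the previous snippet.

```python
import numpy as np
import pandas as pd

y_true = test["century"].values  # assumed 0/1 labels on the held-out snapshots

rows = []
for threshold in np.arange(0.01, 1.00, 0.01):
    y_pred = (test_probs >= threshold).astype(int)
    precision, recall, f1, _ = classification_metrics(y_true, y_pred)
    rows.append({"threshold": threshold, "precision": precision,
                 "recall": recall, "f1": f1})

curve = pd.DataFrame(rows)
best = curve.loc[curve["f1"].idxmax()]
print(f"F1 peaks at a threshold of {best['threshold']:.2f} "
      f"(precision {best['precision']:.2f}, recall {best['recall']:.2f})")
```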

These were the metrics used to find the optimal decision boundary. Now let us evaluate the model as a whole. For that, I used the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.

ROC curve. Image by the Author.

Intuition (ROC): The ROC curve shows the relationship between the true positive rate (TPR) and the false positive rate (FPR). The true positive rate is the same as recall. The ROC curve tells you how the two change together; remember, to capture more true positives, you also have to tolerate more false positives. The origin line is where TPR and FPR are equal; a model with TPR = FPR at every decision boundary is making purely random predictions.

Intuition (AUC): The area under the curve is a metric of the overall performance of the model. The AUC ranges from 0 to 1, with 0 meaning no predictive power, 1 meaning perfect predictive power, and 0.5 meaning purely random predictions (no better than flipping a coin).

The origin line forms a triangle with the x-axis, with base and height both equal to 1, so it has an AUC of 0.5 (area of a triangle = 0.5 * base * height). Our model's ROC curve has an AUC of 0.653, which implies it is much better than random!
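The ROC curve and its AUC can be computed with scikit-learn. A sketch, assuming `y_true` and `test_probs` from the earlier snippets:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, _ = roc_curve(y_true, test_probs)
auc = roc_auc_score(y_true, test_probs)

plt.plot(fpr, tpr, label=f"Logit model (AUC = {auc:.3f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="Random guess (AUC = 0.5)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate (recall)")
plt.legend()
plt.show()
```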

Improvements to the model can be made in two ways: using more sophisticated algorithms, or predicting centuries at a threshold other than 50-55 runs!

Thank you for reading! I will explore different scoring thresholds and different, more complicated models in my next post on the topic. Stay tuned!

Want to read more about statistical modeling in cricket? Please do check these out:

Money Balling Cricket — Statistically evaluating a Match

Evaluating player performances using descriptive statistics

medium.com

Money Balling Cricket: Averaging Babar Azam’s runs

One of the key elements in the movie Moneyball(2011) is that Billy Beane (Brad Pitt) and Peter Brand (Jonah Hill)…

arslanshahid-1997.medium.com

Or maybe you want something different:

Lies, Big Lies, and Data Science?

I’m sure that with all the hype surrounding data science, machine learning, and artificial intelligence, you’ve been…

medium.com

Please do follow me on Medium, Twitter, and LinkedIn. Don't forget to click the email icon so that you receive my posts by email.


Published via Towards AI
