Deep Learning: Forecasting of Confirmed Covid-19 Positive Cases Using LSTM

Last Updated on January 6, 2023 by Editorial Team


Author(s): Dede Kurniawan

Originally published on Towards AI, the world’s leading AI and technology news and media company.

Describing each step in the deep learning method for forecasting positive cases of Covid-19 in Indonesia

Photo by Markus Spiske from Pexels

Introduction

Almost every part of the world has been shaken by the Covid-19 outbreak, which has infected many people and caused millions of deaths. The first case of Covid-19 was detected in Wuhan City, Hubei Province, China, in December 2019. On January 30, 2020, the World Health Organization (WHO) declared it a global health emergency. Covid-19 spreads quickly because it can be transmitted through the air within a radius of about 2 meters, through direct contact, or when an infected person sneezes or coughs [3].

The Covid-19 pandemic has significantly impacted people’s lives in many areas, such as the economy, health, and education. Policies implemented by the Indonesian government include mandatory masks in public places, micro-scale social restrictions, social distancing (2 meters), online learning, working from home, and closing venues that attract crowds [2]. However, not all of these policies have effectively prevented the transmission of Covid-19. A reliable early-warning method is therefore very important for estimating how much the disease will affect the community; on that basis, the government can hopefully implement the right policies for dealing with the pandemic.

How do we find a suitable early-warning method for dealing with Covid-19? With machine learning or deep learning, we can forecast the number of Covid-19 infections. Once we know the estimated number of cases, we can craft policies for dealing with the pandemic that are informed by the data.

You can access the Google Colaboratory notebook that we use here.

Task

We will analyze Covid-19 data in Indonesia and create a deep learning model to forecast the number of cases infected with Covid-19. Our processes include data cleaning and pre-processing, exploratory data analysis, modeling, and drawing conclusions.

We use the dataset available at https://tiny.cc/Datacovidjakarta

Import libraries

Since we are using the Python programming language, there are several libraries we will rely on in this article. The first thing we have to do is import all the required libraries.

We import NumPy for numerical computation, Pandas for data manipulation, Matplotlib, Seaborn, and Plotly for data visualization, Scikit-learn for data preprocessing and other machine-learning utilities, and TensorFlow for building neural networks.

Data cleaning and pre-processing

Before preparing or cleaning the data, we must load the dataset into a data frame using pandas. To do so, we can use the pd.read_excel() function.

We also display the first five rows of our dataset using .head().
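A minimal sketch of these two steps is below. The spreadsheet filename is a placeholder (the real file comes from the dataset link above), so a tiny stand-in frame with similarly named columns is constructed for illustration:

```python
import pandas as pd

# In the article the spreadsheet is loaded with pd.read_excel; the filename
# here is a placeholder for the file downloaded from the dataset link.
# data = pd.read_excel("covid_jakarta.xlsx")

# Tiny illustrative stand-in with the same kind of columns (values invented):
data = pd.DataFrame({
    "Tanggal": pd.date_range("2020-03-01", periods=5),
    "Positif (Indonesia)": [2, 4, 6, 19, 27],
    "Sembuh (Indonesia)": [0, 0, 0, 1, 2],
    "Meninggal (Indonesia)": [0, 0, 0, 1, 1],
})
print(data.head())  # inspect the first five rows
```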

In case our dataset gets corrupted during data cleaning, we use the .copy() method to duplicate it.

df = data.copy()

Some column names may be hard to understand because they are in Bahasa Indonesia, so we will rename them in English. We will also focus only on the columns ['Positif (Indonesia)', 'Sembuh (Indonesia)', 'Meninggal (Indonesia)']; therefore, we will drop the columns that are not used.
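The renaming and dropping can be sketched like this. The English column names (`date`, `positive`, `recovered`, `death`) are assumptions chosen to match the terminology used later in the article, and the stand-in frame is illustrative:

```python
import pandas as pd

# Illustrative frame mirroring the dataset's columns (values invented).
df = pd.DataFrame({
    "Tanggal": pd.date_range("2020-03-01", periods=3),
    "Positif (Indonesia)": [2, 4, 6],
    "Sembuh (Indonesia)": [0, 1, 2],
    "Meninggal (Indonesia)": [0, 0, 1],
    "Positif (Jakarta)": [1, 2, 3],  # example of a column we drop
})

# Keep only the national columns and translate the headers to English.
df = df[["Tanggal", "Positif (Indonesia)",
         "Sembuh (Indonesia)", "Meninggal (Indonesia)"]]
df = df.rename(columns={
    "Tanggal": "date",
    "Positif (Indonesia)": "positive",
    "Sembuh (Indonesia)": "recovered",
    "Meninggal (Indonesia)": "death",
})
print(df.columns.tolist())
```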

To see information from the dataset, we can use df.info().

The dataset we use has 893 rows and 4 columns. The date column has the datetime64 data type, and the other columns have the float64 data type. However, the date column has more non-null rows than the other three columns, so we can assume those three columns contain missing values.

In the code df.dropna(inplace=True), we drop the rows with missing values. Then, in the code df.drop(index=0, inplace=True), we drop the first row because its time difference is inconsistent with the rows below it. Next, we reset the index with df.reset_index(drop=True, inplace=True).

The next thing we are going to do is check whether there are any duplicates or missing values left in the data; if there are, we should drop them.
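A sketch of that check, on a small illustrative frame with one missing value:

```python
import numpy as np
import pandas as pd

# Illustrative frame with a deliberately missing value.
df = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=4),
    "positive": [2.0, 4.0, np.nan, 4.0],
})

print(df.duplicated().sum())   # number of fully duplicated rows
print(df.isnull().sum())       # missing values per column

# Drop duplicates and missing rows, then rebuild a clean index.
df = df.drop_duplicates().dropna().reset_index(drop=True)
```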

If there is no duplication in our data, the data cleaning process ends here. Before moving on to exploratory data analysis, we look at the data frame info once more to make sure the cleaning is fully complete.

Finally, we should present an overview of our dataset once again.

Exploratory data analysis

At this stage, we will explore the dataset and try to understand it better. To get a summary of its descriptive statistics, we can use .describe(), with .transpose() to swap rows and columns.

df.describe().transpose()

Next, we want to know the comparison of the total number of positive confirmed cases, the number of death cases, and the number of recovered cases. To do this, we can visualize the data so that it is easier to understand.
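A minimal pie-chart sketch with Matplotlib, using illustrative totals (the article also uses Plotly; the column names assume the English renaming above):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative cumulative counts; in the article, df holds the cleaned data.
df = pd.DataFrame({
    "positive": [100, 150, 200],
    "recovered": [90, 140, 190],
    "death": [2, 3, 4],
})

totals = df[["positive", "recovered", "death"]].sum()
plt.figure(figsize=(6, 6))
plt.pie(totals, labels=totals.index, autopct="%1.1f%%")
plt.title("Total positive vs. recovered vs. death cases")
plt.savefig("cases_pie.png")  # or plt.show() in a notebook
```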

Image by the author.

According to the pie chart above, the shares of recovered cases and positive cases are almost equal. This indicates a high recovery rate from the Covid-19 virus, although the spread rate is also high. Death cases occupy a small area compared to positive and recovered cases, which indicates a high survival rate.

Then, we also want to see the history of Covid-19 cases over time using a line chart.
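The history plot can be sketched as follows (illustrative data; the column names assume the English renaming above):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative cumulative counts; in the article, df holds the cleaned data.
df = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=5),
    "positive": [10, 30, 70, 120, 200],
    "recovered": [5, 20, 50, 100, 180],
    "death": [0, 1, 2, 3, 4],
})

plt.figure(figsize=(10, 5))
for col in ["positive", "recovered", "death"]:
    plt.plot(df["date"], df[col], label=col)
plt.xlabel("date")
plt.ylabel("cumulative cases")
plt.legend()
plt.savefig("cases_history.png")  # or plt.show() in a notebook
```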

Image by the author.

According to the line chart above, we can see that positive cases occasionally rise quickly, but the recovery rate from Covid-19 is also quite high, and the number of deaths is very small compared to the very high rate of spread.

Exploring the correlation between the data will be our final step in the exploratory data analysis process.

Visualization of correlation between data using heatmap (Image by author).

Here, we calculate the correlation between the variables using the Pearson method, which measures the linear correlation between two variables. All of these variables are highly positively correlated with each other: when one increases, the others tend to increase as well. For example, when positive cases go up, recoveries also go up.
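A minimal sketch of the Pearson-correlation heatmap with Seaborn (illustrative data, assumed English column names):

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Illustrative cumulative counts; in the article, df holds the cleaned data.
df = pd.DataFrame({
    "positive": [10, 30, 70, 120, 200],
    "recovered": [5, 20, 50, 100, 180],
    "death": [0, 1, 2, 3, 4],
})

corr = df.corr(method="pearson")  # pairwise linear correlation
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.savefig("correlation_heatmap.png")  # or plt.show() in a notebook
```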

Modeling

What we have to do before creating a deep learning model is prepare the preprocessed data. Since we are focusing on the positive column, we will drop the other columns.

Here, we also change the data type to int64 and convert the data frame to a one-dimensional NumPy array. Then the data must be scaled.
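A sketch of the conversion, assuming the `positive` column name from the earlier renaming:

```python
import pandas as pd

# Illustrative frame; in the article, df holds the cleaned positive counts.
df = pd.DataFrame({"positive": [2.0, 4.0, 6.0]})

# Cast to int64 and flatten into a 1-D NumPy array for the model pipeline.
series = df["positive"].astype("int64").to_numpy()
print(series.dtype, series.shape)
```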

Here, we scale the data using Scikit-learn’s MinMaxScaler. We fit the scaler only on the first 80% of the dataset, i.e., the portion used for model training, so that no information leaks from the test set into the scaling.
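A sketch of scaling only the training portion (the 80% split fraction comes from the article; the series here is synthetic):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic 1-D series standing in for the positive-case counts.
series = np.arange(100, dtype="float64")

split = int(len(series) * 0.8)          # 80% of the data for training
scaler = MinMaxScaler()
scaler.fit(series[:split].reshape(-1, 1))  # fit on the training portion only
scaled_train = scaler.transform(series[:split].reshape(-1, 1)).ravel()
```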

We next create a function that separates the dataset into training data and test data, with 80% of the data for training and 20% for testing.
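One way to sketch the split, together with the sliding-window step that an LSTM needs (the window length of 30 is an assumption, not taken from the original notebook):

```python
import numpy as np

def train_test_split_series(series, train_frac=0.8):
    """Split a 1-D series chronologically: first 80% train, last 20% test."""
    split = int(len(series) * train_frac)
    return series[:split], series[split:]

def make_windows(series, window_size=30):
    """Slice a 1-D series into (samples, window) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])
        y.append(series[i + window_size])
    return np.array(X), np.array(y)

train, test = train_test_split_series(np.arange(100.0))
X_train, y_train = make_windows(train)
```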

Next, we design a neural network architecture to be used in time series data prediction (forecasting).

Image by the author.

We design a neural network model using Conv1D, LSTM, and dropout layers. LSTM neurons are very effective on time series data; here, we also add dropout layers as regularization. The model has 99,521 parameters in total.
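A sketch of such an architecture in Keras. The layer widths, kernel size, dropout rate, and window length are assumptions for illustration, so the parameter count of this sketch will differ from the 99,521 reported in the article:

```python
import tensorflow as tf

window_size = 30  # assumed input window length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_size, 1)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dropout(0.2),   # regularization between recurrent layers
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),       # one-step-ahead forecast
])
model.summary()
```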

After constructing the neural network architecture, we have to define the loss function as well as the optimizer. We choose mean squared error (MSE) as the loss function and Adam (Adaptive Moment Estimation) as the optimizer. We chose Adam because it is adaptive and requires less tuning of the learning-rate hyperparameter.

model.compile(loss= 'mse', optimizer= 'adam')

The next step we will take is to train our model with the data that has been prepared.

Result of model training (Image by author).

During the model training process, we also schedule the learning rate and apply early stopping. We chose the ReduceLROnPlateau callback because it adapts the learning rate to the monitored metric: the learning rate decreases when the metric stops improving.
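The callbacks and the training call can be sketched as follows. The patience values, epoch count, and the tiny model and synthetic data are assumptions so that the snippet runs on its own:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic windowed data so the snippet is self-contained.
X = np.random.rand(64, 30, 1).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")

callbacks = [
    # Halve the learning rate when validation loss stops improving.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                         factor=0.5, patience=3),
    # Stop training once validation loss plateaus for longer.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=6,
                                     restore_best_weights=True),
]

history = model.fit(X, y, validation_split=0.2, epochs=5,
                    callbacks=callbacks, verbose=0)
```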

After we finish training the model, we will evaluate the performance of the model we have trained.

Visualization of training and validation loss (Image by author).

To make the model easier to evaluate, we visualize the training loss and validation loss with a line chart. The graph shows that the error on the validation data decreases as the number of epochs increases.

We will also compare the actual value as well as the predicted value of the model. To make it easier to observe, we will also visualize it.

Actual value vs. Predicted value (Image by author).

It can be seen that the model predicts values close to the actual values, so we conclude that our model performs reasonably well.

Next, we prepare the data that we will use to make predictions (forecast).

Next, we will plot the entire dataset and compare it with the model’s predictions. This is much the same as above, but it now includes all of our data.

In the line chart, it can be seen that the predictions of the model are almost equal to the actual values.

Now we will predict Covid-19 cases for the next 30 days.
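A common way to forecast beyond the data is to roll the window forward one step at a time, feeding each prediction back in as input. This sketch assumes that approach (the article does not spell out its forecasting loop) and uses a naive stand-in model so it runs on its own; in practice you would pass the trained Keras model:

```python
import numpy as np

def forecast(model, last_window, steps=30):
    """Iteratively predict `steps` values, appending each prediction
    to the window before predicting the next one."""
    window = list(last_window)
    preds = []
    for _ in range(steps):
        x = np.array(window[-len(last_window):]).reshape(1, -1, 1)
        next_val = float(model.predict(x, verbose=0)[0, 0])
        preds.append(next_val)
        window.append(next_val)
    return np.array(preds)

class _StubModel:
    """Stands in for the trained Keras model; repeats the last value."""
    def predict(self, x, verbose=0):
        return x[:, -1, :]

preds = forecast(_StubModel(), np.arange(30.0), steps=30)
```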

Finally, we visualize the predicted data. This also makes it easier for us to analyze and understand the data.

Visualizing the predicted output (Image by author).

We make predictions for the next 30 days using the model we designed and trained, then visualize the results with a line chart. It can be seen that, over the next 30 days, our model predicts that Covid-19 cases in Indonesia will continue to increase.

Summary

Covid-19 is a pandemic that is still ongoing today. Wise decision-making should be based on existing data, and with that data we can predict future events. Based on the comparison of positive, recovered, and death cases in the analysis above, the Covid-19 pandemic in Indonesia does not appear too bad.

We focused on positive cases and built a deep learning model to predict future cases. The model forecasts that positive Covid-19 cases will continue to increase over the next 30 days. However, the forecast carries uncertainty, and the model does not predict the true outcome with 100% accuracy, so decisions should also account for future conditions.

References:

[1] Kurniawan, A., & Kurniawan, F. (2021). Time Series Forecasting for the Spread of Covid-19 in Indonesia Using Curve Fitting. 2021 3rd East Indonesia Conference on Computer and Information Technology (EIConCIT). https://doi.org/10.1109/eiconcit

[2] Géron, A. (2019). Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition (2nd ed.). O’Reilly Media.

[3] Painuli, D., Mishra, D., Bhardwaj, S., & Aggarwal, M. (2021). Forecast and prediction of COVID-19 using machine learning. Data Science For COVID-19, 381–397. https://doi.org/10.1016/b978-0-12-824536-1.00027-7

[4] Setiati, Siti & Azwar, Muhammad. (2020). COVID-19 and Indonesia. Acta medica Indonesiana. 52. 84–89.


Deep Learning: Forecasting of Confirmed Covid-19 Positive Cases Using LSTM was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
