
Let’s Do: Time Series Decomposition

Last Updated on June 28, 2023 by Editorial Team

Author(s): Bradley Stephen Shaw

Originally published on Towards AI.

What makes your time series tick? There’s only one way to find out — by taking it apart.

Photo by Sean Whelan on Unsplash

Time series are quite possibly the most ubiquitous collections of data available today. In fact, you’ve probably observed many yourself without even knowing — doing your grocery shopping, watching a sports match, and even just walking around.

Put simply, a time series is a series of data points that are indexed, listed, or graphed in time order¹.

Time series approaches generally fall into two buckets: extracting meaningful information from past data (time series analysis) and modeling past data in order to predict future values (time series forecasting). In this article, we’ll be focusing on the former, exploring:

  1. Model form
  2. Location and trend
  3. Seasonality
  4. Residual and noise
  5. A useful example

First up, some theory.

The building blocks of a time series

It’s useful to consider a generic time series as some combination of underlying drivers: location and trend, seasonality, and residual.

Location and trend

Often bundled together into a single “trend” term, location specifies the starting point of the series, and trend relates to long-term changes over time.

Usually, we think of trends as being movements in time series unrelated to regular seasonal effects and random shocks.

Seasonality

Seasonal effects are regular, systematic variations in the time series.

The drivers of these effects vary depending on the context of the analysis but are commonly related to calendar dates or physical conditions — e.g., summer holidays or the frequency at which a traffic light turns red.

Residual

Sometimes called noise, the residual is the catch-all bucket containing variation which is unrelated to long-term and/or predictable changes — i.e., this component is whatever remains after trend and seasonality have been removed from the time series.

As an example, a volcano eruption or a large sports event could cause shocks in a time series of airline passengers; all of this would get bundled into the residual.

Now that we’ve got a basic understanding of the elements of a time series, let’s put them together in a model.

Model form

We can combine trend, seasonality, and residual into a decomposition model. These models are usually additive or multiplicative depending on the characteristics of the time series in question. Mathematically, this becomes:

Image by author

The determining factor of the decomposition model type is whether the amplitude of the seasonal and residual elements changes as the trend changes²: if the size of the seasonal and residual elements is consistent regardless of the trend, an additive model is most appropriate; if they scale with the trend, a multiplicative model is the better fit.
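In standard notation, writing Yₜ for the observed value and Tₜ, Sₜ, and Rₜ for the trend, seasonal, and residual components at time t, the two model forms are:

```latex
Y_t = T_t + S_t + R_t          % additive model
Y_t = T_t \times S_t \times R_t % multiplicative model
```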

Enough theory for now — let’s see a practical example of decomposition!

The classic example

This set of airline data³ is commonly used in demonstrations of time series analysis, and for a good reason: it demonstrates the key components of a time series extremely well.

After some manipulation, we have monthly counts of airline passengers dating back to 1949:

Image by author

Plotting this data, we see the following:

Image by author
  1. A clear and consistent upward trend over time.
  2. Strong seasonality, which becomes more pronounced as the trend increases.
  3. Some evidence of a residual, as we can see what looks like random variation in the series.

As we’ve seen the magnitude of the seasonality element move in line with the magnitude of the trend element, we can be fairly certain that a multiplicative model is appropriate here.

Let’s go ahead and decompose the time series using the seasonal_decompose function in Python’s statsmodels⁴. This decomposition first determines the trend and then calculates seasonality; the difference between the time series and the derived trend and seasonality falls into the residual.

This is a “naive” decomposition approach as it relies on the use of moving averages to calculate trend — more on that below.
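As a sketch of what that moving average looks like (an illustration of the idea, not statsmodels’ exact code): for an even period such as 12 months, a centred “2×12” average that half-weights the two endpoints is typically used, so the window is symmetric around each point.

```python
import numpy as np

def centered_ma(x, period=12):
    # for an even period, half-weight the two endpoints so the window is
    # centred (a "2x12" moving average when period=12); weights sum to 1
    w = np.concatenate(([0.5], np.ones(period - 1), [0.5])) / period
    # mode="same" keeps the series length; values near the ends come from an
    # incomplete window and are unreliable -- statsmodels returns NaN there
    return np.convolve(x, w, mode="same")

# a centred moving average reproduces a linear trend exactly in the interior
x = np.arange(24.0)
trend = centered_ma(x)
```

Because the weights are symmetric and sum to 1, a purely linear series passes through the filter unchanged away from the edges, which is why this is a reasonable trend estimator.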

Code-wise, the decomposition is quite simple:

# decompose the series into trend, seasonality, and residual
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

result = seasonal_decompose(
    df['passengers'].values,
    model='multiplicative',
    period=12,
    extrapolate_trend=6,
)

# get each element as a pandas Series
trend = pd.Series(result.trend)
season = pd.Series(result.seasonal)
res = pd.Series(result.resid)

Let’s take a look at the parameters used:

  • model specifies the decomposition model type. In this case, we’ve explicitly stated that we’re using the multiplicative model.
  • period specifies the period after which the seasonal behavior of the time series can be expected to repeat. Since we’re using monthly data, we would expect the seasonal element to repeat itself every 1 year, or 12 months. If we were using daily data, we would set period to be 7, reflecting the number of days in a week.
  • extrapolate_trend relates to the moving average used to derive the trend. The default approach calculates a “two-sided” moving average reflecting the period provided and returns NaN for points at which the moving average cannot be calculated. seasonal_decompose provides the option to replace these NaNs with values extrapolated from the nearest extrapolate_trend points. In our case, the two-sided moving average returns NaN for the first and last 6 points (since period = 12); we use the nearest 6 points to create an extrapolation to replace these missing values.

Let’s plot each component of the decomposition and remove each from the series in turn:

Image by author

The trend component is clearly upward and consistent over time. No surprises here, as we saw in the Raw series. Note how regular the start and finish of the trend are — this is caused by the extrapolation we covered above.

Once we remove the trend from the series — or “detrend” it — we see a mostly regular pattern (top right above). Note how the de-trended series appears most consistent from about 1954 to 1958, with data points on either side of that interval appearing fairly noisy.

Seasonality is shown in the middle left chart above. Since seasonality is calculated such that it is a consistent pattern across the entire time series, we show only one period of the seasonal effects. Notice how seasonality peaks around the July and August summer holidays and drops around the start of the academic year in September and October — fairly sensible and in line with expectations.

Removing both trend and seasonality from the series leaves just the residual — we see this as the middle right and bottom left charts are the same. We can see a noticeable change in the amplitude of the residual from 1954 to 1958, coinciding with changes in the raw series.

Finally, removing the residual from the de-trended, de-seasonalised series delivers a constant series of value 1.

Pro tip #1: under the multiplicative model, we remove an element of the time series by dividing the (raw) series by the element in question. As an example, if we were to remove seasonality from the equation, we would divide the raw series by the seasonality values. Under the additive model, we would subtract the component rather than divide by it.

Pro tip #2: under the multiplicative model, removing all of the constituent parts of a time series leaves a constant series of 1. If we were using an additive model, this “remainder” would be 0. This is a good check to ensure that all the elements of the time series have been captured appropriately.
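Both pro tips can be sketched with a toy multiplicative series built from known components (all numbers below are made up):

```python
import numpy as np

# hypothetical components of a multiplicative series
trend = np.array([100.0, 110.0, 120.0, 130.0])
season = np.array([1.1, 0.9, 1.1, 0.9])
resid = np.array([1.0, 1.02, 0.98, 1.0])
series = trend * season * resid

# pro tip #1: remove a component by dividing it out
# (under an additive model, subtract instead)
deseasoned = series / season

# pro tip #2: removing every component leaves a constant series of 1
remainder = series / (trend * season * resid)
```

Here remainder is exactly 1 everywhere, which is the sanity check described above.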

Practical applications

Now that we’re familiar with decomposition, let’s take a look at how we can use it in our analysis.

Removing seasonality highlights trends and unexpected events

High levels of seasonality can mask trends and events, especially if the event takes place during a period of strong seasonality.

For example, consider the number of daily walkers in a park; during the colder winter months, we would expect a drop in these numbers — no big deal. But what if a short cold snap kept even more people away? Would it be easy to see that in a series without seasonality removed? Probably not!

Image by author

If we remove the derived seasonality from the airline data, we get the chart above (orange for de-seasoned data, grey for original series). Note how a downward spike around March 1960 appears to be masked, and only becomes noticeable once we strip out seasonality.
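The same mechanic can be shown with a made-up monthly example, where a one-off 20% drop during the seasonal peak is hard to spot until seasonality is divided out:

```python
import numpy as np

# hypothetical seasonal factors peaking in July (index 6)
season = np.array([0.9, 0.9, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.1, 1.0, 0.9, 0.9])
raw = 100 * season.copy()
raw[6] *= 0.8  # an unexpected 20% drop during the peak month

# in the raw series, July (~120) still sits above most other months...
deseasoned = raw / season
# ...but after de-seasoning, July is clearly the minimum
```

Every other month de-seasons to 100, so the shocked month stands out immediately once the seasonal factor is removed.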

Removing noise creates a more regular time series

By definition, the residual component is noisy and contains unexpected events and changes. It’s quite often useful to ask “what if?” questions, like “what if that random thing didn’t happen?”. We can get to a quick answer by simply removing the residual from a time series.

Image by author

Consider the airline series above. The original series is plotted in grey, and the denoised series is in red. Notice how removing random events creates a more regular time series that’s better suited to analysis and forecasting.

Recap and ramble

The recap

We covered the constituent components of a time series, and how we can put them together. We discussed how long-term changes in a time series are captured as trend, how the seasonality component represents regular and periodic variation, and how random fluctuations and shocks are captured in the residual component.

We also saw how we could formulate an additive and multiplicative time series model, the choice of which to use depending on how the amplitude of the seasonality and residual interacts with the trend.

We used statsmodels to decompose an example time series and explore the various components and their effects on the original series. Lastly, we discussed some practical use cases: the motivation for removing seasonality and for removing the residual from a time series.

The ramble

Implicit in this analysis is the use of a univariate time series — i.e., we concentrated on measurements of a single quantity over time. Things don’t tend to operate in isolation in the real world, so it may make sense to analyze multiple time series in conjunction. Luckily for us, multivariate time series analysis is a well-developed branch of statistics!

I mentioned that the time series decomposition approach we used was “naive”: naive in the sense that it relies on moving averages. While moving averages are great (and simple) tools, they can be sensitive to the choice of window size. Other decomposition approaches are available, like STL⁶, which may provide a more nuanced and robust result.

Finally, we need to talk about change points: depending on the size and treatment of the change, the decomposition can be distorted. COVID is a great example where not accounting for the effect of lockdown periods properly can create a distorted seasonality result, which in turn can distort conclusions drawn from an analysis.

That’s it from me — I hope you’ve enjoyed reading as much as I’ve enjoyed writing. Keep an eye out for more articles as I delve into time series analysis and forecasting.

References and resources

  1. Time series, Wikipedia
  2. Time Series Analysis: The Basics, Australian Bureau of Statistics (abs.gov.au)
  3. Yao, Wei (2016), “international-airline-passengers”, Mendeley Data, V1, doi: 10.17632/vcwrx2rwtg.1, downloaded and used under the CC BY 4.0 license.
  4. statsmodels 0.14.0 documentation
  5. statsmodels.tsa.seasonal.seasonal_decompose, statsmodels documentation
  6. STL decomposition, Forecasting: Principles and Practice (2nd ed.), otexts.com
