Forecast The Future With Time Series Analysis

Last Updated on March 31, 2022 by Editorial Team

Author(s): Bala Peddireddy

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company.

A detailed explanation of univariate time series analysis with an example…✈

Introduction

Time series analysis is a way of analyzing data that is sequenced in a date-time format. In simple words, the index of the data frame consists of timestamps (dates). A univariate time series contains only one variable: the target variable, which is predicted or forecasted based on time. Let me explain the concepts of univariate time series analysis with an example.
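For instance, a univariate series is a single value column indexed by timestamps; here is a toy sketch (the values are borrowed from the airline dataset used below):

```python
import pandas as pd

# A toy univariate time series: one target column, a datetime index.
ts = pd.Series(
    [112, 118, 132],
    index=pd.to_datetime(["1949-01-01", "1949-02-01", "1949-03-01"]),
    name="Thousands of Passengers",
)
print(ts.loc["1949-02-01"])  # the value observed in February 1949
```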

Getting Started

In this article, I will be working on the "Air Passengers" dataset from the Kaggle website. This dataset contains a monthly record of the passengers who traveled through airlines from Jan 1949 to Dec 1960.

Import Packages

First of all, I will suppress warnings in the Python notebook by importing the warnings package and calling the filterwarnings method.

Code:

import warnings
warnings.filterwarnings("ignore")

Then, import the required libraries into the Python notebook with the import command. Here, the Pandas library lets us work with the dataset, the NumPy library handles numerical operations on the data frame, and Matplotlib plots the data frame.

Code:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Read Dataset

Read the dataset (airline_passengers.csv) with the help of the read_csv method from the Pandas package and display the first 5 rows using the head method.

Code:

df = pd.read_csv("airline_passengers.csv")
df.head()

Explore The Data

Explore the dataset with the help of the built-in methods of the Pandas package.

Code:

df.info()
df.describe()

As we can see, the Month column is of object type. We need to convert it into date-time format using the to_datetime method and then assign it as the index of the data frame using the set_index method.

Here, inplace=True means the data frame itself gets modified by the operation.
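A minimal sketch of that conversion, on a small stand-in frame (the real file has 144 monthly rows; the column names are assumed from the Kaggle CSV):

```python
import pandas as pd

# Stand-in for the airline data read from CSV.
df = pd.DataFrame({
    "Month": ["1949-01", "1949-02", "1949-03"],
    "Thousands of Passengers": [112, 118, 132],
})

# Convert the object-typed Month column to datetime, then make it the index.
df["Month"] = pd.to_datetime(df["Month"])
df.set_index("Month", inplace=True)  # inplace=True modifies df itself
```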

Set the frequency of the date-time index to 'MS' using the freq attribute because the dates of the index are at the beginning of the month.

Code:

df.index.freq = 'MS'

Check For Null Values

Let's check whether the data frame contains null values or not by using the isnull method.

Code:

df.isnull().sum()

No null values are present in the dataset. Otherwise, we would use some techniques to handle missing values like forward-filling, backward-filling, interpolation, etc.
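The airline data needs none of this, but had there been gaps, a minimal sketch of those fill strategies on a toy series might look like:

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])

forward = s.ffill()             # carry the last valid value forward
backward = s.bfill()            # pull the next valid value backward
interpolated = s.interpolate()  # linear interpolation between neighbours
```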

Plot The Data

Plot the data frame using the plot method from the Matplotlib library to see how the data behaves through time. Here, the figure method is used to set the size of the plot.

Code:

plt.figure(figsize=(12,6))
plt.plot(df['Thousands of Passengers'])
plt.title("Monthly total of Airline Passengers")
plt.ylabel("In Thousands")
plt.xlabel("Year")
plt.show()

From the above plot, we can say that the data has an upward trend, which means a gradual increase in values through time, and there is some seasonality in the graph.

Check For Seasonality

Seasonality is a pattern in which the data varies at regular intervals every year. It can be on a weekly, monthly, or quarterly basis. In order to see it clearly, divide the plot based on the seasonal period. From the above figure, we can say that it is on a yearly basis.

Code:

plt.figure(figsize=(12,5))
plt.plot(df['Thousands of Passengers'])
plt.title("Monthly total of Airline Passengers")
plt.ylabel("In Thousands")
plt.xlabel("Year")
for x in df.index[df.index.month == 12]:
    plt.axvline(x=x, color='red')
plt.show()

Decompose The Signal (Data)

To gain more clarity, let's decompose the series into three components (trend, seasonal, and residual plots) using the seasonal_decompose method from the statsmodels library.

Here, the model argument is set to additive because the graph increases gradually with respect to time (trend component). If it increased exponentially, we would set it to multiplicative.

Code:

from statsmodels.tsa.seasonal import seasonal_decompose
result = seasonal_decompose(df['Thousands of Passengers'], model='additive')
fig, axs = plt.subplots(2, 2, figsize=(15,8))
axs[0, 0].plot(result.observed)
axs[0, 0].autoscale(axis='x', tight=True)
axs[0, 0].set_title('Observed')
axs[0, 1].plot(result.trend, 'tab:orange')
axs[0, 1].autoscale(axis='x', tight=True)
axs[0, 1].set_title('Trend')
axs[1, 0].plot(result.seasonal, 'tab:green')
axs[1, 0].autoscale(axis='x', tight=True)
axs[1, 0].set_title('Seasonal')
axs[1, 1].plot(result.resid, 'tab:red')
axs[1, 1].autoscale(axis='x', tight=True)
axs[1, 1].set_title('Residuals')
plt.show()

Statistical Test For Stationarity

Let's do one last step before training the model: check the stationarity of the data. Stationarity means the data's mean and variance do not change through time. In order to check it, we perform a statistical technique called the Augmented Dickey-Fuller (ADF) test.

  • In this test, if the p-value is less than the significance level (0.05 or 5%), there is strong evidence against the Null Hypothesis. So, we reject the Null Hypothesis and conclude that the data is stationary and has no unit root.
  • If the p-value is greater than the significance level (0.05 or 5%), there is weak evidence against the Null Hypothesis. So, we fail to reject the Null Hypothesis and conclude that the data is non-stationary and has a unit root.

Import the adfuller method from the statsmodels library to implement the ADF test. A custom function is defined for the ADF test so that it can be called multiple times as required.

Code:

from statsmodels.tsa.stattools import adfuller
def adf_test(series):
    result = adfuller(series)
    print("P Value: ", result[1])
    if result[1] <= 0.05:
        print("Strong evidence against Null Hypothesis. So, reject Null Hypothesis and conclude data is stationary.")
        return True
    else:
        print("Weak evidence against Null Hypothesis. So, fail to reject Null Hypothesis and conclude data is non-stationary.")
        return False
adf_test(df['Thousands of Passengers'])

If our data is non-stationary, we need to make it stationary before fitting the model to forecast the future. Converting non-stationary data to stationary data can be achieved by differencing the data with its time lag.

Automated Conversion From Non-stationary Data To Stationary Data

Let me define a custom function to automate the conversion of non-stationary data to stationary data and to display the d-value (how many times the data is differenced).

Code:

def convert_non_stationary_to_stationary(series):
    d = 0
    new_series = series
    while True:
        new_series = new_series - new_series.shift()
        new_series.dropna(inplace=True)
        d = d + 1
        if adf_test(new_series):
            print("d-value is", d)
            break
convert_non_stationary_to_stationary(df['Thousands of Passengers'])

Time-series Forecasting Models

Now, select the model based on the data. Before doing that, let me brief you on each model and its significance.

  • Auto Regression (AR) — It is the regression of the variable against itself (its lagged versions). The order of lags is represented by p. We obtain the p-value from the PACF plot.
  • Moving Average (MA) — It depends on the current observations and the lagged residual errors of the data. The order of lags is represented by q. We obtain the q-value from the ACF plot.
  • ARMA (AR+MA) — It is a combination of the Auto Regression and Moving Average models. It analyzes only stationary time-series data in the form of two polynomial equations, without differencing. Its order is defined by the (p, q) components.
  • ARIMA (AR+I+MA) — It combines the Auto Regression and Moving Average models with differencing, which converts non-stationary data to stationary data. Its order is defined by the (p, d, q) components.
  • SARIMAX (S+AR+I+MA+X) — It combines seasonal components with the ARIMA components and exogenous variables. It is used to forecast seasonal time-series data. Its order is defined by the (p, d, q) components and the seasonal (P, D, Q) components.

Exogenous variable

An exogenous variable is a variable that might affect the variables in the dataset but cannot be affected by them.

For example, weather can affect the yield of the crop but vice versa is not possible.

Select The Order Of The Model

Now comes the most crucial part: forecasting the future. Generally, you would select one of the models mentioned above, find the p, d, and q components from the ACF and PACF plots, then train the model, test it, and forecast the future. Most often, following these steps manually gets messy.

Auto ARIMA Model

Instead of that approach, it's better to go with an automated one: either the auto_arima method or the GridSearchCV method (hyper-parameter tuning).

Split the data into train and test datasets and fit the dataset to the auto_arima method.

Code:

train = df.iloc[:len(df)-30]
test = df.iloc[len(df)-30:]
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
from pmdarima import auto_arima
auto_arima(df['Thousands of Passengers'],seasonal=True,m=12).summary()

Observations :

Here, the m-value is based on seasonality: 12 for monthly data, 4 for quarterly data, and 1 for annual data. As we can see from the summary above, the best fit is the SARIMAX(2, 1, 1)x(0, 1, [], 12) model.

Fit The Model With Train Dataset

Let's take that order, fit the model with the train data, and predict on the test data.

Code:

model = SARIMAX(train['Thousands of Passengers'], order=(2, 1, 1), seasonal_order=(0, 1, [], 12))
results = model.fit()
results.summary()

Predict The Model With Test Dataset

Code:

start = len(train)
end = len(train) + len(test) - 1
predicted_values = results.predict(start=start, end=end)
ax = test['Thousands of Passengers'].plot(figsize=(12,5))
predicted_values.plot()
plt.legend()
ax.autoscale(axis='x', tight=True)

As we can see in the above plot, the model performs well. It almost fits the data.

Evaluate The Model

Now, evaluate the model with the test dataset and find the MSE, RMSE, MAE, and MAPE. Import these metrics from the sklearn library.

Code:

from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_absolute_percentage_error
print("mean_squared_error :",mean_squared_error(test['Thousands of Passengers'],predicted_values ))
print("root_mean_squared_error :",mean_squared_error(test['Thousands of Passengers'],predicted_values, squared=False))
print("mean_absolute_error :",mean_absolute_error(test['Thousands of Passengers'],predicted_values))
print("mean_absolute_percentage_error :",mean_absolute_percentage_error(test['Thousands of Passengers'],predicted_values))

As we can see above, the error values are low. So, we can conclude that our model performs well on the test data.

Forecast the Future…😎

Retrain the model of the same order with the entire data, and forecast the future.

Code:

model = SARIMAX(df['Thousands of Passengers'],order=(2, 1, 1),seasonal_order=(0, 1, [], 12))
results = model.fit()
results.summary()
predicted_values = results.predict(start=len(df), end=len(df)+30)
df['Thousands of Passengers'].plot(figsize=(12,6))
predicted_values.plot()
plt.legend()
plt.show()

Source Code

GitHub – balupeddireddy08/Univariate_Time_Series_Analysis

Conclusion

I would suggest you follow the same flow to perform time-series analysis on your data. I hope you enjoyed reading the article and found it helpful… 🤝🏻🤝🏻🤝🏻

Let me know if you have any doubts, and correct me if anything is wrong in this article. All suggestions are welcome…✌️

Happy Learning😎

