
An Insight of Marketing Analytics

Last Updated on January 21, 2022 by Editorial Team

Author(s): Saniya Parveez


Introduction

Many industry-leading companies already use data science to make better decisions and to improve their marketing analytics. With more industry data, greater availability of data sources, and lower storage and processing costs, an organization can now process large volumes of granular data using data science techniques, and leverage it to build sophisticated models, automate routine tasks, and extract customer insights with higher accuracy. Applying data science to marketing analytics is a cost-effective, practical way for many companies to understand the customer journey and deliver a more personalized experience.

Segmentation of Customer Data

Segmentation of customer data is the process of grouping (segmenting) target customers based on demographic or behavioural data so that marketing plans can be tailored more precisely to each group. It is also an important part of allocating marketing resources properly: by targeting specific customer groups, a higher return on marketing spend can be achieved.

Customer Data Clustering (Unsupervised Learning)

Unsupervised learning is a modern approach to customer data segmentation. It is well suited to the task because it finds the data points that are most similar to each other and groups them together, which is exactly what good customer segmentation should do.

Clustering is a kind of unsupervised machine learning that finds groups or clusters in data without knowing them ahead of time. The benefits of clustering are as follows:

  • It can discover customer groups that are unexpected or unknown to the data analyst.
  • It is flexible and can be applied to a broad range of data.
  • It reduces the need for deep expertise about the connections between customer demographics and behaviours.
  • It is quick to run and scales to very large datasets.

Limitations of clustering:

  • The customer groups created may not be easily interpretable.
  • If the data is not based on customer behaviour (for example, products or services purchased), it may not be obvious how to use the clusters that are found.

Similarity in Customer Data

To use clustering for customer segmentation, it is essential to define similarity, that is, to be very precise about what makes two customers similar.

Example:

Segmenting customers based on the kind of bread they tend to buy may not make sense if the company wants to design marketing strategies for selling clothes.

Customer behaviour, such as how customers have responded to marketing campaigns in the past, is usually the most important kind of data.

Standardizing Customers’ Data

To compare customers based on continuous variables, it is necessary to rescale these variables so that the data are on similar scales.

Example:

Let's take age and salary. These are measured on very different scales: a person's salary might be $90,000 while their age is 40 years. We therefore need to be precise about how big a change in one of these variables counts the same as a change in the other when measuring customer similarity. Making such judgments manually for each variable is impractical, so we standardize the data to put all variables on a common scale.

The z-score is a way to standardize variables for clustering, with the following steps:

  • Subtract the mean of the data from every data point.
  • Divide the result by the standard deviation of the data.

The standard deviation is a measure of how spread out the points are. The formula for the standardized value of a data point is:

z_i = (x_i − mean(x)) / std(x)

where z_i is the ith standardized value, x_i is the ith original value, mean(x) is the mean of all x values, and std(x) is the standard deviation of the x values. For example, for ages [50, 40, 30] with mean 40 and standard deviation 10, the standardized values are [1, 0, −1].

Example of standardizing age and income data of customers

The Python code below standardizes the age and income data of customers.

Import all required packages.

import numpy as np
import pandas as pd

Generate random customer income and age data.

np.random.seed(100)
df = pd.DataFrame()
df['salary'] = np.random.normal(80000, scale=10000, size=100)
df['age'] = np.random.normal(50, scale=10, size=100)
df = df.astype(int)
df.head()
Figure: Customer income and age data

Calculate the standard deviations of both columns at once using the std function.

df.std()
Figure: Standard deviations of salary and age

Calculate the means of the two columns.

df.mean()
Figure: Means of salary and age

Standardize the variables using their standard deviation and mean.

df['z_salary'] = (df['salary'] - df['salary'].mean())/df['salary'].std()
df['z_age'] = (df['age'] - df['age'].mean())/df['age'].std()
df.head()
Figure: Standardized variables

Check standardization.

df.std()
Figure: Standard deviations of the standardized columns
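
As an aside (not part of the original walkthrough), scikit-learn's StandardScaler performs the same transformation in one step. One subtlety: StandardScaler divides by the population standard deviation (ddof=0), while pandas' .std() uses the sample standard deviation (ddof=1), so the two results differ very slightly:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
# Standardize both columns at once; the result is a NumPy array of shape (100, 2).
z = scaler.fit_transform(df[['salary', 'age']])
print(z.mean(axis=0))  # ~0 for both columns
print(z.std(axis=0))   # exactly 1 for both columns (population std)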

Once the data is standardized, we need to calculate the similarity between customers. Typically, this is accomplished by measuring the distance between customers in the feature space. In a two-dimensional scatterplot, the Euclidean distance between two customers is simply the distance between their points.

Calculate the Distance Between Customer Data Points

Let's calculate the distance between three customers.

Import all required packages.

import math

Create age and salary data.

ages = [50, 40, 30]
salaries = [50000, 60000, 40000]

Calculate the distance between the first and second customers.

math.sqrt((ages[0] - ages[1])**2 + (salaries[0] - salaries[1])**2)
Figure: Distance between the first and second customers

Calculate the distance between the first and third customers.

math.sqrt((ages[0] - ages[2])**2 + (salaries[0] - salaries[2])**2)
Figure: Distance between the first and third customers

Here, the two distances in the output are nearly identical (both about 10,000), even though the third customer's age is twice as far from the first customer's as the second's is: the salary values are so much larger that they dominate the distance calculation.

Standardize the ages and salaries using their means and standard deviations.

z_ages = [(age - 40)/10 for age in ages]            # ages: mean 40, std 10
z_salaries = [(s - 50000)/10000 for s in salaries]  # salaries: mean 50000, std 10000

Again, calculate the distance between the standardized scores of the first and second customers.

math.sqrt((z_ages[0] - z_ages[1])**2 + (z_salaries[0] - z_salaries[1])**2)
Figure: Distance between the first and second customers after standardization

Calculate the distance between the standardized scores of the first and third customers.

math.sqrt((z_ages[0] - z_ages[2])**2 + (z_salaries[0] - z_salaries[2])**2)
Figure: Distance between the first and third customers after standardization

Here, after standardization, the distances are no longer dominated by salary: the distance between the first and third customers (≈2.24) is now clearly larger than the distance between the first and second (≈1.41), reflecting the differences in both age and salary.
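
Putting the four calculations side by side makes the effect of standardization easy to see (a quick check added here, not part of the original article):

import math

ages = [50, 40, 30]
salaries = [50000, 60000, 40000]
z_ages = [(a - 40) / 10 for a in ages]                # [1.0, 0.0, -1.0]
z_salaries = [(s - 50000) / 10000 for s in salaries]  # [0.0, 1.0, -1.0]

def dist(x, y, i, j):
    return math.sqrt((x[i] - x[j])**2 + (y[i] - y[j])**2)

print(dist(ages, salaries, 0, 1))      # ~10000.005: salary dominates
print(dist(ages, salaries, 0, 2))      # ~10000.020: nearly identical to the above
print(dist(z_ages, z_salaries, 0, 1))  # ~1.414 (sqrt 2)
print(dist(z_ages, z_salaries, 0, 2))  # ~2.236 (sqrt 5): a clearly larger distance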

K-means Clustering

k-means clustering is a very popular unsupervised learning method with a very wide range of applications. It is popular because it scales to very large datasets and tends to work quite well in practice.

k-means clustering is an algorithm that attempts to find the best way of grouping data points into k separate groups, where k is a parameter given to the algorithm. The algorithm then works iteratively to try to find the best grouping.

The algorithm proceeds as follows:

  • It starts by randomly picking k points in space to be the centroids of the clusters. Each data point is then assigned to the centroid closest to it.
  • The centroids are updated to be the mean of all the data points assigned to them, and the data points are then reassigned to the centroid closest to them.

Step two is repeated until none of the data points changes its assigned centroid after the centroids are updated. A minimal sketch of this loop follows below.
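
To make these steps concrete, here is a minimal NumPy sketch of the loop (an illustration only; the scikit-learn implementation used in the next example is the practical choice):

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick k random data points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign each point to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: move each centroid to the mean of its assigned points
        # (a real implementation would also handle empty clusters).
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Stop when the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids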

Example: k-means Clustering on Customer Salary and Age Data

Perform k-means clustering on customer salary and age data.

Import all required libraries.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn import cluster
%matplotlib inline

Import customer’s CSV data.

df = pd.read_csv('/content/customer.csv')
df.head()
Figure: Customer data

Create the standardized value columns for the salary and age values and store them in the z_salary and z_age variables.

df['z_salary'] = (df['salary'] - df['salary'].mean())/df['salary'].std()
df['z_age'] = (df['age'] - df['age'].mean())/df['age'].std()

Plot customer’s data.

plt.scatter(df['salary'], df['age'])
plt.xlabel('Salary')
plt.ylabel('Age')
plt.show()
Figure: Scatter plot of customer salary and age

Perform k-means clustering with four clusters.

model = cluster.KMeans(n_clusters=4, random_state=10)
model.fit(df[['z_salary','z_age']])
Figure: Fitted k-means model

Create a column called cluster that contains the label of the cluster each data point belongs to.

df['cluster'] = model.labels_
df.head()
Figure: Customer data after clustering

Plot the data.

colors = ['r', 'b', 'k', 'g']
markers = ['^', 'o', 'd', 's']
for c in df['cluster'].unique():
    d = df[df['cluster'] == c]
    plt.scatter(d['salary'], d['age'], marker=markers[c], color=colors[c])
plt.xlabel('Salary')
plt.ylabel('Age')
plt.show()
Figure: k-means clustering of customer data

This produces a plot of the data, with colour and shape indicating which cluster each data point is assigned to.
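
One practical question this example glosses over is how to choose k. A common heuristic (not covered in the original article) is the elbow method: fit models for a range of k values, plot each model's inertia_ (the within-cluster sum of squared distances), and pick the k where the curve bends. A sketch, reusing the cluster module imported above:

inertias = []
ks = range(1, 11)
for k in ks:
    m = cluster.KMeans(n_clusters=k, random_state=10)
    m.fit(df[['z_salary', 'z_age']])
    inertias.append(m.inertia_)  # within-cluster sum of squared distances

plt.plot(ks, inertias, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('Inertia')
plt.show()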

High-Dimensional Data and Dimensionality Reduction

It is common to have data with more than just two dimensions. If we also had information about how these customers responded to promotional sales, how many purchases they had made, or how many people lived in their household, we would have many more dimensions.

When data has more dimensions, it becomes more challenging to visualize. This is where dimensionality reduction comes in: multi-dimensional data is reduced, normally to two dimensions, for visualization purposes, while trying to preserve the distances between the points.

Principal component analysis (PCA) is commonly used to perform dimensionality reduction. PCA is a method of transforming the data: it takes the original dimensions and creates new dimensions that capture the most variance in the data.

Figure: PCA functionality

Example: Performing Dimensionality Reduction of High-Dimensional Data Using PCA

Import all required packages.

import pandas as pd
from sklearn import cluster
from sklearn import decomposition
import matplotlib.pyplot as plt
%matplotlib inline

Import customer’s CSV data.

df = pd.read_csv('/content/pca_data.csv')
df.head()
Figure: Customer data

Standardize the three columns and save the names of the standardized columns in a list.

cols = df.columns
zcols = []
for col in cols:
    df['z_' + col] = (df[col] - df[col].mean())/df[col].std()
    zcols.append('z_' + col)
df.head()
Figure: Standardized data

Perform k-means clustering on the standardized scores.

model = cluster.KMeans(n_clusters=4, random_state=10)
df['cluster'] = model.fit_predict(df[zcols])

Perform PCA on data.

pca = decomposition.PCA(n_components=2)
df['pc1'], df['pc2'] = zip(*pca.fit_transform(df[zcols]))
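
Before plotting, it can be worth checking how much information the two components retain. This check is an addition, not part of the original article; explained_variance_ratio_ is a standard attribute of a fitted scikit-learn PCA:

print(pca.explained_variance_ratio_)        # variance captured by pc1 and pc2 individually
print(pca.explained_variance_ratio_.sum())  # total share of variance kept in the 2D view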

Plot the clusters in the reduced dimensionality space.

colors = ['r', 'b', 'k', 'g']
markers = ['^', 'o', 'd', 's']
for c in df['cluster'].unique():
    d = df[df['cluster'] == c]
    plt.scatter(d['pc1'], d['pc2'], marker=markers[c], color=colors[c])
plt.show()
Figure: Plot of clusters in PCA space

In this plot, the x and y axes are principal components and consequently are not easily interpretable. But by visualizing the clusters, we can get a sense of how good they are based on how much they overlap.

Conclusion

Unsupervised machine learning is an excellent modern technique for customer segmentation, and k-means clustering is a commonly used, fast, and easily scalable clustering algorithm. Exploratory data processing is also an important part of any data science work: presenting analysis and creating visualizations that make the processing easy to understand is an excellent way to understand customer data, and libraries such as Matplotlib and Seaborn are well suited to creating such visualizations.

When we develop an analytics pipeline, the first step is to build a data model. A data model is a summary of the data sources we will be working with: their relationships to other data sources, where exactly the data from a specific source enters the pipeline, and in what format (for example, an Excel file, a database, JSON from an internet source, or a REST API). The data model for the pipeline evolves over time as data sources and methods change.

Marketing data traditionally comprises structured, semi-structured, and unstructured data. Originally, most data points came from different (mainly manual) data sources, so the values for a field could have different lengths, values for one field might not match those of other fields because of varying field names, and some rows, even from the same source, could have missing values for some fields. Today, thanks to technology, structured and semi-structured data is widely available and is frequently used for analytics, while unstructured, schema-free data is increasingly common.

Data processing and wrangling are the first, and very valuable, parts of the data science pipeline. It is especially important for data engineers and data scientists preparing data to have some domain knowledge about it, and data processing often demands innovative solutions and techniques. Once engineers are confident that the project data is arranged correctly, combined with other data sources, rid of duplicates and unwanted columns, and free of missing data, it is ready for analysis and modelling and can feed directly into a data science pipeline.

