Cross-Selling Web App on Streamlit Cloud

Last Updated on July 17, 2023 by Editorial Team

Author(s): Claudio Giorgio Giancaterino

Originally published on Towards AI.

Insurance companies are becoming data-driven, with marketing assuming a strategic role in a company's growth.

This project builds a little more knowledge of cross-selling strategy from a data science and actuarial point of view, and deploys a web app on Streamlit Cloud to share the results.

There are many ways to generate additional revenue for a company: introducing new products, offering additional services, or even raising prices. One common technique is cross-selling, which can increase customer lifetime value.

In this project, using a dataset from a hackathon, the goal is to predict whether a customer from the past year will also be interested in the vehicle insurance coverage provided by the company.

This goal is relevant for insurance companies because they are becoming data-driven and customer-oriented, following strategies adopted in other industries. Cross-selling modeling can therefore help insurance companies raise revenue.

Cross-selling and up-selling are familiar marketing terms, but what is the difference between them?

Both cross-selling and up-selling are important tools for increasing sales volume per customer. Cross-selling involves selling additional items related or complementary to a previously purchased item, while up-selling involves increasing order volume, either by selling more units of the same item or by upgrading to a more expensive version.

While these sales techniques are relatively old, their practice has changed with the advent of customer relationship management (CRM) and the use of information technology. I suggest reading Kamakura's paper for more details on cross-selling.

Exploratory Data Analysis

Let's look at the dataset, retrieved from the Kaggle platform: it is composed of 12 variables (including the outcome and the id) and 381,109 rows. You can follow the code in the notebook.

This is a binary classification task: the outcome takes class "1" for policyholders interested in purchasing the vehicle insurance and class "0" for policyholders not interested in it.

import matplotlib.pyplot as plt
import seaborn as sns

def catcharts(data, col1, col2):
    """Pie chart and barplot of a categorical variable's distribution."""
    plt.rcParams['figure.figsize'] = (15, 5)

    # Pie chart of the class shares
    plt.subplot(1, 2, 1)
    data.groupby(col1).count()[col2].plot(
        kind='pie', autopct='%.0f%%'
    ).set_title("Pie {} Variable Distribution".format(col1))

    # Barplot of the class counts
    plt.subplot(1, 2, 2)
    sns.countplot(x=col1, data=data).set_title(
        "Barplot {} Variable Distribution".format(col1))

    plt.show()
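For example, the target distribution can be drawn with this helper; a usage sketch, assuming the dataframe is named df and the target column is "Response", as in the Kaggle dataset:

catcharts(df, 'Response', 'id')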

The target variable shows imbalanced classes, where only 12% of policyholders would buy the vehicle coverage.

Looking at the features, there are three numerical variables (id is dropped): Age, Annual Premium, and Vintage (the number of days the policyholder has been in the company's portfolio).

from scipy import stats

def numcharts(data, var):
    """Histogram, boxplot, and Q-Q plot of a numerical variable."""
    plt.rcParams['figure.figsize'] = (15, 5)

    # Histogram
    plt.subplot(1, 3, 1)
    x = data[var]
    plt.hist(x, color='green', edgecolor='black')
    plt.title('{} histogram'.format(var))
    plt.xticks(rotation=45)

    # Boxplot
    plt.subplot(1, 3, 2)
    sns.boxplot(x=x, color="orange")
    plt.title('{} boxplot'.format(var))
    plt.xticks(rotation=45)

    # Q-Q plot against the normal distribution
    plt.subplot(1, 3, 3)
    stats.probplot(data[var], plot=plt)
    plt.title('{} Q-Q plot'.format(var))
    plt.xticks(rotation=45)

    plt.show()

An interesting observation is that the health-coverage policyholders in this analysis are largely young (the median age is 36). Moreover, looking at the bivariate analysis between the numerical features and the target variable, Age seems more predictive than the others, given its positive relationship with the outcome.

Then there are seven categorical features: Gender, Driving License, Previously Insured, Vehicle Age, Vehicle Damage, Policy Sales Channel, and Region Code. The last two are dropped because they are not useful for modeling, given that their data are spread across many classes: in Policy Sales Channel, roughly 70% of the data is covered by 3 channels, while in Region Code, roughly 40% of the data falls in 2 regions and the rest is scattered across many other minor region codes.

Looking at the other variables, Gender shows a prevalence of male policyholders: 54% male vs. 46% female. Almost all policyholders have a driving license, and they own young vehicles: 53% of vehicles are in the 1–2 year range. Most policyholders were not previously insured with the company: 54% not previously insured vs. 46% previously insured. In the last feature, vehicles with and without damage are equally distributed in the portfolio.

Moving to the bivariate analysis, the relationships worth attention are those between the Previously Insured and Vehicle Damage features and the outcome. Positive responses are concentrated among customers who were not previously insured, and likewise among customers with vehicle damage. The first variable can have an impact on cross-selling prediction; the second can matter from the actuarial point of view, as a possible risk factor in building the vehicle insurance tariff. Staying with the actuarial view, Age, Annual Premium, Gender, Vehicle Age, and Vehicle Damage are features you would find in a typical car dataset used to build a car insurance premium.

Data Preparation

Before feeding the input features to each model, there is a pre-processing step: data transformation and splitting the dataset into train and test. I applied the same pipeline for all models. Outliers in the numerical features were capped to avoid biased results, and target encoding was applied to the categorical features instead of classical one-hot encoding to improve model performance. Predictors with zero variance were then removed, because they carry no information for the models, and correlated predictors were removed to mitigate multicollinearity and improve model stability, though ensemble trees are less sensitive to correlated predictors. Finally, I applied feature scaling to normalize the ranges of the input variables to a similar scale, which helps models that are sensitive to feature magnitude.
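A minimal sketch of this preprocessing, assuming the data is already split into X_train/X_test and y_train and that the column names match the Kaggle dataset; the capping quantiles and the use of scikit-learn's TargetEncoder (available from version 1.3) are my assumptions, not necessarily the article's exact choices:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import TargetEncoder, StandardScaler

num_cols = ['Age', 'Annual_Premium', 'Vintage']
cat_cols = ['Gender', 'Driving_License', 'Previously_Insured',
            'Vehicle_Age', 'Vehicle_Damage']

def cap_outliers(df, cols, lower=0.01, upper=0.99):
    # Cap numerical outliers at illustrative 1st/99th percentiles
    capped = df.copy()
    for col in cols:
        lo, hi = df[col].quantile([lower, upper])
        capped[col] = df[col].clip(lo, hi)
    return capped

X_train = cap_outliers(X_train, num_cols)
X_test = cap_outliers(X_test, num_cols)

# Target encoding for categoricals, scaling for numericals;
# zero-variance and correlated-predictor removal are omitted for brevity
preprocess = ColumnTransformer([
    ('target_enc', TargetEncoder(), cat_cols),
    ('scale', StandardScaler(), num_cols),
])
X_train_prep = preprocess.fit_transform(X_train, y_train)
X_test_prep = preprocess.transform(X_test)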

Modelling and evaluation

The data preparation described above suits Logistic Regression (LR), used as the benchmark model for this task. Logistic Regression is commonly used as a reference both in insurance and in machine learning comparisons because it is a calibrated model, an important property when evaluating performance.

For this job, Logistic Regression (LR) has been compared with the Gaussian Naive Bayes model (GNB) and the Histogram-Based Gradient Boosting Machine (HGBM).

The choice of a gradient boosting algorithm comes from the fact that it is one of the best-performing models, usually more competitive than neural networks on tabular data. The choice of Naive Bayes comes from the fact that it is another common machine learning model, with different assumptions than Logistic Regression.
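A sketch of the comparison under these choices, reusing the preprocessed X_train_prep and y_train from the previous step; the 5-fold cross-validation setup is an illustrative assumption:

from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

models = {
    'LR': LogisticRegression(max_iter=1000),
    'GNB': GaussianNB(),
    'HGBM': HistGradientBoostingClassifier(random_state=0),
}
# Compare the three classifiers on the same folds with AUC as the metric
for name, model in models.items():
    scores = cross_val_score(model, X_train_prep, y_train,
                             scoring='roc_auc', cv=5)
    print('{}: mean AUC = {:.4f} (+/- {:.4f})'.format(
        name, scores.mean(), scores.std()))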

For this competition, the Area Under the ROC Curve (AUC) is the evaluation metric. It takes values between 0 and 1, with higher values being better, but, like the Gini index, it is not calibration-sensitive: it ignores the marginal distribution of the outcome and can therefore lead to wrong decisions. For this reason, it is necessary to have a calibrated model when you look at the evaluation score.

So, what is calibration? Generally speaking, calibration is a process used to improve the reliability of a model's estimated probabilities. A model is said to be well-calibrated when its predicted probabilities are close to the true probabilities of the events it is predicting. Logistic Regression is considered a calibrated model because it directly predicts the probabilities of the outcome rather than predicting class labels based on a threshold. Each model has been evaluated for calibration alongside performance, and, where needed, Platt Scaling has been applied to obtain a well-calibrated classifier. Platt Scaling transforms the outputs of a classification model into a probability distribution over classes, assuming the estimated probabilities follow a sigmoid function and fitting a logistic regression model to map the predicted probabilities to the true probabilities.

from sklearn.calibration import calibration_curve

def calibration(model, xdata, ydata, model_name):
    """Plot the calibration (reliability) curve of a fitted classifier."""
    plt.rcParams['figure.figsize'] = (15, 5)

    # Predicted probabilities of the positive class
    probabilities = model.predict_proba(xdata)
    predicted_probabilities = probabilities[:, 1]

    # Generate the calibration curve data: observed frequency per probability bin
    fraction_of_positives, mean_predicted_value = calibration_curve(
        ydata, predicted_probabilities, n_bins=10)

    # Plot the calibration curve against the perfectly calibrated diagonal
    plt.plot(mean_predicted_value, fraction_of_positives, marker='.')
    plt.plot([0, 1], [0, 1], linestyle='--')
    plt.xlabel('Predicted probability')
    plt.ylabel('Observed frequency')
    plt.title('{} Calibration Curve'.format(model_name))

    plt.show()

Platt Scaling was applied only to the Gaussian Naive Bayes model, and the resulting charts show that all classifiers look calibrated (the predicted-probability curve follows the dashed diagonal of the true positive-label frequency).
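In scikit-learn, Platt Scaling corresponds to CalibratedClassifierCV with method='sigmoid'; a sketch of how the GNB correction could look (the cv value is an assumption):

from sklearn.calibration import CalibratedClassifierCV
from sklearn.naive_bayes import GaussianNB

# Platt Scaling: fit a logistic regression on the classifier's outputs
# to map them to calibrated probabilities
gnb_calibrated = CalibratedClassifierCV(GaussianNB(), method='sigmoid', cv=5)
gnb_calibrated.fit(X_train_prep, y_train)

# Re-check the reliability curve with the helper defined above
calibration(gnb_calibrated, X_test_prep, y_test, 'Calibrated GNB')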

Looking at the results, the best model is the Histogram-Based Gradient Boosting Machine (HGBM), which I chose as the final model, fine-tuning both its hyperparameters and the classification threshold.
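A sketch of such a tuning step, with an illustrative search space and F1 as an assumed threshold-selection criterion (the article does not specify either):

import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import f1_score

param_dist = {
    'learning_rate': [0.01, 0.05, 0.1],
    'max_iter': [100, 200, 500],
    'max_leaf_nodes': [15, 31, 63],
    'l2_regularization': [0.0, 0.1, 1.0],
}
search = RandomizedSearchCV(
    HistGradientBoostingClassifier(random_state=0),
    param_dist, n_iter=20, scoring='roc_auc', cv=5, random_state=0)
search.fit(X_train_prep, y_train)
best_model = search.best_estimator_

# Threshold optimization: scan candidate cut-offs and keep the best one
probs = best_model.predict_proba(X_test_prep)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_threshold = max(thresholds, key=lambda t: f1_score(y_test, probs >= t))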

Looking at the barplot, the "0" class is slightly underestimated in the predictions, while the "1" class is overestimated.

For feature importance, I used the SHAP algorithm. According to SHAP, the Previously Insured feature is the most relevant in predicting the outcome, followed by Age. This validates what I previously observed in the Exploratory Data Analysis. The forecast concerns whether a customer is interested in a supplementary product, so Previously Insured can play a relevant role in that decision: happy customers may be positively inclined to buy complementary products, while unhappy customers prefer to leave.
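A sketch of computing SHAP values for the tuned HGBM with the model-agnostic explainer; the background-sample size is an arbitrary assumption:

import shap

# Explain the positive-class probability on a background sample
X_sample = shap.utils.sample(X_test_prep, 500, random_state=0)
explainer = shap.Explainer(lambda X: best_model.predict_proba(X)[:, 1], X_sample)
shap_values = explainer(X_sample)

# Global importance: beeswarm plot of the SHAP values
shap.plots.beeswarm(shap_values)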

Data Segmentation

Prediction is important, and it was the goal of the competition. Still, I went further and looked at other aspects: in a second step, I profiled the customers interested in purchasing the coverage using K-Means, the most common clustering method, on the numerical features, and then applied this split to the overall dataset.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

np.random.seed(0)
for n_cluster in range(2, 10):
    # Fit K-Means on the scaled numerical features and score the clustering
    clustering = KMeans(n_clusters=n_cluster, random_state=0).fit(num_sc)
    preds = clustering.predict(num_sc)
    silhouette_avg = silhouette_score(num_sc, preds)

    print('Silhouette Score for %i Clusters: %0.4f' % (n_cluster, silhouette_avg))

Based on the silhouette score, the customers interested in buying the vehicle coverage can be profiled into 4 clusters. This makes it possible to understand the relationships between numerical features such as Annual Premium and Age and the other, categorical features.
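A sketch of fixing the final segmentation at 4 clusters and attaching the labels back, where interested is assumed to be the dataframe of interested customers aligned with the scaled matrix num_sc, and the column names are assumptions:

from sklearn.cluster import KMeans

# Fit the final K-Means with the 4 clusters suggested by the silhouette score
kmeans = KMeans(n_clusters=4, random_state=0).fit(num_sc)
interested['cluster'] = kmeans.labels_

# Profile each cluster on the numerical features
print(interested.groupby('cluster')[['Age', 'Annual_Premium']].median())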

While about 70% of the Annual Premium falls in the first two clusters, Age is more or less equally distributed across the clusters, except for the first one, which covers about 30% of the people interested in purchasing the vehicle coverage.

From the above charts, my attention goes to the relationship between Annual Premium and Age vs. Previously Insured and Vehicle Damage. They confirm what was seen before: the interested people were not previously insured with the company, while previously insured customers, perhaps not satisfied, are not interested in purchasing the complementary coverage.

Previously Insured is the most relevant feature in the prediction because, intuitively, it is linked to the customers' satisfaction with the company. Vehicle Damage is not relevant for the prediction, but it can have an impact on the following steps of the company's process: people interested in purchasing the complementary coverage would pay an annual premium for vehicles with damage, and intuitively they may have had claims, a poor driving style, and so on. Given that Vehicle Damage can be used as a variable in insurance pricing, this feature is a risk factor to consider and deserves deeper analysis, because it could affect the profitability of the insurance tariff.

Building the app

https://tinyurl.com/bszhxuw7

In the last step, I deployed the work from the Jupyter notebook into an app, essentially to share the results, so an inference page is not developed. Building an app with Streamlit and deploying it to Streamlit Cloud is not a hard job, though I faced some challenges.

I built a multipage app. It requires a first file, "1_Cross_Selling_App.py", as the home page, plus the other Python files that appear in the menu, saved in the pages folder of your repository. You then connect these files (requirements.txt included) to the cloud, and that's it: deployment starts.
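A minimal sketch of that layout and of the home page, with hypothetical page names:

# Repository layout expected by Streamlit's multipage convention:
#
#   1_Cross_Selling_App.py    <- home page
#   requirements.txt
#   pages/
#       2_EDA.py              <- hypothetical page names
#       3_Modelling.py
#       4_Segmentation.py

# 1_Cross_Selling_App.py
import streamlit as st

st.set_page_config(page_title='Cross-Selling App', layout='wide')
st.title('Cross-Selling Web App')
st.write('Results of the cross-selling analysis; use the sidebar menu to browse the pages.')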

The visualization charts required only small code modifications, because the matplotlib and seaborn libraries I used for data visualization are supported by Streamlit.

Deploying the machine learning models was more challenging. At first, I replicated the notebook's structure and ran the code in the app, but given the limited memory available in the cloud, the pages took a long time to render. The solution was to save the fitted models and run only the code that visualizes the results.
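A sketch of this pattern: persist the fitted model once in the notebook, then load it in the app and cache it so it is deserialized only once per session (the path is hypothetical):

import joblib
import streamlit as st

# In the notebook, after training:
# joblib.dump(best_model, 'models/hgbm.joblib')

@st.cache_resource  # cache across reruns so the model loads only once
def load_model(path='models/hgbm.joblib'):
    return joblib.load(path)

model = load_model()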

In the process, I started by building the app locally and then moved to the cloud, and the hardest challenge was exactly in this step, because what worked locally didn't work in the cloud. It was challenging, but in the end the solution came, and now the app is live!

Final thoughts

The marketing field is growing thanks to data science, and it is changing in insurance too. The use of modern machine learning is welcome because it gives more accurate predictions (see HGBM in this job), helping to allocate the marketing budget better. Actuaries can play a relevant role both in prediction and in segmentation by providing their expert judgment: in this process, we have seen many features that are also employed in the actuarial workflow to develop products. Actuaries can provide risk evaluation and deep analysis of the features used to build an insurance tariff, acting as a link between the actuarial and marketing functions!
Enjoy the app 😊

References

Notebook

App

Dataset: Health Insurance Cross Sell Prediction 🏠 🏥 | Kaggle

Cross-Selling

Logistic Regression

Gaussian Naive Bayes

Histogram-Based Gradient Boosting

Calibration

SHAP

K-Means clustering


Published via Towards AI
