
Country Recognition and Geolocated Sentiment Analysis Using the RoBERTa Model

Last Updated on February 10, 2025 by Editorial Team

Author(s): Pedro Markovicz

Originally published on Towards AI.


Have you ever wondered how public opinion about a country shapes its global image? From travel reviews to political debates on social media, people’s opinions often carry an emotional tone that can reveal intriguing regional patterns. What if we could map these sentiments across the globe and uncover insights that go beyond just words?

That’s where Geolocated Sentiment Analysis comes in. Sentiments aren’t just personal; they’re shaped by culture, region, and national identity. By building an up-to-date dataset of social media comments and mapping each one to the countries it mentions and the sentiment it expresses, we can gain deeper insight into these emotions.

In essence, this study combines Country Recognition with Sentiment Analysis, leveraging the RoBERTa NLP model for Named Entity Recognition (NER) and Sentiment Classification to explore how sentiments vary across different geographical regions.

Data Extraction

This project explores how data from Reddit, a widely used platform for discussions and content sharing, can be utilized to analyze global sentiment trends.

The data collection was performed using PRAW (Python Reddit API Wrapper), which enabled the extraction of relevant content from communities focused on Travel, News, Continents, and Countries, providing the raw material for the global sentiment analysis explored in this project.

The PRAW API enables the extraction of the following data:

  • Title: The title of the post provided by the original poster;
  • Comment: A specific comment made by a user on the post;
  • Flair: The category or tag selected by the original poster for the post;
  • Date: The exact year, month, and day the post was created.
# Import the necessary libraries
import praw
import datetime
import pandas as pd

# Initialize Reddit API connection
reddit = praw.Reddit(
    client_id="your_client_id",
    client_secret="your_client_secret",
    user_agent="your_user_agent",
)

# Define subreddit and fetch top posts
subreddit_name = 'travel'
subreddit = reddit.subreddit(subreddit_name)
posts = subreddit.top(limit=None)

# Set criteria for collecting posts and comments
min_comments_per_post = 2
max_rows = 50000
data = []

# Iterate over the top posts
for post in posts:
    if post.num_comments >= min_comments_per_post:
        post_data = {
            'title': f"{post.title} / {post.selftext}",
            'date': datetime.datetime.utcfromtimestamp(post.created_utc),
            'flair': post.link_flair_text,
        }

        # Extract comments from the post (skipping "load more comments" placeholders)
        comments = [comment.body for comment in post.comments if isinstance(comment, praw.models.Comment)]

        # Store each comment along with the post data
        for comment in comments:
            data.append({
                'title': post_data['title'],
                'comment': comment,
                'date': post_data['date'],
                'flair': post_data['flair'],
            })

            # Stop collecting data once the row limit is reached
            if len(data) >= max_rows:
                break

    # Break out of the main loop if the row limit is reached
    if len(data) >= max_rows:
        break

# Create a DataFrame from the collected data
df = pd.DataFrame(data, columns=['title', 'comment', 'date', 'flair'])

The figure below presents an example, showcasing the first five unique entries extracted from the dataset.

The Challenges of NLP: NER and Sentiment Analysis

Extracting emotional signals from text and associating them with specific regions isn’t straightforward. It involves tackling two main NLP challenges:

  • Named Entity Recognition (NER): Identifying and linking mentions of countries within text.
  • Sentiment Analysis: Determining whether the emotional tone of the text is positive, negative, or neutral.

Both challenges will be addressed using RoBERTa, a state-of-the-art Transformer-based machine learning model. RoBERTa is an optimized variant of BERT, designed to improve the pretraining process and fine-tune hyperparameters, leading to enhanced performance across a wide range of natural language processing tasks.

Named Entity Recognition (NER)


To address the NER challenge, a pre-trained RoBERTa model designed for Named Entity Recognition tasks was utilized. The model, sourced from Hugging Face, is Jean-Baptiste/roberta-large-ner-english.

To optimize the model’s search process, the algorithm initially iterates through lists and dictionaries containing cities, capitals, demonyms, and political leaders associated with the respective countries, considering both the title and comment columns. Any values it fails to identify are then passed to the NER model for further processing.
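
The lookup structures referenced in the code below (demonym_mapping, city_to_country, countries_list) are not shown in the article. The following is a small illustrative sketch of the shape they are assumed to take; in practice they would be built from a full reference dataset of countries, cities, demonyms, and political leaders rather than hard-coded.

# Illustrative only: assumed structure of the lookup tables used by the
# extraction code below (the real tables would cover all countries).
countries_list = ["Brazil", "France", "Japan", "United States"]

# Demonyms (and, in the same spirit, political leaders) mapped to their country
demonym_mapping = {
    "brazilian": "Brazil",
    "french": "France",
    "japanese": "Japan",
    "american": "United States",
}

# Cities and capitals mapped to their country
city_to_country = {
    "rio de janeiro": "Brazil",
    "paris": "France",
    "tokyo": "Japan",
    "new york": "United States",
}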

# Import the necessary libraries
import re
import pandas as pd
from transformers import pipeline

def extract_countries(row, demonym_mapping, city_to_country, countries_list, ner_pipeline):
    # Helper function to search for countries/cities/demonyms in the text
    def find_matches_in_text(text, search_dict):
        found_countries = set()
        for key, country in search_dict.items():
            pattern = r'\b' + re.escape(key.lower()) + r'\b'
            if re.search(pattern, text):
                found_countries.add(country)
        return found_countries

    # Prepare the text (title and comment columns from the collected DataFrame)
    text = row['title'].lower() + ' ' + row['comment'].lower()

    # Search in the static list of countries
    found_countries = find_matches_in_text(text, {country: country for country in countries_list})

    # Search for demonyms and political leaders
    found_countries.update(find_matches_in_text(text, demonym_mapping))

    # Search for cities
    found_countries.update(find_matches_in_text(text, city_to_country))

    # If no country is found, use the RoBERTa model
    if not found_countries:
        found_countries.update(extract_countries_with_roberta(text, ner_pipeline, countries_list))

    return list(found_countries) if found_countries else None

def extract_countries_with_roberta(text, ner_pipeline, countries_list):
    # Apply the NER pipeline to the text
    found_countries = set()
    ner_results = ner_pipeline(text)

    # Iterate over the entities recognized by the NER model
    for entity in ner_results:

        # Check if the recognized entity is a location
        if 'LOC' in entity.get('entity', ''):
            entity_text = entity['word'].lower()

            # Iterate over the list of countries and check if any of them is contained in the recognized entity
            for country in countries_list:
                if country.lower() in entity_text:
                    found_countries.add(country)

    return found_countries

# Load the NER pipeline with the RoBERTa model
ner_pipeline = pipeline("ner", model="Jean-Baptiste/roberta-large-ner-english")

# Apply the extract_countries function to the DataFrame
df['countries'] = df.apply(lambda row: extract_countries(
    row=row,
    demonym_mapping=demonym_mapping,
    city_to_country=city_to_country,
    countries_list=countries_list,
    ner_pipeline=ner_pipeline
), axis=1)

Before feeding the text data into the NER model, a thorough cleaning is performed to ensure it’s optimally prepared. This cleaning process includes removing special characters, converting all text to lowercase, and eliminating stopwords, all of which improve the efficiency of the tokenization process.
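
The cleaning code itself is not included in the article; below is a minimal sketch of what such a step might look like, assuming NLTK’s English stopword list (an assumption, not necessarily the author’s exact implementation).

# Hypothetical cleaning step: lowercase, strip special characters, remove stopwords
import re

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))

def clean_text(text: str) -> str:
    text = text.lower()                                        # convert to lowercase
    text = re.sub(r"[^a-z0-9\s]", " ", text)                   # remove special characters
    tokens = [t for t in text.split() if t not in STOPWORDS]   # drop stopwords
    return " ".join(tokens)

# In the pipeline this would run before the country-extraction step shown above, e.g.:
# df['title'] = df['title'].apply(clean_text)
# df['comment'] = df['comment'].apply(clean_text)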

Once the data is cleaned, it passes through the country recognition pipeline, where a new column titled countries is added to the dataset. This column stores a list of all the countries mentioned in the text.

Sentiment Analysis

For the Sentiment Analysis challenge, a pre-trained RoBERTa model specifically designed for sentiment classification was used. The model, sourced from Hugging Face, is cardiffnlp/twitter-roberta-base-sentiment-latest.

# Import the necessary libraries
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

# Load the tokenizer and pre-trained RoBERTa model for sentiment analysis
tokenizer = RobertaTokenizerFast.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment-latest")
model = RobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment-latest")

# Function to calculate sentiment
def calculate_sentiment(comment_text):
    # Tokenize and prepare the input data
    encoding = tokenizer(comment_text, return_tensors='pt', padding=True, truncation=True, max_length=128)
    input_ids = encoding['input_ids']
    attention_mask = encoding['attention_mask']

    # Make prediction without gradients
    with torch.no_grad():
        outputs = model(input_ids=input_ids, attention_mask=attention_mask)
        _, preds = torch.max(outputs.logits, dim=1)

    return preds.item()

# Apply the function to the comments in the DataFrame
df['sentiment'] = df['comment'].apply(calculate_sentiment)

By integrating this pre-trained model into the pipeline, the sentiment score is computed for the text in the comment column. A score of 0 indicates negative sentiment, 1 indicates neutral sentiment, and 2 indicates positive sentiment.
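
As a sanity check, the same mapping can be read from the model configuration instead of hard-coding it; a minimal sketch, assuming the model object loaded above and the sentiment column just computed:

# Read the label names from the model config (for this model they are expected
# to be {0: 'negative', 1: 'neutral', 2: 'positive'}) and store a readable label.
label_map = {int(i): name for i, name in model.config.id2label.items()}
print(label_map)

df['sentiment_label'] = df['sentiment'].map(label_map)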

Dataset Creation

Once all the steps are completed, a new pipeline is established, designed to generate a dataset that includes sentiment scores and the countries identified from the text of Reddit posts and comments.

By applying this pipeline to other subreddits, such as those focused on Traveling Tips, World News, Continents and Countries, user sentiment can be captured. This approach enables a structured analysis of sentiment towards the mentioned countries, providing valuable insights into users’ perspectives across various discussions.

At the time of publication, a total of 444,059 up-to-date unique comments had been collected, categorized by country, and classified by sentiment using the pipeline. After expanding the countries column so that each comment appears once per country it mentions, the dataset grew to 870,066 comment-country rows.
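
The expansion step is not shown explicitly; here is a minimal pandas sketch of how one comment mentioning several countries could be turned into one row per comment-country pair (the name df_expanded is introduced here purely for the later visualization sketches):

# Expand the list-valued 'countries' column: one row per (comment, country) pair
df_expanded = (
    df.dropna(subset=['countries'])   # keep only comments with at least one detected country
      .explode('countries')           # duplicate each comment once per country it mentions
      .rename(columns={'countries': 'country'})
      .reset_index(drop=True)
)
print(len(df_expanded), "comment-country rows")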

Data Visualization

Now that the dataset is cleaned and prepared, it’s time to visualize the results to uncover insights and trends. In this section, the Plotly library will be used to explore the data further.

Let’s start by analyzing the volume of comments for each country.
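
A minimal sketch of one way to draw this map with Plotly, assuming the df_expanded table from the sketch above and that the detected country names match Plotly’s built-in 'country names' location mode:

import plotly.express as px

# Count comments per country
volume = (
    df_expanded.groupby('country')
               .size()
               .reset_index(name='num_comments')
)

# Choropleth of comment volume per country
fig = px.choropleth(
    volume,
    locations='country',
    locationmode='country names',
    color='num_comments',
    color_continuous_scale='Blues',
    title='Number of Reddit comments per country',
)
fig.show()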

[Click here to view the interactive map]
Top 25 countries with the highest number of comments

Sentiment Over Time

In this section, let’s explore how sentiments have evolved over time. By pairing sentiment scores with post dates, it becomes possible to trace how sentiment toward different countries has shifted.

Visualizing sentiments in relation to the timeline reveals significant changes in tone and emotional context. Specific periods can also be explored where sentiment shifts may correlate with historical events.

For deeper insights, the data can be filtered by individual countries, allowing for the uncovering of unique sentiment trends and potentially connecting these shifts to major geopolitical events that occurred during those times.
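
A minimal sketch of one possible time-series view, assuming the df_expanded table from earlier and a hypothetical country chosen purely for illustration (the author’s exact plotting code is not shown):

import plotly.express as px

# Average sentiment per month (0 = negative, 1 = neutral, 2 = positive)
# for a single country of interest.
country_of_interest = 'France'   # hypothetical example

subset = df_expanded[df_expanded['country'] == country_of_interest].copy()
subset['month'] = subset['date'].dt.to_period('M').dt.to_timestamp()

monthly = (
    subset.groupby('month')['sentiment']
          .mean()
          .reset_index()
)

fig = px.line(
    monthly,
    x='month',
    y='sentiment',
    title=f'Average sentiment over time for {country_of_interest}',
)
fig.show()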

Global Sentiment Overview

Next, let’s analyze the sentiment tone for each country. We’ll start by examining the broader sentiment trends before diving into more specific details.

By selecting the column for flair and associating it with the sentiment score, the overall sentiment associated with each post’s tag can be analyzed.

Upon reviewing the comment-volume graphs, it’s clear that the number of comments varies significantly across countries. To account for this, sentiment values are calculated relative to the total number of comments for each country, ensuring that countries with fewer comments are not overshadowed by those with larger volumes.
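
A minimal sketch of this normalization, assuming the df_expanded table and the sentiment_label column from the earlier sketches:

# Share of negative / neutral / positive comments per country, as percentages,
# so countries with different comment volumes can be compared directly.
sentiment_counts = (
    df_expanded.groupby(['country', 'sentiment_label'])
               .size()
               .unstack(fill_value=0)
)
sentiment_pct = sentiment_counts.div(sentiment_counts.sum(axis=1), axis=0) * 100
print(sentiment_pct.head())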

For a deeper exploration, you can interact with the sentiment maps below. These maps provide a detailed visual representation of how sentiment varies globally, highlighting the percentages of positive, negative, and neutral sentiment across different countries.

[Click here to view the interactive map]
[Click here to view the interactive map]
[Click here to view the interactive map]

Additionally, by calculating the difference between positive and negative sentiments, the sentiment balance can be visualized on a single map.
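
Building on the percentage table sketched above (and assuming its columns are named 'positive' and 'negative'), the balance map could be produced roughly as follows:

import plotly.express as px

# Sentiment balance: positive share minus negative share, per country
sentiment_pct['balance'] = sentiment_pct['positive'] - sentiment_pct['negative']

fig = px.choropleth(
    sentiment_pct.reset_index(),
    locations='country',
    locationmode='country names',
    color='balance',
    color_continuous_scale='RdBu',
    title='Sentiment balance (positive % minus negative %)',
)
fig.show()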

[Click here to view the interactive map]

To complement the analysis, the focus shifts to countries with the highest ratios of positive, negative, and neutral sentiment values. To ensure accuracy and relevance, only countries with a substantial number of comments are considered.

Interestingly, countries with the most positive sentiment often share characteristics like a high quality of life, natural beauty, political stability, a discreet presence in international politics, or popularity as tourist destinations. These nations tend to enjoy a more favorable global perception.

On the other hand, countries with the most negative sentiment face significant challenges, such as the Russia-Ukraine and Israel-Palestine conflicts, which have intensified political and economic instability. These, along with internal conflicts, restrictions on civil liberties, humanitarian crises, or recent political developments, can contribute to a widespread negative perception.

Meanwhile, countries with the most neutral sentiment tend to have a balanced distribution, with scores clustered around the middle. This is because most countries evoke moderate, mixed opinions rather than extreme ones, making neutral sentiments more common, while positive and negative scores are less frequent and more spread out.

Conclusion

The results of this study, grounded in Geopolitical Analysis, partially validate the accuracy of the data generated by the RoBERTa model for Sentiment Analysis. These results emphasize the model’s effectiveness in identifying the sentiment tone embedded within unstructured data, providing valuable insights into a country’s global reputation and overall sentiment.

By adopting an interdisciplinary approach, the project also integrates Sentiment Analysis with Geopolitical Analysis, opening the door to more profound studies across disciplines such as International Relations.

Another key aspect of this project was the creation of a database containing 444,059 up-to-date, unique Reddit comments, each labeled with corresponding countries and sentiment values by the RoBERTa model.

I hope this project sparks your interest or proves helpful in some way! If you have any questions or thoughts, feel free to leave a comment, or connect with me on LinkedIn.

Sources

[1] Hugging Face, Transformers — Token classification
[2] Hugging Face, NLP Course — Token classification
[3] Hugging Face, Jean-Baptiste/roberta-large-ner-english
[4] Hugging Face, cardiffnlp/twitter-roberta-base-sentiment-latest
[5] PRAW, The Python Reddit API Wrapper (PRAW)
[6] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer and V. Stoyanov, RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019)
