Text Similarity using K-Shingling, Minhashing, and LSH (Locality Sensitive Hashing)

Last Updated on July 26, 2023 by Editorial Team

Author(s): Supriya Ghosh

Originally published on Towards AI.

Natural Language Processing

Text similarity plays an important role in Natural Language Processing (NLP), and it has been used extensively across several areas. Applications include information retrieval, text categorization, topic detection, machine translation, text summarization, document clustering, plagiarism detection, and news recommendation, spanning almost every domain.

Sometimes, however, the concepts behind text similarity algorithms are difficult to grasp. This write-up walks through an implementation of text similarity along with an explanation of the required concepts.

Before I start, note that there are several ways and several algorithms to perform this task. I will demonstrate one of them, using K-Shingling, Minhashing, and LSH (Locality Sensitive Hashing).

The dataset considered here consists of text extracts from 3 documents.

We could use any number of documents, each of significant length, but to keep things simple and avoid heavy computation, I am considering only a small chunk from each document.
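If you don't have the original files, a minimal sketch like the one below creates three stand-in documents. The file names and contents here are my own invention, chosen so that shingles such as "the night is" and "night is dark" from the walkthrough below actually occur (these particular texts happen to yield 16 unique 3-shingles, matching the 16-row characteristic matrix discussed later).

# Hypothetical stand-in documents (not the original dataset)
writeLines("The night is dark and full of terrors.", "doc1.txt")
writeLines("The night is long and the day is bright.", "doc2.txt")
writeLines("The night is dark and the stars are hidden.", "doc3.txt")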

Let’s perform the implementation in Steps.

Step 1:

Set your working directory to the folder where the files are placed so that R can read them. Then read all the input files from the working directory using the code below.

# Libraries used
library(dplyr)
library(proxy)
library(stringr)
library(data.table)

# Set the working directory (adjust the path to wherever your files live)
setwd(".")

# Read the original text files
files <- list.files(path = ".", pattern = "*.txt", all.files = FALSE,
                    full.names = FALSE)
( doc <- lapply(files, readLines) )

R Studio Input Display

Step 2:

Preprocess the text: remove punctuation, convert it to lower case, and split the text word by word.

# Preprocess text
documents <- lapply(doc, function(x) {
  text <- gsub("[[:punct:]]", "", x) %>% tolower()
  text <- gsub("\\s+", " ", text) %>% str_trim()
  word <- strsplit(text, " ") %>% unlist()
  return(word)
})

# Print the texts in the files
documents[[1]]
documents[[2]]
documents[[3]]

R Studio Display

Step 3:

Introduce K-Shingling, a technique for representing documents as sets. We will see the importance of K-Shingling further on; for now, let's just get familiar with the steps.

The k-shingles of a document are all the possible consecutive sub-strings of length k found in it.

Let’s illustrate this with an example where k = 3.

Shingling <- function(document, k) {
  shingles <- character( length = length(document) - k + 1 )

  for( i in 1:( length(document) - k + 1 ) ) {
    shingles[i] <- paste( document[ i:(i + k - 1) ], collapse = " " )
  }

  return( unique(shingles) )
}

# "shingle" each document, with k = 3
documents <- lapply(documents, function(x) {
  Shingling(x, k = 3)
})
list( Original = doc[[1]], Shingled = documents[[1]] )

R Studio Display

Hence, with k = 3, the k-shingles of the first document (printed above) consist of sub-strings of length 3.

The first k-shingle is: “the night is”

The second k-shingle is: “night is dark”, and so on.

One important point to note is that a document’s k-shingle set should be unique. For example, if the first document above contains “the night is” more than once, it will still appear only once in that document’s shingle set.

Step 4:

Construct a “characteristic” matrix that visualizes the relationships between the three documents. The characteristic matrix is a Boolean matrix, with:

rows = all the unique shingles found across all documents.

columns = one column per document.

The matrix is filled with 1 in row i and column j if and only if document j contains shingle i; otherwise it is filled with 0.

Let us try to understand this with the below depiction.

# Unique shingle sets across all documents
doc_dict <- unlist(documents) %>% unique()

# "Characteristic" matrix
Char_Mat <- lapply(documents, function(set, dict) {
  as.integer(dict %in% set)
}, dict = doc_dict) %>% data.frame()

# Set the names for both rows and columns
setnames( Char_Mat, paste( "doc", 1:length(documents), sep = "_" ) )
rownames(Char_Mat) <- doc_dict
Char_Mat

R Studio Display

The first row of the above matrix has a 1 in all three columns. This is because all three documents contain the 3-shingle “the night is”.

The second row has the values [1, 0, 1], which means that document 2 does not contain the 3-shingle “night is dark”, while documents 1 and 3 do.

One important point to note here is that these characteristic matrices are almost always sparse. Therefore, we usually represent them only by the positions in which a 1 appears, so as to be more space-efficient.
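As a minimal sketch of that idea (sparse_rep is a name introduced here for illustration), each document's column can be reduced to just the row indices of its 1s:

# Space-efficient representation: keep only the row positions of the 1s
sparse_rep <- lapply(Char_Mat, function(col) which(col == 1))
sparse_rep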

Step 5:

After creating the shingle sets and the characteristic matrix, we now need to measure the similarity between documents.

We will make use of Jaccard Similarity for this purpose.

For example, with two shingle sets set1 and set2, the Jaccard similarity is the size of their intersection divided by the size of their union:

Jaccard(set1, set2) = |set1 ∩ set2| / |set1 ∪ set2|
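As a quick sanity check of the formula on two toy shingle sets of my own choosing:

# Jaccard similarity on two small example shingle sets
set1 <- c("the night is", "night is dark", "is dark and")
set2 <- c("the night is", "night is long")
length(intersect(set1, set2)) / length(union(set1, set2))
# 1 shared shingle out of 4 distinct shingles -> 0.25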

With this, we will calculate the pairwise Jaccard similarities for all three documents. The “dist” function in R quickly computes and returns the distance/similarity matrix.

# How similar are two given documents? Jaccard similarity
JaccardSimilarity <- function(x, y) {
  non_zero <- which(x | y)
  set_intersect <- sum( x[non_zero] & y[non_zero] )
  set_union <- length(non_zero)
  return(set_intersect / set_union)
}

# Create a new entry in the proxy package's registry
pr_DB$set_entry( FUN = JaccardSimilarity, names = c("JaccardSimilarity") )

# Jaccard similarity distance matrix
d1 <- dist( t(Char_Mat), method = "JaccardSimilarity" )

# Delete the new entry
pr_DB$delete_entry("JaccardSimilarity")
d1
doc

R Studio Display

The similarity matrix d1 tells us that documents 1 and 3 are the most similar among the three.

For small datasets, the above method works perfectly fine. But imagine we have a large number of documents to compare, each with significantly greater length, instead of just three. The above method might not scale well: the sparse matrix over the set of unique shingles across all documents would be fairly large, making computation of the Jaccard similarity between documents a heavy burden, with performance issues building up.

In such situations, we employ a different technique that helps us save computation and compare document similarities efficiently at scale. This technique is called Minhashing.

Step 6 :

Minhashing involves compressing the large sets of unique shingles into a much smaller representation called “signatures”.

We then use these signatures to measure the similarity between documents.

Although these signatures cannot give the exact similarity measure, the estimates are pretty close: under a random permutation of the rows, the probability that two documents receive the same minhash value equals their Jaccard similarity.

The larger the number of signatures chosen, the more accurate the estimate.

For illustration let us consider an example.

Suppose we take the above example and minhash the characteristic matrix of 16 rows into 4 signatures. The first step is to generate 4 columns of randomly permuted rows that are independent of each other. We can verify for ourselves that a simple hash function does in fact generate randomly permuted rows. To generate them, we use the formula:

h(x) = (ax + b) mod c

Where:

x is the row number of your original characteristic matrix.

a and b are random numbers less than or equal to the maximum value of x, and each must be unique within a signature.

For example, for signature 1, if 5 is generated to serve as the a coefficient, that value must not serve as the a coefficient again within signature 1, though it can still be used as the b coefficient there. This restriction resets for each signature: 5 can serve as the a or b coefficient for signature 2, but again at most once in each role, and so on.

c is a prime number slightly larger than the total number of shingles.

For the above example, since the total row count is 16, the prime number 17 will do fine.
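As a small check of my own (not part of the original walkthrough, with arbitrarily chosen coefficients), you can confirm that h(x) = (ax + b) mod c produces 16 distinct values when c = 17 is prime:

# Sanity check: the hash function yields no duplicate values for x = 1..16
a <- 5; b <- 3; prime <- 17
h <- (a * (1:16) + b) %% prime
h
anyDuplicated(h)  # 0 means all 16 values are distinct, i.e., a permutation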

Now let’s generate this with R code.

# Number of hash functions (signatures)
signature_num <- 4

# Prime number
prime <- 17

# Generate the unique coefficients
set.seed(12345)
coeff_a <- sample( nrow(Char_Mat), signature_num )
coeff_b <- sample( nrow(Char_Mat), signature_num )

# See if the hash function does generate permutations
permute <- lapply(1:signature_num, function(s) {
  hash <- numeric( length = nrow(Char_Mat) )
  for( i in 1:nrow(Char_Mat) ) {
    hash[i] <- ( coeff_a[s] * i + coeff_b[s] ) %% prime
  }
  return(hash)
})

# Convert to data frame
permute_df <- structure( permute, names = paste0( "hash_", 1:length(permute) ) ) %>%
  data.frame()
permute_df

R Studio Display

From the above output, we can see that 4 columns of randomly permuted rows have been generated. There are 0s as well, but they will not affect our computation, as we will see later.

Step 7:

Using the randomly permuted rows, we now compute the signatures. The signature value of a column (document) under a given hash function is the hash value at the first row, in the permuted order, in which that column has a 1.

To illustrate the calculation, we will bind the randomly permuted rows (generated by the hash functions) to the original characteristic matrix and change the matrix's row names to row numbers.

# Use the first two signatures as an example:
# bind them to the original characteristic matrix
Char_Mat1 <- cbind( Char_Mat, permute_df[1:2] )
rownames(Char_Mat1) <- 1:nrow(Char_Mat1)
Char_Mat1

R Studio Display

Now considering the matrix generated above, we will start with the first hash function (hash_1).

According to our first hash function’s permuted row order, the first row is row 14 (why row 14? because 0 is the smallest value in our randomly generated permutation, and it occurs at row 14, making row 14 first). Then we look at row 14’s entries for all three documents and ask, “which document’s entry at row 14 is a 1?”. Document 3’s (doc_3) row 14 is a 1, so the signature value for document 3 under our first hash function is 0. But documents 1 and 2’s entries at row 14 are both 0, so we have to keep looking.

According to our first hash function’s permuted row order, the second row is row 8 (1 is the second smallest value in our permutation, and it occurs at row 8). Applying the same idea, we find that document 2’s (doc_2) entry at row 8 is a 1, so the signature value for document 2 under our first hash function is 1. Note that we are already done with document 3; we do not need to check whether it contains a 1 anymore. But we are still not done: document 1’s entry at row 8 is still 0, so we have to look further.

Again checking the permuted row order for our first hash function, the third row is row 2, and document 1’s entry at row 2 is 1. We are now done calculating the signature values for all three columns under our first hash function, which are [2, 1, 0].

We can then apply the same notion to calculate the signature value for each column (document) using the second hash function, and so on for the third, fourth, etc. A quick look at the second hash function shows that the first row according to its permuted order is row 8, where doc_2 has a 1; the second row is row 14, where doc_3 has a 1; and the third row is row 3, where doc_1 has a 1. Hence, the signature values generated by our second hash function for the three documents are [2, 0, 1].

We store these calculated signature values in a signature matrix as we go, which will later replace the original characteristic matrix. The following code calculates the signature values for all 3 columns using all 4 hash functions and prints out the signature matrix.

# Obtain the non-zero rows' indices for all columns
non_zero_rows <- lapply(1:ncol(Char_Mat), function(j) {
  return( which( Char_Mat[, j] != 0 ) )
})

# Initialize the signature matrix
SM <- matrix( data = NA, nrow = signature_num, ncol = ncol(Char_Mat) )

# For each column (document)
for( i in 1:ncol(Char_Mat) ) {
  # for each hash function (signature), take the minimum permuted value
  # over the rows where the document has a 1
  for( s in 1:signature_num ) {
    SM[ s, i ] <- min( permute_df[, s][ non_zero_rows[[i]] ] )
  }
}

# Set names for clarity
colnames(SM) <- paste( "doc", 1:length(doc), sep = "_" )
rownames(SM) <- paste( "minhash", 1:signature_num, sep = "_" )
SM

R Studio Display

Our signature matrix has the same number of columns as the original characteristic matrix, but only n rows, where n is the number of hash functions we chose to generate (in this case, 4).

Let me elaborate on how to interpret the above result.

For example, for documents 1 and 3 (columns 1 and 3), the estimated similarity would be 0.25 because they agree in only 1 row out of a total of 4 (both columns’ row 4 is 1).

Let’s calculate the same explanation through code.

# Signature similarity: fraction of minhash rows that agree
SigSimilarity <- function(x, y) mean( x == y )

# Same trick to calculate the pairwise similarity
pr_DB$set_entry( FUN = SigSimilarity, names = c("SigSimilarity") )
d2 <- dist( t(SM), method = "SigSimilarity" )
pr_DB$delete_entry("SigSimilarity")

list(SigSimilarity = d2, JaccardSimilarity = d1)

R Studio Display

Looking at the difference between the original Jaccard similarity and the new similarity obtained from the signatures, we might doubt whether this is an accurate estimate. But as mentioned earlier, Minhashing's purpose is to provide a fast approximation to the true Jaccard similarity; the estimate can be close but not 100% accurate, hence the difference. Also, the example considered here is far too small for the law of large numbers to show its effect. More accurate, closer results are expected with large datasets.
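As an illustration of that last point, here is a sketch of my own (reusing Char_Mat, prime, and non_zero_rows from above, with new hypothetical variable names) that re-runs the minhash idea with 100 hash functions instead of 4; coefficients are drawn with replacement for simplicity, since our matrix has only 16 rows. The agreement fraction should drift closer to the true Jaccard similarity.

# Estimate doc-1/doc-3 similarity with 100 hash functions instead of 4
sig_big <- 100
a_big <- sample( nrow(Char_Mat), sig_big, replace = TRUE )
b_big <- sample( nrow(Char_Mat), sig_big, replace = TRUE )

# each column of perm_big is one hash function's permuted row order
perm_big <- sapply(1:sig_big, function(s) {
  ( a_big[s] * ( 1:nrow(Char_Mat) ) + b_big[s] ) %% prime
})

# minhash signature: minimum permuted value over each document's non-zero rows
SM_big <- sapply(1:ncol(Char_Mat), function(i) {
  apply( perm_big[ non_zero_rows[[i]], , drop = FALSE ], 2, min )
})

# estimated similarity between doc 1 and doc 3, now with 100 signatures
mean( SM_big[, 1] == SM_big[, 3] )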

If the primary requirement is to compute the similarity of every possible pair, for text clustering or the like, then LSH (Locality Sensitive Hashing) does not serve the purpose. But if the requirement is to find only the pairs that are most likely to be similar, then Locality Sensitive Hashing can be employed, as discussed below.

Locality Sensitive Hashing

While the information necessary to compute the similarity between documents has been compressed from the original sparse characteristic matrix into a much smaller signature matrix, the underlying need to perform pairwise comparisons on all the documents still exists.

The idea of locality-sensitive hashing (LSH) is that, given the signature matrix of size n (row count), we partition it into b bands, each band having r rows. This corresponds to the simple formula n = br, so when doing the partition we must choose a b that divides n evenly. Using the signature matrix above and choosing the number of bands to be 2, the example becomes:

# Number of bands and rows per band
bands <- 2
rows <- nrow(SM) / bands

data.frame(SM) %>%
  mutate( band = rep( 1:bands, each = rows ) ) %>%
  select( band, everything() )

R Studio Display

What locality-sensitive hashing tells us is: if the signature values of two documents agree in all the rows of at least one band, then these two documents are likely to be similar and should be compared (they are listed as a candidate pair). This small set of documents might be a bad example, since it can happen that none of them are chosen as a candidate pair. For instance, if the signature values of document 2 for band 1 were [0, 1] instead of the current [1, 0], then documents 2 and 3 would become a candidate pair, as both of their band-1 rows would take the same values [0, 1].
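A standard property of this banding scheme, not derived in this article but useful for intuition: two documents with true Jaccard similarity s become a candidate pair with probability 1 - (1 - s^r)^b. A minimal sketch (candidate_prob is a name introduced here for illustration):

# Probability that two documents agree on all r rows of at least one of b bands
candidate_prob <- function(s, bands, rows) 1 - (1 - s^rows)^bands
# with our tiny setup (b = 2 bands, r = 2 rows per band):
candidate_prob(c(0.2, 0.5, 0.8), bands = 2, rows = 2)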

Note: your computations while executing the above R code may vary from mine, as the signatures are randomly generated.

Final Thoughts

The above technique using Jaccard similarity, Minhashing, and LSH is one of many techniques for computing document similarity, and plenty of others exist. Text similarity is an active research field, and techniques are continuously evolving. Which method to use depends very much on the use case and on what we want to achieve.

Thanks for reading !!!

You can follow me on Medium, as well as on:

LinkedIn: Supriya Ghosh

Twitter: @isupriyaghosh

Published via Towards AI