
Mastering Recommendation Engines with Neural Collaborative Filtering

Last Updated on December 11, 2023 by Editorial Team

Author(s): Priyansh Soni

Originally published on Towards AI.

This article is your go-to manual for crafting a recommendation engine with Neural Collaborative Filtering (NCF). Starting with a swift introduction to recommendation engines, we’ll dance through their different types, focusing primarily on model-based collaborative filtering, leading all the way to the working of neural recommendation engines. And just to sweeten the deal, we’ll wrap it all up with a juicy real-world example.
Kindly buckle up, ‘coz this one’s a little cool.

Disclaimer: This article assumes that the reader is familiar with recommendation engines and collaborative filtering.

OUTLINE —

  1. What are Recommendation Engines and their types?
  2. Model-Based Collaborative Filtering and NCF
  3. Working algorithm of pure NCF models
  4. Making a Recommendation Engine with NCF
  5. Conclusion

1. What are Recommendation Engines and their types?

Recommendation engines, also referred to as recommender systems, are nothing but engines or algorithms that serve us with content we are most likely to watch, buy, or consume. These systems hold significant importance across diverse online platforms, spanning e-commerce websites, streaming services, social media, and content platforms. Their primary goal is to analyze user preferences and behaviors to deliver tailored recommendations, ultimately enhancing user engagement and satisfaction.
The most common example of this is online streaming services like Netflix, Amazon Prime, and such, where we are often presented with content recommendations on the home page that say “You might also like”.

Types of Recommendation Engines:

  • Content-Based Filtering
  • Collaborative Filtering
  • Hybrid Models

Let’s get a brief overview of each —

Content-based filtering analyzes the characteristics and features of items a user has interacted with or searched for in the past. By focusing on item attributes, these systems recommend items with similar properties. This approach is especially useful when the user base is small and there are many products to serve — the cold-start problem.

Collaborative Filtering (CF) recommends items by examining the preferences and behaviors of a group of users. This can be user-based and item-based. User-based CF identifies similar users and suggests items liked by those similar users, whereas Item-based CF suggests items similar to ones the user has previously enjoyed.

Hybrid recommendation engines follow ensemble techniques, often blending aspects of both content-based and collaborative filtering. By integrating various approaches, these models aim to overcome individual limitations, providing more accurate and diverse recommendations.

Now, Collaborative Filtering recommender engines can be categorized further as Memory-based and Model-based.

The major difference between these two methods is the way they determine the ratings given by users to items.

  1. Memory-based CF takes a traditional approach: it measures user/item similarity with correlation methods (e.g., Pearson’s) and then takes a similarity-weighted average of the ratings to estimate the rating a user would give an item.
  2. Model-based CF uses machine learning or statistical models to learn patterns and relationships in the data, which is then used to determine user ratings for items.
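To make the memory-based approach concrete, here is a minimal sketch of user-based CF with Pearson correlation and a similarity-weighted average. The 4×4 rating matrix is a made-up toy example, and as a simplification the unrated (0) entries are included in the correlation:

```python
import numpy as np

# Toy user-item rating matrix: rows = users, cols = items, 0 = unrated.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 5.0, 4.0],
])

def predict_user_based(R, user, item):
    """Predict R[user, item] as a similarity-weighted average of the
    ratings other users gave the same item."""
    sims, ratings = [], []
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        # Pearson correlation between the two users' full rating vectors
        # (simplification: zeros for unrated items are included).
        sim = np.corrcoef(R[user], R[other])[0, 1]
        sims.append(sim)
        ratings.append(R[other, item])
    if not sims:
        return 0.0
    sims, ratings = np.array(sims), np.array(ratings)
    return float(np.dot(sims, ratings) / np.abs(sims).sum())

pred = predict_user_based(R, user=1, item=1)
```

Users 0 and 2 rated item 1 (ratings 3 and 1), so the prediction lands between those two values, pulled toward whichever user is more similar to user 1.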

Regardless of how interesting both of these methods are, we are going to dive deeper into Model-based Collaborative Filtering for this article to justify its title!

2. Model-Based Collaborative Filtering and NCF

Model-based CF, in brief, involves creating a model from the user-item interactions to make predictions. It uses machine learning or statistical models to learn patterns and relationships in the data. Let’s get into the details of this.

Model-based CF tends to create user-feature and item-feature matrices, which are randomly initialized, dotted, and weighted to generate the user-item interaction scores. This can be visualized in the image below.

As seen above, the matrices U and V are the user-feature and item-feature matrices. These are randomly initialized.

  • The user-item interaction matrix (R) is usually sparse since most of the items are not rated by the users.
  • The matrices U and V are dotted to generate the predicted values for the user-item interaction matrix. These will be some random values coming from the dot product of two random matrices (U and V). Let's call this matrix with random values R`.
  • Just like we use optimization algorithms, e.g., gradient descent, in traditional ML to update the values of w and b and minimize the loss between the actual and the predicted output, a similar approach is followed here: the values inside the matrices U and V are updated via gradient descent to minimize the difference between R and R`.

Gradient descent adjusts these random weights toward the optimal values that minimize the difference between the predicted ratings and the actual ratings.

The model learns to assign weights to the latent features in matrices U and V. These weights capture patterns and preferences in the user-item interaction data. The weights are optimized until the model converges. The learned matrices U and V are then used to predict missing entries in the original matrix and make personalized recommendations for users.
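The loop described above can be sketched in a few lines of NumPy. This is a toy illustration, not production code: a hypothetical 4×4 rating matrix, randomly initialized U and V, and full-batch gradient descent on the observed entries only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse user-item rating matrix R; 0 marks a missing rating.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 5, 4],
], dtype=float)
mask = R > 0                          # optimize only over observed entries
n_users, n_items, k = R.shape[0], R.shape[1], 2

# Randomly initialized user-feature (U) and item-feature (V) matrices.
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

lr = 0.01
losses = []
for _ in range(5000):
    R_pred = U @ V.T                  # predicted ratings R`
    err = mask * (R - R_pred)         # error on observed entries only
    losses.append(float((err ** 2).sum()))
    # Gradient descent updates for U and V.
    U += lr * err @ V
    V += lr * err.T @ U
```

After training, `U @ V.T` also fills in the masked (unrated) entries, which is exactly how the learned matrices are used to make recommendations.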

We can also use embedding layers for user and item features, which can be learned during model training via gradient descent.

This method of decomposing the sparse user-item interaction matrix into two lower-rank matrices (U and V) is called Matrix Factorisation. There are several other Model-based CF methods besides plain Matrix Factorisation, such as SVD and neural-network-based approaches.

The gist of all these model-based CF methods is to learn the latent feature patterns of user and item interactions. Some of these methods can only capture linear patterns like Matrix Factorisation, SVD, etc., whereas others can capture non-linearity. And one such method is NCF!

Neural Collaborative Filtering

Neural Collaborative Filtering (NCF) leverages the expressive power of neural networks to model complex and non-linear relationships in user-item interactions. While traditional collaborative filtering methods capture linear patterns, NCF can use activations like ReLU to capture non-linearity.
Incorporating neural networks into the collaborative filtering layer effectively transforms the recommendation problem into a machine learning task by employing neural architectures, typically involving multi-layer perceptrons (MLP) or deep neural networks.

Let’s dive into how NCF models are built.

3. Working algorithm of pure NCF models

While building traditional collaborative filtering models like Matrix Factorisation (MF), we usually start with a sparse user-item interaction matrix. We then create separate user and item latent feature matrices, U and V, which are randomly initialized or built from embeddings. These feature matrices are dotted to generate the matrix R`, which is then used to update the values inside U and V. NCF takes the structure of MF models and combines it with neural networks.

A typical NCF model utilizes embeddings. The architecture can be broken down into three layers —

  1. Matrix Factorisation (MF) layer — takes the dot product of the user and item embeddings to produce R`. The matrix R` captures linear interactions between users and items.
  2. Neural Network (NN) layer — concatenates the user and item embeddings and passes them through an MLP. The MLP is used to capture non-linear interactions between users and items.
  3. NCF output layer — combines the outputs of the MF and NN layers to generate the final prediction. The output R` from the MF layer is concatenated with the output of the NN layer, and this concatenated vector is passed through a Dense layer, which generates the final output of the model.
NCF Model Architecture

The sole purpose of incorporating an MLP for training is to capture the non-linear interactions between the latent features of users and items.

The model can be tweaked with additions like batch normalization, dropout layers, optimizers like Adam, etc.

Let’s look at a real-world problem statement to get a little hands-on.

4. Making a Recommendation Engine with NCF

4.1 Problem Statement

We will take the usual movie recommender problem but approach it using NCF instead of traditional CF models. Related datasets and problem statements can be found on websites like Kaggle, the UCI Machine Learning Repository, etc.

4.2 Dataset

The dataset, after filtering and preprocessing, looks like this —

We have the userId, movieId, and ratings given by users to movies. Collaborative filtering methods require only user and item interactions, since they learn the patterns in these interactions irrespective of other user and item features. Therefore, all we need for a CF model, or a model derived from CF (like NCF), is the user and item interactions and nothing else.

4.3 User-Movie Interaction Matrix

The user-movie interaction matrix denotes the ratings given by users to movies. This is a sparse matrix since users have not seen all the movies.
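As a sketch (using a hypothetical handful of ratings), the interaction matrix can be built from the ratings table with a pandas pivot:

```python
import pandas as pd

# Hypothetical filtered ratings frame with userId, movieId, rating columns.
ratings = pd.DataFrame({
    "userId":  [1, 1, 2, 2, 3],
    "movieId": [10, 20, 10, 30, 20],
    "rating":  [4.0, 5.0, 3.0, 2.0, 4.5],
})

# User-movie interaction matrix: one row per user, one column per movie.
# Movies a user has not rated stay NaN — that is the sparsity.
interaction = ratings.pivot_table(index="userId", columns="movieId", values="rating")
```

With 3 users and 3 movies but only 5 ratings, 4 of the 9 cells are NaN; on a real dataset the matrix is overwhelmingly empty.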

4.4 Getting Data Ready for Modelling

As a neural network takes only numeric input, we map userId and movieId to contiguous integer indices. This also helps when userId and movieId come in a categorical or other non-integer format.
Next, we split the data into a random 80–20 split and generated the training and validation sets.
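A minimal sketch of this preprocessing, assuming a hypothetical ratings frame in which the raw ids are not contiguous integers:

```python
import numpy as np
import pandas as pd

# Hypothetical ratings frame; ids need not be contiguous or even numeric.
ratings = pd.DataFrame({
    "userId":  [10, 10, 42, 42, 77, 77],
    "movieId": ["m1", "m2", "m1", "m3", "m2", "m3"],
    "rating":  [4.0, 5.0, 3.0, 2.0, 4.5, 1.0],
})

# Map raw ids to contiguous 0-based indices for the embedding layers.
user2idx = {u: i for i, u in enumerate(ratings["userId"].unique())}
movie2idx = {m: i for i, m in enumerate(ratings["movieId"].unique())}
ratings["user"] = ratings["userId"].map(user2idx)
ratings["movie"] = ratings["movieId"].map(movie2idx)

# Random 80-20 train/validation split.
shuffled = ratings.sample(frac=1.0, random_state=42).reset_index(drop=True)
cut = int(0.8 * len(shuffled))
train, val = shuffled.iloc[:cut], shuffled.iloc[cut:]
```

The index dictionaries are kept around: they are needed again at prediction time to translate raw ids into the indices the model understands.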

4.5 Model Architecture

We’ll go through the model architecture step by step —

  1. Define hyperparameters like epochs, batch size, learning rate, and embedding size.
  2. Define the input layers for the model using Keras layers.Input.
  3. Define the MF layer with user and movie embeddings. Flatten the embeddings and take the dot product to generate a similarity score (traditional MF architecture).
  4. Define the NN layer with user and movie embeddings, flatten and concatenate the embeddings, and pass them through a Multi-Layer Perceptron (MLP) to generate the output. Tweak additions like the number of neurons, batch normalization, and dropout layers.
  5. Define the NCF layer by concatenating the output from the MF and NN layers. Pass the concatenated vector through a Dense layer with a single neuron to generate the output for the entire model.
  6. Build and compile the model with the MSE loss and an optimizer of your choice.
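The six steps above can be sketched in Keras roughly as follows. The user/movie counts, layer widths, and dropout rate here are illustrative assumptions, not values from the article:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

n_users, n_movies = 100, 200        # hypothetical vocabulary sizes
embedding_size = 8

# 2. Input layers: one integer index per user and per movie.
user_in = layers.Input(shape=(1,), dtype="int32", name="user")
movie_in = layers.Input(shape=(1,), dtype="int32", name="movie")

# 3. MF branch: embed, flatten, dot product -> linear interaction score.
mf_user = layers.Flatten()(layers.Embedding(n_users, embedding_size)(user_in))
mf_movie = layers.Flatten()(layers.Embedding(n_movies, embedding_size)(movie_in))
mf_out = layers.Dot(axes=1)([mf_user, mf_movie])

# 4. NN branch: separate embeddings, concatenated and passed through an MLP
#    to capture non-linear user-movie interactions.
nn_user = layers.Flatten()(layers.Embedding(n_users, embedding_size)(user_in))
nn_movie = layers.Flatten()(layers.Embedding(n_movies, embedding_size)(movie_in))
x = layers.Concatenate()([nn_user, nn_movie])
x = layers.Dense(32, activation="relu")(x)
x = layers.Dropout(0.2)(x)
nn_out = layers.Dense(16, activation="relu")(x)

# 5. NCF head: concatenate both branches, single-neuron Dense output.
ncf = layers.Concatenate()([mf_out, nn_out])
rating_out = layers.Dense(1)(ncf)

model = Model(inputs=[user_in, movie_in], outputs=rating_out)
# 6. Compile with MSE loss for rating regression.
model.compile(optimizer="adam", loss="mse")

# One untrained forward pass, just to show the expected input/output shapes.
preds = model.predict([np.array([[0]]), np.array([[5]])], verbose=0)
```

Note that the MF and NN branches deliberately use separate embedding tables, so each branch can learn latent features suited to its own (linear vs. non-linear) interaction structure.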

4.6 Model Training and Evaluation

Train the model for the desired number of epochs and batch size, and try multiple combinations of the two to get the best results.

4.7 Make Predictions

After successful model training and evaluation, make predictions on the entire dataset to get the ratings for unrated movies.
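A sketch of this last step, using the learned factor matrices from the matrix-factorization view. Here U and V are random placeholders standing in for trained parameters, just to make the snippet runnable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume U (user factors) and V (item factors) were learned during training;
# random placeholders are used here so the sketch is self-contained.
U = rng.normal(size=(4, 2))
V = rng.normal(size=(5, 2))
rated = np.zeros((4, 5), dtype=bool)
rated[0, [0, 2]] = True                 # user 0 already rated movies 0 and 2

scores = U @ V.T                        # predicted ratings for every pair
scores[rated] = -np.inf                 # never re-recommend rated movies
top_movie_per_user = scores.argmax(axis=1)
```

Masking already-rated entries before the argmax is the step that turns raw score prediction into an actual recommendation list.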

5. Conclusion

Embarking on the journey of recommendation engines reveals a world where technology understands us better than we know ourselves. Neural Collaborative Filtering (NCF) isn’t just an algorithm; it’s a wizard crafting tailored journeys for every user. With NCF’s prowess, recommendations aren’t just suggestions — they’re personalized invitations to uncover new passions and experiences. As we bid adieu, let’s embrace the excitement of evolving technology, knowing that every recommendation is a tiny spark igniting joy in our digital adventures.

