

Unleashing the Power of Feature Stores: How They Can Supercharge Your MLOps

Last Updated on July 17, 2023 by Editorial Team

Author(s): Natalia Koupanou

Originally published on Towards AI.

Discover the Benefits of Feature Stores for Streamlined and Efficient MLOps

Edited Photo by Joshua Sortino

If you’re interested in Machine Learning Operations (MLOps), you’ve probably heard about feature stores. But what exactly are they, and why are they so important? Having worked with feature stores for several years and helped my teams successfully adopt them, I had the honor of sharing my experiences and insights with fellow industry professionals at the MLOps Summit in London in November 2022, and I would love to share more in this blog post. In this post, we’ll introduce feature stores, explain why they’re needed, highlight their benefits, and discuss what to consider before buying or building one.

So, what is a feature store? Simply put, it’s a centralized repository of preprocessed data features that are used to train machine learning models. Feature stores allow data scientists and machine learning engineers to easily access and manage these features, rather than having to repeatedly preprocess and re-engineer the data for each model. But why do we need feature stores? In the past, data scientists would manually engineer features for each machine learning model, resulting in redundant work and wasted time. With the rise of big data and the increasing complexity of machine learning models, this process has become even more challenging. Feature stores streamline this process and enable more efficient and scalable machine learning.
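To make the idea concrete, here is a minimal sketch of the concept in plain Python. The class and method names are purely illustrative (this is not any specific feature store's API): features are written once, keyed by entity and feature name, and any model can fetch them without re-running the feature engineering code.

```python
# Hypothetical minimal feature store: a keyed repository of
# precomputed feature values that models fetch instead of
# re-deriving the data themselves.
class MiniFeatureStore:
    def __init__(self):
        self._data = {}  # (entity_id, feature_name) -> value

    def put(self, entity_id, feature_name, value):
        self._data[(entity_id, feature_name)] = value

    def get(self, entity_id, feature_names):
        # Return a feature vector for one entity.
        return [self._data.get((entity_id, f)) for f in feature_names]

store = MiniFeatureStore()
store.put("user_42", "basket_item_count", 3)
store.put("user_42", "days_since_signup", 120)

vector = store.get("user_42", ["basket_item_count", "days_since_signup"])
print(vector)  # -> [3, 120]
```

Real feature stores add persistence, versioning, and serving layers on top of this idea, but the core contract is the same: compute a feature once, then look it up by entity and name everywhere.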

Maximising ML Impact with MLOps

MLOps is the integration of Machine Learning (ML) into a product or business process. The key question is: why invest in and improve MLOps? The answer lies in operationalizing ML to unlock its value. MLOps is becoming an essential aspect of businesses and digital products, offering benefits like personalization, enhanced efficiency, real-time insights, and better customer experience. The more we mature our operationalization of ML, the more value it brings to our business. For example, imagine we want to provide recommendations to website visitors. One-off recommendations based on static data samples could become irrelevant over time. We could update recommendations more frequently using batch processing. Still, if a visitor’s interactions within a session make weekly recommendations irrelevant, we’d need to find relevant content on the fly. Our capabilities increase as our operational ML infrastructure and expertise grow, enabling us to deliver more value as a data science team.

Journey for unlocking ML value with ML maturity using product recommendation as an example — Diagram inspired by Feature Store Tecton.

However, the operationalization of ML can be challenging, as indicated by a recent McKinsey Global Survey that showed only 36% of participants had their ML project deployed beyond the pilot stage. One of the main reasons for this low success rate is the difficulty of managing the ML process, including data leakage and training-serving skew, duplicated efforts in feature engineering, and serving models quickly in production. A feature store is a managed, unified solution that can help alleviate these challenges by sharing and serving features at scale across the organization.

Supercharging MLOps with Feature Stores

Challenges solved by Feature Stores

Point-in-time correctness

Data leakage is a common mistake found when reviewing data science code. This happens when we fail to account for changes in feature values over time, making it difficult to accurately predict outcomes. This is where a feature store comes in handy.

A feature store allows us to ensure point-in-time correctness by providing the latest stored value of a feature at or before a given timestamp. For instance, let’s say we want to train a model for predicting whether a user will make a purchase on a website within a session. In the illustration below, we show two labels for two separate sessions of the same user, and we can see how features, such as the number of items the user has in their basket, have different values at different points in time. By requesting data from a feature store based on entity identifier(s) (e.g., user ID, session ID), the feature names, and the timestamps (e.g., T1, T2), we can guarantee that no information from the future is used to create a prediction during training. This not only eliminates the risk of data leakage, but also provides a reliable point-in-time lookup for more accurate predictions.

Point-in-time lookup for retrieving feature vector from a features store using training data for predicting purchase conversion of the user as an example — Diagram inspired by Feature Store Vertex AI.
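The point-in-time lookup described above can be sketched as follows. The data and helper function are illustrative, not any product's API: given a timestamped history of a feature, we return the latest value at or before the requested timestamp, so training can never peek at future values.

```python
from bisect import bisect_right

# Timestamped history of one feature for one entity:
# (timestamp, value) pairs, sorted by timestamp.
basket_items_history = [
    (1, 0),   # T1: session starts, empty basket
    (3, 2),   # two items added
    (7, 5),   # more items added later
]

def point_in_time_value(history, ts):
    """Latest stored value at or before `ts`; None if nothing exists yet."""
    times = [t for t, _ in history]
    i = bisect_right(times, ts)
    return history[i - 1][1] if i > 0 else None

# A training label at T=4 must only see data available at T=4:
print(point_in_time_value(basket_items_history, 4))  # -> 2
# A lookup before any record exists returns nothing (no leakage):
print(point_in_time_value(basket_items_history, 0))  # -> None
```

Real feature stores run this lookup as a point-in-time join across many entities and features at once, but the leakage guarantee comes from exactly this rule: never return a value written after the requested timestamp.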

Consistency across development & production environments

When developing and deploying machine learning models, it’s crucial to avoid training-serving skew. This occurs when different source codes are used for generating features during development and production, leading to discrepancies between training and serving data. This can result in incorrect model behavior and make backtesting a lengthy and frustrating process. So how can we reduce the risk of training-serving skew?

By using a feature store, we can ensure consistency across development and production environments. The same code, data sources, and pipelines are used for both training and serving, making backtesting much easier. With a feature store, batch and real-time data can be easily melded, and any changes in data can be quickly incorporated using a streaming flow. Hence, by fetching all features from the same feature store during both development and production, we can avoid surprises during backtesting and reduce the likelihood of training-serving skew in our data.
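One simple way to picture this: a single feature-engineering function feeds both the offline training path and the online serving path, so the two cannot drift apart. The function and field names below are hypothetical, chosen only to match the purchase-prediction example.

```python
# A single source of truth for the feature logic, shared by both paths.
def compute_features(raw_event):
    return {
        "basket_item_count": len(raw_event["basket"]),
        "is_returning_user": int(raw_event["visits"] > 1),
    }

def build_training_row(raw_event):
    # Offline/training path: builds rows for the training set.
    return compute_features(raw_event)

def build_serving_vector(raw_event):
    # Online/serving path: reuses the exact same logic at inference time.
    return compute_features(raw_event)

event = {"basket": ["shoes", "hat"], "visits": 3}
# Because both paths call the same function, skew is impossible here:
assert build_training_row(event) == build_serving_vector(event)
print(build_training_row(event))  # -> {'basket_item_count': 2, 'is_returning_user': 1}
```

A feature store generalizes this pattern: the transformation is registered once, and both the training pipeline and the serving endpoint read its output, rather than each maintaining its own copy of the logic.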

Re-usability of features across various applications

One of the challenges that data science teams often face is duplicated effort in feature engineering. This occurs when there is no centralized place to manage and fetch features, which leaves teams working in silos and makes reusing features far from straightforward. Additionally, there is an overhead cost associated with using many features. Using a feature store can help alleviate these challenges by allowing teams to easily share and reuse features across different applications.

With a central repository for managing and organizing features, duplicated effort can be avoided, making it faster to create, iterate, and deploy. By using feature stores across several projects, we can reduce the cost of ML applications, as well as break down silos within our organization. Furthermore, feature stores enable data catalog and lineage practices and the addition of metadata for each feature, making features reusable across different teams. This leads to faster deployment times, consistency in architecture and infrastructure, and a head start in the development of MVP projects or POCs. With a feature store, sharing is caring (and it also saves time and resources)!
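A feature registry with per-feature metadata, as described above, might look like this minimal sketch (the metadata fields and team names are hypothetical):

```python
# Hypothetical registry entries: metadata makes features discoverable
# and reusable across teams, instead of each team re-deriving them.
registry = {
    "basket_item_count": {
        "owner": "growth-team",
        "description": "Number of items in the user's basket this session",
        "source": "clickstream",
        "dtype": "int",
    },
    "days_since_signup": {
        "owner": "crm-team",
        "description": "Days elapsed since the user created an account",
        "source": "users_table",
        "dtype": "int",
    },
}

def find_features(source):
    """Discover reusable features by upstream data source."""
    return [name for name, meta in registry.items() if meta["source"] == source]

print(find_features("clickstream"))  # -> ['basket_item_count']
```

The point of the metadata is discoverability: a second team building a model on clickstream data can search the registry and reuse `basket_item_count` instead of re-implementing it.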

Serving models fast in production

As more and more applications require real-time capabilities, it’s becoming increasingly challenging to extract data from multiple sources and serve inferences quickly. Let’s face it, nobody likes waiting 20 seconds for a response! Fortunately, feature stores come with two storage options — online and offline — which are priced differently. The offline storage is great for training and batch predictions, while the online storage is essential for real-time serving. By using a feature store, we can retrieve features in milliseconds and scale our computational resources as needed. We can even write the logic to generate features ourselves but let the feature store handle the task of serving them in real time. With a feature store, we can say goodbye to slow response times and hello to happy users!

High-level architecture of a feature store
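The online/offline split can be pictured as two views over the same features: a bulk offline table holding full history for training, and a low-latency key-value store holding only the latest value per entity for real-time serving. This is a simplified sketch, not a specific product's architecture.

```python
# Offline store: full history of feature values, good for building
# training sets and running batch predictions.
offline_store = [
    {"user": "u1", "ts": 1, "basket_item_count": 0},
    {"user": "u1", "ts": 3, "basket_item_count": 2},
    {"user": "u2", "ts": 2, "basket_item_count": 1},
]

# Online store: only the latest value per entity, materialized from
# the offline history, so serving is a single key lookup.
online_store = {}
for row in sorted(offline_store, key=lambda r: r["ts"]):
    online_store[row["user"]] = {"basket_item_count": row["basket_item_count"]}

def serve(user):
    # Real-time path: one key-value read, no scan over history.
    return online_store.get(user)

print(serve("u1"))  # -> {'basket_item_count': 2}
```

In production systems, the materialization step runs continuously (batch or streaming) so the online store stays fresh, while the offline store keeps the full history needed for point-in-time-correct training data.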

Easy and reliable management of features

Managing ML features can be a challenge, especially when it comes to reliability and ease of use. However, using version control for the code behind a feature store helps: we can easily go back to previous versions of data, just like we do with source code on GitHub. Additionally, detecting drift in the data is possible through feature stores, as they can track the distribution of imported feature values, which makes data monitoring much easier. We can also set an expiry date for features, limiting cost and making data retention management a breeze. Furthermore, we can control costs by setting limits on computational resources, such as quotas on the number of online serving nodes or the number of online serving requests per minute. With these capabilities, managing ML features becomes much more manageable and cost-effective.
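Feature expiry, for example, can be modelled as a time-to-live (TTL) on each stored value. The timestamps, TTL window, and field names here are illustrative:

```python
from datetime import datetime, timedelta

TTL = timedelta(days=30)  # hypothetical retention window

features = {
    "u1": {"value": 5, "written_at": datetime(2023, 6, 1)},
    "u2": {"value": 9, "written_at": datetime(2023, 7, 10)},
}

def get_if_fresh(user, now):
    """Return the feature only if it has not expired under the TTL."""
    rec = features.get(user)
    if rec and now - rec["written_at"] <= TTL:
        return rec["value"]
    return None  # expired or missing: treat as absent

now = datetime(2023, 7, 15)
print(get_if_fresh("u1", now))  # -> None (written 44 days ago, expired)
print(get_if_fresh("u2", now))  # -> 9   (5 days old, still fresh)
```

Expired values can then be garbage-collected on the store side, which is what keeps storage costs bounded without manual retention housekeeping.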

To buy or To build?

Whether your team needs a feature store depends on various factors, such as the number and complexity of ML applications in your technology roadmap, budget, team size, and expertise. Despite the many benefits of using a feature store, it’s crucial not to rush into it and instead take an agile approach that gradually proves its value in improving MLOps. For instance, a feature store can provide more significant returns to a large organization with multiple ML applications in production and globally distributed teams than to a startup with a small data team trying to deploy its first ML model. If your organization falls into the former category, then it’s worth considering buying or building a feature store.

To buy or to build?

Buy

When it comes to getting a feature store for your organization, you can either build one yourself or buy one from a company that specializes in providing such solutions. There are several options available, including Tecton, founded by some of the same people who contributed to Uber’s Michelangelo; I have personally used it in the past and have been generally pleased. H2O, Databricks, Google Cloud Platform’s Vertex AI, and Amazon Web Services’ SageMaker also offer feature store solutions. Buying a feature store can be a time- and cost-efficient option, as it is less complex than building one in-house. You can save on the resources that would be needed to build, maintain, and further develop the feature store infrastructure, as the vendor takes care of these tasks and may even provide 24/7 support. You can therefore leave the task of building a feature store to the experts and focus on exciting work that’s in line with your company’s vision!

Build

Companies like Airbnb, Uber, and Spotify had the resources and expertise to successfully build their own feature stores. Building a feature store internally has its own perks. For instance, it gives you complete control over the feature store roadmap and ensures that it aligns with your company’s goals and requirements. There is also no vendor lock-in, which gives you the flexibility to tailor the solution to your specific needs. In addition, the availability of open-source feature stores like Hopsworks, Feast, and Feathr makes it easier to get started without incurring vendor costs. On the downside, building an internal feature store can be time-consuming and resource-intensive, and you might need to have specialized expertise in-house. Ultimately, the decision to build or buy a feature store depends on your company’s specific circumstances and priorities.

Putting it all together

In summary, a feature store can be a valuable solution for companies looking to address challenges in operationalizing ML and unlock the full potential of their data. The benefits of a feature store include point-in-time correctness, consistency across environments, feature reusability, fast model serving, and easy feature management. When deciding whether to use a feature store, consider factors such as the size of your organization and the number of ML applications in production. You can either build a feature store in-house for complete control and flexibility or buy one from a specialized vendor for cost-efficiency and convenience. Lastly, with the potential of GPT to revolutionize feature stores, it’s an interesting time for the field, but it’s crucial to weigh all factors before incorporating it into your MLOps ecosystem.

Thanks for reading! If you’d like to stay updated with my latest articles, provide feedback or discuss further ML and AI, you can follow me on Medium or connect with me on LinkedIn.


Published via Towards AI

