
Revolutionizing Web Accessibility with the Hey AI Browser-Native Copilot

Last Updated on January 6, 2025 by Editorial Team

Author(s): Ido Salomon

Originally published on Towards AI.

How on-device AI can reshape the browsing experience

Browser-native copilot (Image generated by author using DALL·E)

Motivation

The internet is the world’s equalizer — or at least, it should be. While it has revolutionized how we learn, work, and connect, it still challenges many users. Consider the millions who find the online world taxing rather than liberating due to motor impairments, vision differences, difficulty navigating traditional interfaces, or challenges discerning reliable information in a sea of content. Traditional interfaces assume uniformity that doesn’t align with our diverse society.

Artificial intelligence (AI) has long been the key to bridging this gap. Voice assistants, screen readers, and browser extensions have tried to remove barriers to access. Yet, these solutions relied on cloud-based services, introducing privacy concerns over personal data sent to the cloud, cost barriers tied to cloud computing, unpredictable network latency and availability, and a one-size-fits-all approach that rarely adapts to the users’ needs.

Fortunately, the last two years have rewritten the rules of what’s possible. Compact, efficient open-source models can now run locally on consumer devices instead of being restricted to specialized hardware. At the same time, modern browser technologies like WebGPU and increasingly optimized AI runtimes such as MediaPipe and ONNX Runtime enable accelerated on-device inference. Together, these advancements support a new class of AI-powered experiences that run entirely within the browser, respecting users’ privacy, responding in real time, and adapting to individual needs.

This article introduces Hey AI, the first browser extension built on these principles. It blends AI-powered local voice transcription, language and content understanding, and real-time interaction into a fully browser-native copilot experience. The extension addresses the immediate challenge of voice-driven accessibility and lays the groundwork for future interaction modes and experiences, such as eye-based control. The result is a proof of concept that serves as a stepping stone toward a genuinely inclusive human-centric web.

Introducing the Hey AI copilot

See it in action

Imagine starting your day with the news by saying: “Open Google News. Click Search, type Presidential race, and Submit. What is the bottom line?” With Hey AI, local AI detects the commands, transcribes them, executes them, processes the prompt with an LLM, and synthesizes a concise voice summary on the spot. Everything happens within your browser, with no cost per request and no risk of sending sensitive personal data to third parties.

The possibilities aren’t limited to non-standard input methods. For example, any user can benefit from critical content inspection (e.g., “Is this a scam?”), which protects against misinformation and social engineering attacks, or from summarization and focused content extraction. We can make the web more navigable, trustworthy, helpful, and accommodating to individual needs.

Why browser-native?

The browser-native approach doesn’t aim to replicate cloud-based capabilities. Instead, it seeks to improve on them:

  • Privacy — all data stays on your device. No personal information, such as recordings or content, ever leaves your device.
  • Responsiveness — commands are carried out locally without network latency, making it feel like an integrated feature rather than a remote addition.
  • Cost and availability — local inference eliminates fees and dependency on remote service uptime. It works anywhere, anytime, regardless of service availability or connectivity.
  • Personalization — cloud-based tools cater to the common denominator. Since the browser-native copilot runs locally, you can fine-tune it to your voice, interests, and habits, molding it to your preferences. Moreover, you can customize its capabilities to your needs, fueled by an open-source community.

Under the hood

Copilots are complex beasts. They must capture user input, interpret it, apply the correct context, execute corresponding actions, and then provide feedback, all while running performantly in the browser. To understand what makes this possible, we’ll review the entire flow outlined in the architecture diagram.

Browser-native copilot architecture diagram (Image by author)

Capturing user input

All interactions start with the user’s input, which Hey AI continuously listens for through the microphone. However, raw audio is a messy stream of mostly long silences and background noise. Without careful filtering and processing, the copilot wastes resources and responds slowly to genuine user input.

To mitigate these concerns, the copilot employs the Silero v5 voice activity detection (VAD) model via the vad-web library, built on Transformers.js. VAD pinpoints when someone is speaking, stripping out the excess noise and yielding clean audio segments that are likely to contain speech.
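As a rough illustration of this gating step (not Hey AI’s actual code), the frame-level speech probabilities a model like Silero emits can be turned into padded speech segments with a simple threshold pass. The function name, threshold, and padding below are hypothetical:

```javascript
// Sketch of VAD-style gating: given per-frame speech probabilities
// (as a model like Silero would emit), return [start, end] frame ranges
// that likely contain speech. Threshold and padding values are illustrative;
// a little padding avoids clipping word edges.
function speechSegments(probs, { threshold = 0.5, padFrames = 2 } = {}) {
  const segments = [];
  let start = null;
  for (let i = 0; i < probs.length; i++) {
    const speaking = probs[i] >= threshold;
    if (speaking && start === null) start = i;
    if (!speaking && start !== null) {
      segments.push([
        Math.max(0, start - padFrames),
        Math.min(probs.length - 1, i - 1 + padFrames),
      ]);
      start = null;
    }
  }
  // Flush a segment still open at the end of the stream.
  if (start !== null) segments.push([Math.max(0, start - padFrames), probs.length - 1]);
  return segments;
}
```

Only the frames inside these ranges would be handed to the transcription stage, which is where the resource savings come from.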

Unfortunately, we can’t process these audio segments directly since the browser’s audio processing ecosystem is severely lacking. Hence, we must first transcribe the captured voice segments into text to unlock the rich text-based ecosystem of NLP and LLMs.

Transcription is performed by a small variant of the Whisper speech-to-text (STT) model. It runs on ONNX Runtime with multi-threading, delivering dozens to hundreds of tokens per second, fast enough for real-time transcription that doesn’t leave the user waiting.

Once speech is detected and transcribed, we must identify whether it’s directed at the copilot. Hey AI solves this with wake words (default or custom): their presence in a transcription indicates that the following text should be processed for intent recognition.
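The wake-word gate can be sketched as a small string check on the transcript. The `extractCommand` helper and its default wake word below are hypothetical, not the extension’s actual API:

```javascript
// Sketch of wake-word gating on a transcription. If a wake word is present,
// return the text after it (the candidate command) for intent recognition;
// otherwise return null, meaning the speech wasn't addressed to the copilot.
function extractCommand(transcript, wakeWords = ['hey ai']) {
  const lower = transcript.toLowerCase();
  for (const w of wakeWords) {
    const idx = lower.indexOf(w);
    if (idx !== -1) {
      // Strip the wake word plus any leading punctuation/whitespace.
      return transcript.slice(idx + w.length).replace(/^[\s,.:]+/, '');
    }
  }
  return null;
}
```

A real implementation would also need to tolerate mistranscribed wake words (e.g., fuzzy matching), which this sketch omits.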

Intent recognition

Intent recognition is challenging, particularly when the copilot supports over 20 command types (e.g., navigation, clicks, completions, question answering, etc.), and users often chain multiple commands within the same prompt. When coupled with occasional mistranscriptions, simple rule-based matching falls short.

To tackle this complexity and understand the meaning behind the words, Hey AI utilizes a two-layered approach:

  • NLP engine (based on Compromise) — quickly identifies simple commands (like “Open YouTube”), shortening response times and conserving resources for more complex tasks.
  • LLM (Gemma 2 2B using WebLLM) — handles the more complex commands that the NLP engine cannot confidently identify. It can understand ambiguous requests, tolerate minor parsing errors, and ask for clarification if the intent is unclear.

This approach pairs speed (from the fast NLP engine) and robustness (from the LLM), ensuring Hey AI handles everything from simple instructions to intricate multi-step tasks.
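A minimal sketch of this two-layered routing, with hypothetical rule patterns and an injected stand-in for the LLM layer:

```javascript
// Two-layer intent recognition sketch: cheap rules first, LLM fallback.
// The rule patterns and the llmFallback signature are illustrative,
// not the actual Compromise/WebLLM integration.
const rules = [
  { intent: 'navigate', re: /^open\s+(.+)$/i },
  { intent: 'click',    re: /^click\s+(.+)$/i },
  { intent: 'scroll',   re: /^scroll\s+(up|down)$/i },
];

function recognizeIntent(command, llmFallback) {
  for (const { intent, re } of rules) {
    const m = command.trim().match(re);
    if (m) return { intent, arg: m[1], source: 'nlp' }; // fast path
  }
  // Ambiguous or multi-step commands fall through to the slower LLM layer.
  return { ...llmFallback(command), source: 'llm' };
}
```

The `source` field makes the trade-off explicit: most everyday commands never pay the LLM’s latency cost.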

Controlling the browser

Understanding what the user wants is only half the battle. Once the copilot knows the user’s intent (scroll through the page, zoom in, and so on), it must collect any additional required context and dispatch the commands to the browser on the user’s behalf. Browser-level actions rely on direct APIs (such as opening or muting tabs). Page-level actions are mainly carried out by injected scripts, each with a dedicated responsibility (e.g., DOM manipulation or simulating keyboard- and mouse-based interaction).
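The split between browser-level and page-level actions suggests a dispatcher keyed by intent. The sketch below injects plain functions as handlers so the routing is visible and testable; in a real extension the browser-level handlers would call extension APIs (e.g., chrome.tabs) and the page-level ones would message injected content scripts:

```javascript
// Sketch of routing recognized intents to browser-level or page-level
// handlers. Handler maps are injected, so the routing logic stands alone;
// the handler names and result shape are hypothetical.
function makeDispatcher(browserHandlers, pageHandlers) {
  return function dispatch({ intent, arg }) {
    if (intent in browserHandlers) return browserHandlers[intent](arg);
    if (intent in pageHandlers) return pageHandlers[intent](arg);
    return { ok: false, error: `unsupported intent: ${intent}` };
  };
}
```

Keeping the two handler maps separate mirrors the extension architecture: one side has extension-API privileges, the other runs inside the page.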

AI-driven commands, such as question answering and completion, are fulfilled by the LLM. As the most computationally demanding component, it relies on WebGPU to maintain near real-time performance (roughly 200 tokens per second for encoding and 30 for decoding).

Providing feedback

Feedback transforms the copilot from a one-way command executor to a two-way conversational partner:

  • Output from AI-driven commands is synthesized into speech. Hey AI utilizes the Piper text-to-speech (TTS) model via sherpa-onnx, which offers a selection of 923 human-like voices, letting users fit the copilot’s persona to their preference.
  • Status reports are communicated with small, non-intrusive toast overlays (e.g., “Successfully executed switch tab”)
  • Mode changes (e.g., “waiting for command”) are indicated with subtle audio cues.

Challenges and lessons learned

Building copilots, especially when constrained to browser-native tools and runtimes, is far from simple and introduces many previously unexplored challenges.

On-device AI

The most significant hurdle compared to cloud-based solutions is the resource constraints of the users’ devices.

  • Potency — cloud LLMs, such as ChatGPT and Gemini, boast impressive capabilities powered by immense computing resources. Commodity user hardware can’t compete, constraining browser-native solutions to much smaller and more focused models that utilize various optimization techniques like quantization. Luckily, these models are consistently improving, with the latest models exhibiting capabilities previously reserved for their much larger peers.
  • Performance — although hardware is cheaper and more accessible than ever, the system requirements for small LLMs are still relatively high, generally targeting top-tier devices to reach adequate performance. Gradually, optimizations in AI runtimes, particularly GPU support and WASM multi-threading, enable complex use cases. Hey AI takes advantage of both and distributes the load between CPU and GPU (to an even 3GB each), extending the range of supported devices. In addition, to maximize perceived performance, the extension executes processing ahead of time so inference results are available immediately when needed (e.g., buffering synthesized segments during TTS).
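The ahead-of-time idea can be illustrated with TTS buffering: split an answer into sentence-sized chunks so the next chunk can be synthesized while the current one plays. The splitting heuristic below is a hypothetical sketch, not Piper’s or Hey AI’s actual logic:

```javascript
// Sketch of chunking text for ahead-of-time TTS synthesis: split on sentence
// boundaries, then greedily merge short sentences up to maxLen characters so
// each chunk is a reasonable unit to synthesize while the previous one plays.
// maxLen and the sentence regex are illustrative.
function ttsChunks(text, maxLen = 80) {
  const sentences = text.match(/[^.!?]+[.!?]?/g) || [];
  const chunks = [];
  let current = '';
  for (const s of sentences) {
    const candidate = (current + s).trim();
    if (current && candidate.length > maxLen) {
      chunks.push(current.trim()); // current chunk is full; start a new one
      current = s;
    } else {
      current = candidate;
    }
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```

Synthesizing chunk *n+1* while chunk *n* plays hides most of the TTS latency behind playback, which is the perceived-performance win described above.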

Individuality

Unlike standardized mouse clicks or keystrokes, non-traditional inputs such as voice and gaze vary considerably. Users differ in language, accent, pacing, connotation, and thought process. These differences fall into two main categories:

  • Technical — for example, given differences in user pacing, how long should the copilot wait before deciding that speech has ended? Any fixed threshold trades inclusivity against longer response times and potentially higher error rates from the extra noise it admits.
  • Intent — a transcription can mean different things for different users depending on context. For instance, “search for it” may refer to finding the word “it” on the current page, a reference to something the user is looking at, or a Google search.

Overall, accessibility is not one-size-fits-all. Personalizing these parameters for each user, whether through static configuration or a dynamic algorithm, is critical to tackling these challenges effectively.
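One way such dynamic personalization could work (a hypothetical sketch, not Hey AI’s implementation) is to adapt the end-of-speech silence threshold to each user’s observed pauses with an exponential moving average:

```javascript
// Sketch of per-user adaptation of the end-of-speech silence threshold:
// track an exponential moving average of the user's observed mid-utterance
// pauses and set the cutoff a margin above it. All constants are illustrative.
function makePauseAdapter({ initialMs = 800, alpha = 0.2, marginMs = 250 } = {}) {
  let avgPause = initialMs;
  return {
    // Call with the duration of each observed pause (in ms).
    observe(pauseMs) {
      avgPause = alpha * pauseMs + (1 - alpha) * avgPause;
    },
    // Silence longer than this is treated as end of speech.
    threshold() {
      return Math.round(avgPause + marginMs);
    },
  };
}
```

A fast talker’s threshold drifts down (snappier responses), while a slower talker’s drifts up (fewer premature cutoffs), addressing the fixed-threshold trade-off described above.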

Web heterogeneity

The web is an ever-changing Wild West. Websites have hundreds of ways to implement the same functionality, each potentially requiring a unique user interaction. Unfortunately, this means copilots can’t support all websites equally. To extend support, copilots must be robust to unpredictable structure and DOM interaction models.

Safety

AI safety, especially when accessing web data, is a never-ending chase. AI-based solutions must do their best to avoid unintended or harmful behavior, such as navigating to malicious pages or adversarial LLM poisoning. Hey AI keeps the user in the driver’s seat by focusing on granular instructions rather than amorphous tasks.

Conclusion

Merely two years ago, browser-native AI seemed more of an experiment than a practical solution. Since then, leaps in AI efficiency, browser capabilities, runtime optimizations, and a growing open-source ecosystem have unlocked complex on-device use cases previously thought impossible.

This paradigm shift requires a new perspective on the use of local AI. Striving to match the potency of cloud-based systems is a distraction, as consumer devices will always trail cloud computing. Instead, we should focus on the cases where browser-native AI delivers distinctive value: privacy protection, cost savings, higher availability, and personalized experiences.

While there are challenges, Hey AI demonstrates that browser-native AI is a viable tool for reshaping the browser experience. AI-powered accessibility and inclusion are already within reach. However, this is only the beginning — new modalities (like eye-tracking), progressively intelligent features, and a more robust framework are on the horizon.

Thanks for reading! I’ll soon release the source code for Hey AI on GitHub. Contributions and feedback are always welcome. Stay tuned for upcoming releases in this project, such as gaze-based control.

I’m eager to connect with individuals or organizations that could benefit from this accessibility tool or its future developments. Please reach out!


Published via Towards AI
