
While Google and OpenAI Fight for the AI Bone, the Open Source Community Is Running Away with It

Last Updated on May 16, 2023 by Editorial Team

Author(s): Massimiliano Costacurta

Originally published on Towards AI.

“Hey, did you hear? They say Google and OpenAI don’t have a competitive advantage in LLMs.”
“Yeah, sure… who said it?”
“Google.”
“Wait a minute…”

Photo by Kai Wenzel on Unsplash

One week ago, SemiAnalysis released a real shocker when they made public a leaked document from Google titled “We Have No Moat, And Neither Does OpenAI.” While we can’t be sure the document is legit, it brings up some thought-provoking points about the real struggle in the world of large language models (LLMs). It’s not Google vs. OpenAI; it’s more like open-source LLMs taking on their closed-source counterparts.

This leaked document hints that both Google and OpenAI might be losing their edge to the ever-growing open-source LLM community. The reason? It’s pretty simple: open-source projects are moving at lightning speed, faster than large corporations or corporate-backed companies can match, especially since open-source projects don’t face many reputational risks. Apparently written by a Google researcher, the document emphasizes that even though Google and OpenAI have been working hard to create the most powerful language models, the open-source community is catching up at an astonishing pace. Open-source models are quicker, more adaptable, and more portable. They’ve managed to achieve great results with far fewer resources, while Google is grappling with bigger budgets and more complex models.

What’s more, having tons of researchers working together in the open makes it tougher for companies like Google and OpenAI to stay ahead of the game in terms of technology. The report says that keeping a competitive edge in tech is getting even more difficult now that cutting-edge LLM research is within reach. Research institutions around the globe are building on each other’s work, exploring the solution space in a way that’s way beyond what any single company can do. Turns out, being able to train huge models from scratch on pricey hardware isn’t the game changer it used to be, which means pretty much anyone with a cool idea can create an LLM and share it.

Alright, we’ve seen open-source projects trying to outdo their corporate counterparts before, but let’s dig a bit deeper to see if this is a genuine threat in the AI world.

Ups and downs in the world of open collaboration

Open-source software has always had its ups and downs. Some projects, like BIND, WordPress, and Firefox, have done really well, showing they can stand up against big-name enterprise products. On the flip side, projects like OpenOffice, GIMP, and OpenSolaris faced struggles and rapidly lost ground. Regardless, open-source software is still popular, with many websites using Apache web servers, BIND servers, and MySQL databases.

Now, the problem is that keeping open-source projects funded and maintained can be tricky. It takes solid planning, the right resources, and a real connection with users. If a project has a dedicated user base and passionate developers, it’s more likely to stay on top of its game and keep getting better. Back in 2018, OpenAI faced some of these hurdles and decided it was time for a change. They started looking for capital and eventually became a capped-profit company. That means they could get investments and offer investors a return capped at 100x their initial investment.

OpenAI said they needed to make these changes to fund research, support big companies, and keep things safe. So, you could argue they did what they had to do to dodge the usual open-source pitfalls. But that did not come for free: while OpenAI has made impressive progress in AI development, its increasing secrecy, lack of transparency, and limited customization options have alienated the very community it once aimed to serve.

On the other hand, Google is really into open-source software, and they are involved in quite a few open-source projects. Just look at Android, their mobile operating system. It’s built on the Linux kernel and has been a game-changer in making open-source software popular in the smartphone world. Today, most smartphones run on Android. Another awesome open-source project from Google is Kubernetes, which has become the top choice for container orchestration. It helps developers automate things like deployment, scaling, and managing containerized applications. Last but not least, let’s not forget Chromium. Google’s Chrome is built on the open-source Chromium project, and it has become super popular since its launch.

By being part of open-source projects like these, Google shows they are really into transparency, openness, and working together to create innovative and flexible software solutions. They are dedicated to making the tech world more inclusive, diverse, and accessible for everyone. For this reason, I wouldn’t be too shocked if Google decided to make their next big language model an open-source project. It could be a clever move because they’d have all their brand, marketing, and developer muscle behind it, giving OpenAI some serious competition. Of course, that’s assuming the model’s quality would be on par, which hasn’t been the case so far. What’s even more crucial is that someone else might snag that spot first. As we’ll see next, there’s a long list of newbies waiting in line.

A guided journey through the open-source LLM boom

One of the coolest aspects of the SemiAnalysis document is the recent timeline highlighting key milestones in the open-source community, particularly in the area of large language models (LLMs). It all starts with what might be considered the “big bang” of recent open-source LLM advancements: the release of LLaMA by Meta on February 24, 2023. LLaMA is an LLM family with sizes from 7B to 65B parameters that claims to require less computing power, making it ideal for testing new approaches. It was not actually released as an open-source model, but one week after the release, LLaMA’s model weights leaked to the public, and everyone got a chance to play around with it. That’s when things started snowballing.

Here is a quick summary of the milestones described in the document:

  • Artem Andreenko runs LLaMA on a Raspberry Pi (March 12, 2023)
  • Stanford releases Alpaca (March 13, 2023)
  • Georgi Gerganov releases a 4-bit quantization of LLaMA, letting it run on a MacBook CPU without a GPU (March 18, 2023)
  • Vicuna releases its 13B model, trained for just $300 (March 19, 2023)
  • Cerebras trains an open-source GPT-3-style model that outshines existing GPT-3 clones
  • LLaMA-Adapter sets a new record on multimodal ScienceQA with just 1.2M learnable parameters, using a Parameter-Efficient Fine-Tuning (PEFT) technique (March 28, 2023)
  • UC Berkeley releases Koala, a dialogue model trained entirely on free data, costing $100 to train and scoring over 50% user preference against ChatGPT (April 3, 2023)
  • Open Assistant launches a complete open stack for running RLHF (reinforcement learning from human feedback) models, achieving a 48.3% human-preference rating (April 15, 2023)

And the list goes on…
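The quantization milestone on that list is worth unpacking, because it’s the trick that lets a 7B-parameter model fit on a laptop: store each weight as a tiny integer plus one shared scale per block of weights. Here’s a minimal, hypothetical sketch of an absmax 4-bit scheme in Python — the real ggml implementation is written in C and packs two 4-bit values into each byte, so treat this purely as an illustration of the idea:

```python
import numpy as np

def quantize_4bit(weights, block_size=32):
    """Blockwise absmax quantization: map each block of floats to
    signed 4-bit integers in [-8, 7] plus one float scale per block."""
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0  # avoid dividing by zero on all-zero blocks
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q, scales, shape):
    """Recover approximate float weights from quantized blocks."""
    return (q.astype(np.float32) * scales).reshape(shape)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

At 4 bits per weight (plus a small overhead for the scales), memory drops roughly 8x compared to float32 — which is exactly why a 7B model suddenly runs on a MacBook CPU.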

Yes, you got it right. All of this happened in just over two months, proving how lively the open-source scene is. However, most of these open-source models are probably known only to insiders and haven’t hit the mainstream (I have to admit I learned about the majority of them through the leaked document too). But just a few days ago, on May 5, a possible game-changer arrived: MosaicML released MPT-7B, setting the bar high for open-source competitors (and maybe even OpenAI). What’s more, it’s licensed for commercial use, unlike LLaMA. The MPT series also supports very long inputs, with context lengths of up to 84k tokens during inference.

MosaicML put the MPT series through rigorous tests on various benchmarks, showing it can match LLaMA-7B’s high-quality standards. The base MPT-7B model is a decoder-style transformer with 6.7B parameters, trained on 1T tokens of text and code. MosaicML also released three fine-tuned versions: MPT-7B-StoryWriter-65k+ for super long context lengths in fiction; MPT-7B-Instruct for short-form instruction following; and MPT-7B-Chat, a chatbot-like model for dialogue generation.

MPT-7B was trained on the MosaicML platform in just 9.5 days using 440 GPUs, with no human intervention. The perks of these improvements are clear in the numbers. For example, MosaicML claims that MPT-7B delivers competitive performance with only 7 billion parameters, roughly 26 times fewer than OpenAI’s GPT-3 with its 175 billion. This size reduction means big cost savings, since fewer resources are needed for both training and deployment. Plus, the smaller MPT-7B model is more portable, making it easier to incorporate into various applications and platforms.
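Those parameter counts are easy to sanity-check yourself: a decoder-only transformer’s size is dominated by roughly 12·d_model² weights per layer (the attention projections plus a 4x-wide MLP), plus the token embeddings. Plugging in MPT-7B’s published configuration (32 layers, d_model = 4096, ~50k-token vocabulary) and GPT-3’s (96 layers, d_model = 12288) — treat these configs as assumptions taken from the respective announcements — gives a back-of-envelope estimate:

```python
def transformer_params(n_layers, d_model, vocab_size):
    """Rough decoder-only transformer parameter count:
    ~12*d_model^2 per layer (4 attention projections + 8 for a 4x MLP),
    plus the token-embedding matrix. Biases and norms are ignored."""
    per_layer = 12 * d_model ** 2
    return n_layers * per_layer + vocab_size * d_model

mpt_7b = transformer_params(n_layers=32, d_model=4096, vocab_size=50432)
gpt3 = transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"MPT-7B ~ {mpt_7b / 1e9:.1f}B params, GPT-3 ~ {gpt3 / 1e9:.0f}B params")
```

The estimate lands close to the advertised 6.7B and 175B figures, which is a nice check that the headline numbers aren’t marketing fluff.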

So, the million-dollar question is: can MPT-7B match the quality level we’ve come to expect from ChatGPT? We’ll see. But one thing is for sure — the open-source world of LLMs is buzzing with excitement and innovation, and it won’t be long before we’ll find out.

So, do we have a clear winner?

No, actually we don’t. I don’t think all the fears expressed in the document are necessarily well-founded. However, this isn’t exactly great news for Google either, as OpenAI still remains by far the undisputed leader in the LLM market. OpenAI made a clever and bold move in November 2022 by launching ChatGPT for public use, totally free and perhaps not yet fully secured. The move gained massive traction, making ChatGPT the fastest product ever to reach one million users, doing so in just five days, and a whopping 100 million by the end of January 2023.

Impressive numbers aside, there’s an important point to be made here: OpenAI is collecting a huge (and I mean HUGE) amount of user data. While the leaked document claims that faster, cheaper algorithms provide a competitive advantage, that’s only part of the story. In the realm of AI, what truly matters to users is the quality of information that models offer. To make better inferences, more data and feedback are needed, and that’s exactly what OpenAI is collecting.

Additionally, it’s worth noting that OpenAI has Microsoft’s backing, granting them access to a massive cloud of user data. When your AI can whip up a stunning PowerPoint presentation, a comprehensive Excel spreadsheet, or the perfect LinkedIn profile just by describing them in words, the algorithm used to accomplish that becomes irrelevant. But when you ask questions and receive incorrect responses, knowing the algorithm was trained on a mere $300 budget is hardly comforting.


Published via Towards AI
