

Last Updated on July 25, 2023 by Editorial Team

Author(s): Michał Oleszak

Originally published on Towards AI.

AI Pulse #1: DINOv2, All The LLMs & Open-Source AI

A new foundational model for computer vision, making sense of the spree of open-source LLMs, and should AI be open-source?

AI Pulse is also available at pulseofai.substack.com.

In this edition:

  • DINOv2, a universal computer vision backbone;
  • A spree of open-source LLMs emerges following LLaMa’s leak;
  • Should AI models be open-sourced?

TL;DR

📢 Meta releases the second version of their self-DIstillation with NO labels (DINO) model, which can be used as a generic computer vision backbone without the need to fine-tune it.
📝 Paper: https://arxiv.org/abs/2304.07193
💻 Code: https://github.com/facebookresearch/dinov2
👀 Demo: https://dinov2.metademolab.com/

The News

DINOv2 is a family of models that learn visual features from unlabeled data. These features can then be used out of the box for a wide range of downstream tasks, including image classification, segmentation, and depth estimation. The models show interesting properties, such as understanding an object’s parts and the scene geometry, which make them suitable backbones for even more complex tasks.

The novelty is that the DINOv2 backbone, pre-trained in a self-supervised way, does not require fine-tuning. One can take it as-is and, for example, build a small linear classifier on top of it to solve any image classification task. This contrasts with previous self-supervised architectures, which typically require fine-tuning the entire network’s weights, including the backbone, to perform well on downstream tasks.

Meta open-sourced not only the training code but also trained models in a range of sizes.
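The out-of-the-box usage described above boils down to linear probing: features come from a frozen backbone, and only a small linear head is trained. Below is a minimal NumPy sketch of that idea. The random-projection "backbone" and the toy dataset are stand-ins invented for illustration; the real model would be loaded from the released repository (e.g. via PyTorch Hub) and its weights would likewise never be updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed random projection.
# (The real backbone would come from the released DINOv2 repo; its weights
# would likewise never be updated.)
W_backbone = rng.normal(size=(784, 384)) / np.sqrt(784)

def extract_features(x):
    """Frozen feature extractor: W_backbone is never trained."""
    return np.tanh(x @ W_backbone)

# Toy dataset: the label depends only on the first input coordinate.
x = rng.normal(size=(200, 784))
y = (x[:, 0] > 0).astype(float)

feats = extract_features(x)  # computed once; the backbone stays fixed

# Linear probe: plain logistic regression trained on top of the features.
w, b, lr = np.zeros(384), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
    w -= lr * feats.T @ (p - y) / len(y)        # only the probe is updated
    b -= lr * (p - y).mean()

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"linear-probe training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all gradient updates touch only the small linear head, which is what makes reusing one pre-trained backbone across many tasks cheap.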

Delving Deeper

Self-supervised learning (SSL) is a learning paradigm in which the model is trained to learn features from unlabeled data. This is very convenient for use cases in which data annotation is hard or expensive, such as medical diagnosis. But SSL techniques have also yielded performance improvements in other scenarios thanks to the fact that they can learn from larger datasets and are not influenced by biased or incorrect annotations.

DINOv2 builds heavily on top of its first version. Indeed, the authors openly state that most of the technical contributions of v2 aim at accelerating and stabilizing the training. Just like v1, DINOv2 is trained in a self-distillation process with no labels:

  • Two Vision Transformers (ViTs) are instantiated with the same architecture: the Teacher and the Student.
  • A number of random crops are cut out from each training image. Some of them are global crops and contain a large part of the original image, while others are local crops that comprise just a small part.
  • All the crops are passed through the Student network, and only global crops are passed through the Teacher.
  • The output representations from both networks are compared with a cross-entropy loss. The Student’s weights are updated based on this loss, encouraging it to produce outputs more similar to the Teacher’s. The Teacher’s weights, on the other hand, are updated as an exponential moving average of the Student’s weights.
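The steps above can be sketched in a few lines. This toy NumPy version swaps the two ViTs for linear-plus-softmax networks and uses random masks as "crops"; it is meant only to show the flow of the Student/Teacher updates, not to reproduce DINOv2 (which also uses temperature scaling, output centering, and other stabilizers omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)
dim_in, dim_out = 32, 16

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-ins for the Student and Teacher ViTs: identical linear+softmax nets.
student_w = rng.normal(scale=0.1, size=(dim_in, dim_out))
teacher_w = student_w.copy()   # same architecture, same initialization
init_w = student_w.copy()

lr, ema = 0.5, 0.99
for step in range(200):
    image = rng.normal(size=dim_in)
    # "Crops" as random masks: global crops keep most of the image,
    # local crops keep only a small part.
    global_crops = [image * (rng.random(dim_in) > 0.1) for _ in range(2)]
    local_crops = [image * (rng.random(dim_in) > 0.7) for _ in range(4)]

    # Only global crops go through the Teacher...
    teacher_out = [softmax(c @ teacher_w) for c in global_crops]

    # ...while the Student sees every crop.
    grad = np.zeros_like(student_w)
    for crop in global_crops + local_crops:
        p_s = softmax(crop @ student_w)
        for p_t in teacher_out:
            # Gradient of the cross-entropy H(p_t, p_s) for a linear+softmax net
            grad += np.outer(crop, p_s - p_t)
    n_pairs = len(global_crops + local_crops) * len(teacher_out)
    student_w -= lr * grad / n_pairs              # Student: gradient step

    # Teacher: exponential moving average of the Student's weights.
    teacher_w = ema * teacher_w + (1 - ema) * student_w
```

The exponential moving average keeps the Teacher a slowly-moving, smoothed copy of the Student, which in DINO-style training helps avoid the trivial solution where both networks collapse to a constant output.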

DINOv2’s main advantage over its predecessor is the dataset it had available for pre-training. The authors note that most SSL developments so far have been made in the context of pre-training on ImageNet, whose lack of diversity might lead to overfitting to a few dominant modes. To address this, they implement a simple yet effective clustering mechanism that allows them to collect a curated, diverse image set.
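A toy version of such curation: embed every image, then greedily drop near-duplicates by cosine similarity so that a few dominant visual modes cannot swamp the dataset. This is not Meta's exact pipeline (the paper describes a clustering-and-retrieval setup), and the "embeddings" below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image embeddings": three tight clusters plus a few outliers,
# mimicking a web-scraped pool dominated by a handful of visual modes.
centers = rng.normal(size=(3, 64))
pool = np.vstack([c + 0.05 * rng.normal(size=(50, 64)) for c in centers]
                 + [rng.normal(size=(10, 64))])

def curate(embeddings, threshold=0.9):
    """Greedy near-duplicate filtering: keep an image only if its cosine
    similarity to every already-kept image is below the threshold."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, v in enumerate(normed):
        if all(v @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

kept = curate(pool)
print(f"kept {len(kept)} of {len(pool)} images")
```

Each dense cluster collapses to roughly one representative while the scattered outliers survive, so the curated set is far smaller but much more diverse than the raw pool.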

Behind the News

Meta has been leading the research on self-supervised methods for computer vision for some time. In 2021 Yann LeCun, Meta’s Chief AI Scientist, published what is now a famous blog post titled Self-supervised learning: The dark matter of intelligence. In it, LeCun claimed that SSL is one of the most promising ways to build background knowledge and approximate a form of common sense in AI systems.

Since then, Meta’s researchers have released many successful SSL architectures, including MoCo and DINO. Last week, they summarized their expertise on the topic in The Self-Supervised Learning Cookbook.

A spree of open-source LLMs

TL;DR

📢 At the end of February this year, Meta announced LLaMa, its answer to OpenAI’s GPT models. Initially, LLaMa was not intended to be open-sourced, but a week after its announcement, the model leaked on 4chan, setting off a spree of open-source LLMs that build on top of it. This piece will help you make sense of this abundance of Large Language Models and associated projects.

  1. Alpaca
    🌐 https://crfm.stanford.edu/2023/03/13/alpaca.html
    A fine-tuned LLaMa trained to follow instructions. Specifically, Meta’s 7B LLaMa was fine-tuned on 52K instruction-following demonstrations generated with OpenAI’s text-davinci-003 model. It is noteworthy how the authors took advantage of the synergy created by the abundance of LLMs: they built their model by using one LLM to generate the training data for fine-tuning another.
  2. Vicuna
    🌐 https://vicuna.lmsys.org/
    Another fine-tuned LLaMa, this time on conversations between ChatGPT and its users. Specifically, Meta’s LLaMa has been fine-tuned on the data shared by ChatGPT users at sharegpt.com. Reasonably, the model can be expected to mimic ChatGPT’s behavior. The authors used GPT-4 to assess Vicuna and found that it reaches roughly 90% of ChatGPT’s quality.
  3. Koala
    🌐 https://bair.berkeley.edu/blog/2023/04/03/koala/
    Similar to Vicuna, Koala is a LLaMa fine-tuned on publicly available conversations. On top of ShareGPT conversations, it also uses a set of other datasets. The authors’ main finding is that more data is not always better: a Koala version trained only on high-quality data outperforms a version fine-tuned on additional, uncurated datasets.
  4. GPT4-x-Alpaca
    🌐 https://huggingface.co/chavinlo/gpt4-x-alpaca
    Just like Alpaca was trained by fine-tuning LLaMa to follow instructions, GPT4-x-Alpaca is a LLaMa fine-tuned on the GPTeacher data, a collection of instruction-following datasets generated by GPT-4.
  5. ColossalChat
    🌐 https://github.com/hpcaitech/ColossalAI
    A model based on LLaMa. The authors release not only the chatbot itself but also the entire training pipeline, including the Reinforcement Learning from Human Feedback (RLHF) component.
  6. ChatLLama
    🌐 https://github.com/juncongmoo/chatllama
    A LLaMa fine-tuned with RLHF, just like ChatGPT. The authors publish the training code, allowing everyone to train their own ChatGPT-like model. What’s more, the training runs on a single GPU and is supposedly 15 times faster than that of ChatGPT.
  7. OpenAssistant
    🌐 https://open-assistant.io/
    A project meant to give everyone access to chatbots. As part of the effort, the authors release a large dataset, OpenAssistant Conversations, and ask everyone to contribute by submitting, ranking, and labeling model prompts and responses.
  8. FreedomGPT
    🌐 https://www.freedomgpt.com/
    A version of Alpaca accompanied by a simple UI, allowing users to run the uncensored model locally and privately.
  9. WizardLM
    🌐 https://arxiv.org/abs/2304.12244
    Another LLaMa fine-tuned on instruction-following data. This time, the authors used another LLM to generate instructions of varying complexity: starting from a set of simple instructions, they had a model rewrite them step by step into progressively more complex ones.
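A recurring ingredient across Alpaca and its descendants is the instruction-formatted training example. The helper below renders a prompt in an Alpaca-style layout; the header wording is paraphrased from the project's published template, so treat the exact strings as an assumption, not the verbatim format.

```python
def format_example(instruction, inp=None):
    """Render one instruction-following example in an Alpaca-style layout
    (paraphrased; the verbatim template lives in the Alpaca repository)."""
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.")
    if inp:  # some examples carry an extra input field (e.g. a text to edit)
        return (f"{header}\n\n### Instruction:\n{instruction}"
                f"\n\n### Input:\n{inp}\n\n### Response:\n")
    return f"{header}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = format_example("Summarize the DINOv2 paper in one sentence.")
print(prompt)
```

During fine-tuning, the model's loss is computed on the text that should follow the final "### Response:" marker, which is what teaches a plain language model to behave like an instruction follower.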

Should AI models be open-sourced?

TL;DR

📢 The explosion of generative models we have witnessed in the past months has sparked a discussion about their accessibility. “Nature” makes an important contribution to the debate, speaking in favor of open-sourcing AI.

Debating AI access

Generative AI has existed for some time, but the Cambrian explosion we are witnessing these days took off quite recently, when end-users were given the opportunity to interact with the technology directly.

It all started with image-generating models such as DALL-E 2, Stable Diffusion, and Midjourney. Then, the time of Large Language Models came with the release of ChatGPT, followed by a number of similar chatbots. Some of them, including OpenAI’s GPT-based models and Google’s Bard, are paywalled, while others, most notably many of the LLMs built on top of Meta’s LLaMa model, are freely available.

There are as many advocates of open-sourcing AI models as there are critics of the idea. The former often point out that wide access to new algorithms accelerates both research progress, as scientists build on each other’s work, and market adoption, as companies can easily build AI-based products. The critics, on the other hand, warn against bad actors using open-source technology for unethical or dangerous ventures.

Nature’s Voice for Open-Source AI

A new voice adds to the debate in the form of an article published on the website of the journal Nature. In it, the author advocates for open access to AI models for everyone, presenting the following arguments.

  • Providing unrestricted access to AI models lets researchers examine a model’s inner workings, adjust its code, and identify bugs. Active involvement and oversight by the scientific community can help ensure the security of such models over time.
  • Open-source AI models are crucial for reproducing scientific findings, since the owners of closed AI systems can modify their product, or the data used to train it, causing its outputs to change unpredictably.
  • The use of proprietary AI in scientific research raises concerning ethical issues: the texts and images used to train these models are often undisclosed, and could include private information exchanged among social media users, or material generated by children who cannot consent to sharing their data.

The article goes on to call on scientists to move away from using proprietary AI in their own work where possible and switch to open models. It also urges governments to increase funding for projects oriented toward producing open-source models for research.

Our take on it

Perhaps there is a right place for both proprietary and open-source AI, just like with other forms of software. Some proponents of open source talk about the “Linux moment” of generative models, referring to the surge in popularity of free access to source code sparked by the Linux operating system. But after all, despite Linux’s popularity among developers, the proprietary Microsoft Windows is still the number one OS on the market, followed by macOS.

Closed models can provide a ton of value for society and their creators at the same time, thus not eliminating the incentive to innovate. They just need to be properly validated and approved for safety. Is the AI Certification Engineer a job of the not-so-distant future?

Thanks for reading! AI Pulse is also available as a free newsletter on Substack. If you liked it, help me improve by subscribing and sharing it with colleagues and friends.


Published via Towards AI
