
NLP News Cypher | 07.19.20

Last Updated on July 27, 2023 by Editorial Team

Author(s): Ricky Costa

Originally published on Towards AI.

Photo by mohammad alizade on Unsplash

NATURAL LANGUAGE PROCESSING (NLP) WEEKLY NEWSLETTER

NLP News Cypher | 07.19.20

Modularity

Twitter’s Help Desk had a productive work week. How productive? 👇

very productive

In case you missed it, celebrity Twitter accounts were hacked in a bitcoin scam. As Twitter and co. scrambled to put the fire out, they deactivated all blue-check accounts. A glitch in the matrix!

A couple of days later, select Cloudflare servers went dark as the company blamed bad routing for a drop in services 🧐 (sorry, Discord). To say the least, tech had a rough week.

But that didn’t stop me from randomly browsing the darknet to find out more about the recent hacking. And I found nothing! Yay! However, I did discover that the US Secret Service purchased a 4-year contract for crypto software from Coinbase, a digital currency exchange. Yep, it’s even in the US Gov’s public filings:

beta.SAM.gov

Why does that matter? According to Benzinga, “Coinbase also collects private user data as part of the anti-money-laundering requirements on its platforms.” 🙈

Also, ICML happened! Here are some awesome papers:

Stanford AI Lab Papers and Talks at ICML 2020

The International Conference on Machine Learning (ICML) 2020 is being hosted virtually from July 13th – July 18th…

ai.stanford.edu

Highlight: Graph-based, Self-Supervised Program Repair from Diagnostic Feedback

Google at ICML 2020

Machine learning is a key strategic focus at Google, with highly active groups pursuing research in virtually all…

ai.googleblog.com

Highlight: REALM: Retrieval-Augmented Language Model Pre-Training

Carnegie Mellon University at ICML 2020

Carnegie Mellon University is proud to present 44 papers at the 37th International Conference on Machine Learning (ICML…

blog.ml.cmu.edu

Highlight: XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization

Facebook Research at ICML 2020

Machine learning experts from around the world are gathering virtually for the 2020 International Conference on Machine…

ai.facebook.com

Highlight: Aligned Cross Entropy for Non-Autoregressive Machine Translation

Workshop Highlight: Language in Reinforcement Learning:

LaReL 2020

Language is one of the most impressive human accomplishments and is believed to be the core to our ability to learn…

larel-ws.github.io

Honorable Mention

⚡ Super Duper NLP Repo ⚡

FYI: Another 52 notebooks were added bringing us to 233 total NLP Colabs. Thank you for contributing: Manu Romero, Abhishek Mishra, Nikhil Narayan, Oleksii Trekhleb, Chris Tran, Prasanna Kumar & Cristiano De Nobili.

The Super Duper NLP Repo

Colab notebooks for various tasks in NLP

notebooks.quantumstat.com

This Week

Adapters, AdapterHub and Modularity (w/ a Top Secret Interview)

GPT-3 Aftermath

Hyperparameter Optimization Using Simple Transformers

UIs for Machine Learning Prototyping

Visualization: Set for Stun

Graph Based Deep Learning Repo

Open-Domain Conversational AI

Dataset of the Week: Critical Role Dungeons and Dragons Dataset (CRD3)

Adapters, AdapterHub and Modularity

Once in a while, cool things happen, and this past week, the AdapterHub framework dropped. In the next evolution of NLP transfer learning, adapters deliver a new (and more modular) architecture.

Research Paper (easy read) | GitHub

The Hub:

AdapterHub – 175 adapters for 21 text tasks and 32 languages

Loading existing adapters from our repository is as simple as adding one additional line of code: model =…

adapterhub.ml

We assumed most of you would say “WTF are adapters?!”, so we were really excited to speak with AdapterHub’s author Jonas Pfeiffer to get us up to speed on everything adapters and their framework: 👇

😎

1. Hi Jonas, congrats on your new and awesome framework AdapterHub! For those out of the loop, how would you simply describe adapters?

“Adapters are small modular units encapsulated within every layer of a transformer model, which learn to store task or language specific information. This is achieved by training *only* the newly introduced adapter weights, while keeping the rest of the pre-trained model fixed. The most fascinating concept about adapters is their modularity which opens up many possibilities of combining the knowledge from many adapters trained on a multitude of tasks. In order to make training adapters and subsequently sharing them as easy as possible, we have proposed the AdapterHub framework.”
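Jonas’s description can be sketched in a few lines of plain Python (a toy illustration, not the actual AdapterHub code): a frozen layer is wrapped with a small trainable bottleneck plus a residual connection, and only the bottleneck weights would ever be updated.

```python
import random

random.seed(0)

HIDDEN = 8      # hidden size of the pre-trained layer (toy value; BERT uses 768)
BOTTLENECK = 2  # adapter's down-projection size

def make_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Frozen pre-trained layer: these weights are never updated.
W_frozen = make_matrix(HIDDEN, HIDDEN)

# Adapter: down-projection + up-projection; only these would be trained.
W_down = make_matrix(BOTTLENECK, HIDDEN)
W_up = make_matrix(HIDDEN, BOTTLENECK)

def layer_with_adapter(x):
    h = matvec(W_frozen, x)                   # frozen transformer layer
    bottleneck = matvec(W_down, h)            # down-project
    delta = matvec(W_up, bottleneck)          # up-project
    return [a + b for a, b in zip(h, delta)]  # residual connection

frozen_params = HIDDEN * HIDDEN
adapter_params = 2 * HIDDEN * BOTTLENECK
print(frozen_params, adapter_params)
```

At realistic sizes (hidden size 768 and up) the adapter is a tiny fraction of each layer’s parameters, which is where the storage numbers below come from.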

2. What are some advantages of adapters vs. traditional fine-tuning of pretrained models?

“There are many advantages for both NLP engineers in industry as well as researchers. For practitioners, maybe the most interesting property is an adapter’s small storage footprint. Adapters only require 3.5 MB (sharing >99% of the parameters across all tasks) and still achieve state-of-the-art performance. This means that, in order to store 125 adapter models on a device, you require as much space as 2 fully fine-tuned BERT models. The biggest advantage provided by adapters is due to their modularity. By freezing pre-trained model weights, traditional multi-task learning problems such as catastrophic forgetting and catastrophic interference between tasks no longer exist. Adapters can thus be trained separately on all kinds of tasks, and subsequently composed or stacked to combine the information stored within them.”
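The storage claim checks out with quick back-of-the-envelope arithmetic (the ~440 MB figure for a BERT-base checkpoint is our assumption, not from the interview):

```python
BERT_MB = 440      # approx. size of one BERT-base checkpoint (assumed)
ADAPTER_MB = 3.5   # per-adapter size quoted in the interview
N_TASKS = 125

full_finetuning = N_TASKS * BERT_MB             # one full model per task
with_adapters = BERT_MB + N_TASKS * ADAPTER_MB  # one shared base + 125 adapters

print(full_finetuning)  # 55000 MB for 125 fully fine-tuned models
print(with_adapters)    # 877.5 MB, i.e. roughly two BERT models
```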

3. When training an adapter, how does its training time compare with traditional fine-tuning?

“So far, we have observed that training adapters is often faster than full fine-tuning. For some setups we see gains of up to 30% in the time required to perform a single training step. This is partly because we do not need to perform a backward pass through the entire model (e.g., the BERT embedding matrix), but also due to PyTorch optimization strategies. Unfortunately, for smaller data sets, adapters require more steps than full fine-tuning due to the random weight initialization. We believe that efficiency is a crucial property relevant to many practical applications. This is why we are currently investigating the computational efficiency of different architectures on a larger scale, including several training and inference scenarios.”

4. You have created AdapterHub for the community to find, train and/or use adapters; where can one go to find more information on how they can help?

“Adapters have only been introduced recently, so the research field is quite new. We have tried to summarize our vision about adapters in our paper which we have published together with the AdapterHub framework. For us the AdapterHub is a long term project which we are hoping that the NLP community will be able to leverage in order to develop new research directions, building on adapters and their amazing modular capabilities.”

5. Do you view adapters as the next important step for transfer learning in NLP?

“Sharing information across tasks has a longstanding history in machine learning where multi-task learning has arguably received the most attention, coming with many issues. By first encapsulating the stored information within frozen parameters and then combining it, we are able to mitigate several of these issues. Modularity of knowledge components which can be combined on-the-fly, and at will, is extremely desirable and impactful. So yes, from my perspective adapters are a very important and promising direction for transfer learning and I strongly believe that they have the capacity to speed up research in this field.”

Fin 👀

As you can see from Jonas’s answers, this is a remarkable advancement in transfer learning and model architecture. The AdapterHub framework is built on top of Hugging Face’s library and only requires 1–2 lines of code (on top of the usual code you’re used to in the Transformers library) to initialize an adapter.

To show how easy it is to get started (with inference), we created a Colab with BERT stacked with an SST-2 adapter (binary sentiment analysis). Give it a whirl and don’t forget to check out AdapterHub and train those adapters! And thank you to Jonas for the great intro!

Colab of the Week 🤟

Google Colaboratory

colab.research.google.com

GPT-3 Aftermath

GPT-3’s getting a lot of feedback this week. On his blog, Max Woolf opines on GPT-3’s impressive abilities and where the language model falls short.

The TL;DR:

  • Blackbox issues continue.
  • Model is slow.
  • Model output still needs to be cherry-picked but at better rates than GPT-2.
  • Insensitive outputs still a problem.

Blog:

Tempering Expectations for GPT-3 and OpenAI's API

On May 29th, OpenAI released a paper on GPT-3, their next iteration of Transformers-based text generation neural…

minimaxir.com

Yoav Goldberg’s interactions with GPT-3 also deserve an honorable mention and are worth checking out on his Twitter feed: https://twitter.com/yoavgo

Hyperparameter Optimization Using Simple Transformers

From the author of the Simple Transformers library, Thilina Rajapakse explores hyperparameter optimization on the Recognizing Textual Entailment (RTE) task. An intuitive step-by-step guide (code included), complete with visualizations from the W&B Sweeps integration in his library.

Hyperparameter Optimization for Optimum Transformer Models

How to tune your hyperparameters with Simple Transformers for better Natural Language Processing.

towardsdatascience.com
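The post uses W&B Sweeps with Simple Transformers; the underlying idea can be sketched framework-free as a random search over a search space. Everything here is illustrative: the search space is loosely modeled on a transformer fine-tuning sweep, and `evaluate` is a stand-in for a real training run.

```python
import random

random.seed(42)

# Hypothetical search space, loosely mirroring a fine-tuning sweep.
SPACE = {
    "learning_rate": [1e-5, 2e-5, 3e-5, 5e-5],
    "num_train_epochs": [2, 3, 4],
    "train_batch_size": [8, 16, 32],
}

def sample_config():
    """Draw one random configuration from the search space."""
    return {name: random.choice(values) for name, values in SPACE.items()}

def evaluate(config):
    """Stand-in for a real training run returning dev-set accuracy."""
    return (0.6
            + 0.01 * config["num_train_epochs"]
            - 1000 * abs(config["learning_rate"] - 2e-5))

best_config, best_score = None, float("-inf")
for _ in range(20):  # 20 random trials
    cfg = sample_config()
    score = evaluate(cfg)
    if score > best_score:
        best_config, best_score = cfg, score

print(best_config, round(best_score, 3))
```

Tools like W&B Sweeps add smarter samplers (grid, Bayesian) plus logging and visualization on top of this loop.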

UIs for Machine Learning Prototyping

Want to add a quick UI to visualize your transformer model? Say hello to Gradio. The library includes Colab/Jupyter support so you can tunnel your inferences from Colab directly to the browser. It includes TensorFlow and PyTorch support, and can be used for CV and NLP demos alike.

FYI, 2 Gradio notebooks are included in the latest update of the Super Duper NLP Repo! Head there (or here) for a quick demo of its capabilities.

gradio-app/gradio

Quickly create customizable UI components around your TensorFlow or PyTorch models, or even arbitrary Python functions…

github.com

Paper:

LINK

Visualization: Set for Stun

A new Python visualization library is out. And the optics are impressive. If you want to venture out of the matplotlib nerd world, check out Multiplex for its stunning visuals.

All it takes to draw a simple text visualization is 10 lines of code:

  1. 3 lines to import matplotlib, Multiplex and the visualization style;
  2. 3 lines to set up the visualization object, load the data and set the style;
  3. 4 lines to draw and show the visualization, including a title and caption.

NicholasMamo/multiplex-plot

Visualizations should tell a story, and tell it in a beautiful way. Multiplex is a visualization library for Python…

github.com

Graph Based Deep Learning Repo

This is a handy resource if you want the lowdown on graphs & deep learning. This repo contains research literature/survey reviews indexed by year and conference. 👀

naganandy/graph-based-deep-learning-literature

The repository contains links to papers in graph-based deep learning. The links to conference publications are arranged in the…

github.com

Open-Domain Conversational AI

Facebook AI and its ParlAI library are world-class when it comes to open-domain conversational agents (remember Blender?). They released an illustrative overview of what it takes to build great conversational agents, covering current research and future directions.

LINK

Dataset of the Week: Critical Role Dungeons and Dragons Dataset (CRD3)

What is it?

The dataset is collected from 159 Critical Role Dungeons & Dragons episodes transcribed into text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki.
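A dialogue “turn” here is a speaker plus what they said; consecutive utterances by the same speaker collapse into one turn. A minimal stdlib sketch of that aggregation (the field names and sample lines are invented for illustration, not CRD3’s actual schema):

```python
import json

# Hypothetical transcript snippet in a CRD3-like shape.
raw = json.loads("""
[
  {"speaker": "MATT", "utterance": "You enter the cavern..."},
  {"speaker": "MATT", "utterance": "Roll for perception."},
  {"speaker": "LAURA", "utterance": "Natural twenty!"}
]
""")

def merge_turns(records):
    """Merge consecutive utterances by the same speaker into single turns."""
    turns = []
    for rec in records:
        if turns and turns[-1]["speaker"] == rec["speaker"]:
            turns[-1]["utterance"] += " " + rec["utterance"]
        else:
            turns.append(dict(rec))
    return turns

turns = merge_turns(raw)
print(len(turns))  # 2 turns: MATT (two utterances merged), then LAURA
```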


Where is it?

RevanthRameshkumar/CRD3

This paper describes the Critical Role Dungeons and Dragons Dataset (CRD3) and related analyses. Critical Role is an…

github.com

Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.

If you enjoyed this article, help us out and share with friends!

For complete coverage, follow our Twitter: @Quantum_Stat

www.quantumstat.com


Published via Towards AI
