
Taylor Series in AI.

Last Updated on August 9, 2024 by Editorial Team

Author(s): Surya Maddula

Originally published on Towards AI.

P.S. Read through this article a bit slowly, word by word; you’ll thank me later 😉

— — — —

Let’s see what the Taylor Series is and how it relates to its applications in AI & Processing.

“The study of Taylor series is largely about taking non-polynomial functions and finding polynomials that approximate them near some input” — 3Blue1Brown.

What? 😵‍💫

Okay, let’s try to rephrase that to understand better:

Imagine you have a really complicated function, like a curve on a graph, and you want to understand what it looks like near a certain point. The Taylor Series helps us do this by breaking the function into a bunch of smaller, easier pieces called polynomials.

It is a way to approximate a function using an infinite sum of simpler terms. These terms are calculated from the function’s value and its derivatives at a single point (which tell us the slope and how the function changes!).

Consider this:

If you have a function f(x), and you want to approximate it near a point, say at x = a, then this is what the Taylor Series looks like:

f(x) = f(a) + f’(a)(x-a) + f”(a)/2! (x-a)² + f”’(a)/3! (x-a)³ + …

Take a second to go thru that again.

Here,

  • f(a) is the value of the function at x = a
  • f’(a) is the slope at x = a
  • f”(a) is how the slope is changing at x = a

We all know that n! stands for n factorial, which is the product of all positive integers up to n.

e.g., 3! = 1 × 2 × 3 = 6

— — — —

Source: 3Blue1Brown

Let’s look at a very simple example to understand this better: the exponential function e^x.

The Taylor Series for e^x around x = 0 is:

(try formulating it yourself first, referring to the formula above 😉)

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …

Source: 3Blue1Brown

Conceptual

Think of the Taylor Series as a recipe for building a copy of a function near the point a, sort of like a stencil. The more terms (or, in this recipe, ingredients) you add, the closer your approximation gets to the original function.

So, if you want to estimate e^x for small values of x, you can just use the first few terms:

e^x ≈ 1 + x + x²/2 + x³/6

This should give you a good idea of what e^x looks like near x = 0.

Pro-Tip: Repeat this exercise a few times to better grasp the concept.
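
If you want to see this numerically, here is a minimal sketch in plain Python (standard library only) comparing the truncated series to math.exp:

```python
import math

def taylor_exp(x, n_terms=4):
    """Approximate e^x with the first n_terms of its Taylor series around 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for x in [0.1, 0.5, 1.0]:
    approx = taylor_exp(x)   # 1 + x + x²/2! + x³/3!
    exact = math.exp(x)
    print(f"x={x}: approx={approx:.6f}  exact={exact:.6f}  error={abs(exact - approx):.2e}")
```

Notice how the error grows as x moves away from 0 and shrinks as you add more terms; that is the whole point of a local approximation.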

Okay, so what? How is this useful in the real world?

Well, the Taylor Series allows us to approximate complex functions with simpler polynomials, which makes calculations easier and faster!

Here are a few examples —

Physics

Example: Pendulum Motion

Imagine a pendulum, like a clock. Scientists use math to understand how it swings. The exact math is tricky, but for small swings, the Taylor Series helps simplify it, making it easier to predict the pendulum’s motion.

So that you can be late for school.
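
To make that concrete: the exact pendulum equation involves sin(θ), and the small-swing simplification is just the first term of its Taylor series, sin(θ) ≈ θ. A tiny sketch of how close that is for small angles:

```python
import math

# For small swing angles, sin(θ) ≈ θ (the first Taylor term),
# which turns the pendulum equation into a simple linear one.
for theta in [0.05, 0.2, 0.5]:  # angles in radians
    print(f"θ={theta}: sin(θ)={math.sin(theta):.5f}  approx={theta:.5f}")
```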

Engineering

Example: Control Systems

Think about a car’s cruise control, which keeps the car at a steady speed. Engineers use the Taylor Series to simplify complex math so the system can react smoothly and keep the car at the right speed.

So that you can ignore the speed limit.

Economics

Example: Interest Rates

When banks calculate interest on savings, they sometimes use complicated formulas. The Taylor series helps simplify these calculations so they can more easily determine how much money you’ll earn!

So that the government can take the right percentage of that in taxes.

Computer Science

Example: Machine Learning

In ML, computers learn from data. The Taylor series helps simplify the math behind these learning algorithms so computers can learn faster and more effectively.

So that you become lazy and spend all day on them.

Medicine

Example: Medical Imaging

When doctors take MRI or CT scans, they receive a lot of data. The Taylor Series helps turn this data into clear images of the inside of the body, making it easier for doctors to diagnose problems!

So that you ignore their advice and walk to McDonald's (cuz you don’t run XD)

Everyday Technology

Example: GPS Systems

When you use GPS on your phone, it calculates your location using satellites. The Taylor series helps make the math simpler so your GPS can quickly and accurately tell you where you are.

So that you can lie about where you are.

Weather Forecasting

Example: Predicting Temperature

Meteorologists predict the weather using complicated math. The Taylor series helps simplify these equations, allowing them to make more accurate forecasts about temperature, rain, and wind.

So that you never open the weather app and always forget an umbrella.

— — — —

So YOU might not use the Taylor Series in the real world — ever; but it’s used every day to make your life simpler!

— — — —

Now, for the interesting bit:

How do we use the Taylor Series in AI? 🔥

You’ve already seen above how the Taylor Series is used in ML and how it helps simplify the math behind learning algorithms so computers can learn faster and more effectively.

Let’s dive deeper:

First, where can we even use this in AI?

Forget the term AI for a while. Just think of where we use the Taylor Series in everyday mathematical and engineering problems. We can later extrapolate that into how we use it in AI and Machine Learning.

We’ve already discussed how we use it in physics, engineering, economics, CS, medicine, GPS, and weather forecasting. I suggest you scroll back to that again; it’ll click more now and at the end of this article. 🖱️

In AI, we often deal with complex math problems. The Taylor series helps simplify these problems so our AI can learn and make better decisions.

Example:

Neural Network

For Training AI Models:

When we train an AI model, like a neural network, we want to improve its prediction accuracy. We do this by adjusting its parameters (like the weights and biases in a neural network) to minimize errors.

The Taylor Series helps here by letting us approximate how small changes in the parameters will affect the error. This approximation helps us find the best way to adjust the parameters to improve the model’s predictions.

Training Neural Networks:

When training a neural network, we want to minimize a loss function, which is how we measure the difference between the predicted outputs and the actual targets. To achieve this, we adjust the network’s parameters (weights and biases) to reduce the loss. This is usually done by using gradient-based optimization methods.
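
The connection to the Taylor Series is the first-order expansion of the loss. If L(w) is the loss at parameters w, then for a small change Δw:

L(w + Δw) ≈ L(w) + ∇L(w)·Δw

Choosing Δw = −η∇L(w), a small step against the gradient with learning rate η, makes the second term negative, so the approximation predicts the loss will drop. That prediction is exactly what gradient-based optimization relies on.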

Example

Imagine you’re on a big hill and you want to find the lowest point. To get there, you need to figure out which direction to walk.

  • The Hill: Think of the hill as the “loss function,” which shows how good or bad your predictions are. The steeper parts of the hill represent higher loss (bad predictions), and the flatter parts represent lower loss (better predictions).
  • Finding the Best Path: When you’re on the hill, you can’t see the whole thing, just the part right around you. To decide which way to walk, you use the slope (how steep it is) right where you are. This is like the “gradient” in ML, which tells you the direction that increases the loss the most.
  • Using the Slope: If you want to get to the lowest point, you walk in the opposite direction of the slope (since you want to go downhill). You keep taking small steps in this direction to lower the loss.

Where Does the Taylor Series Help?

The Taylor series is like having a small map that shows you how the hill looks around you. It helps you understand the local slope better, so you can make better decisions about which way to walk.

  • Simple Map: The basic Taylor series is like a simple map that shows the hill’s slope around you.
  • Detailed Map: If you want a more accurate map, you might also look at how the hill curves, which is like adding more details to your Taylor series.

1. Training AI Models: Gradient Descent

Cost Function

Same analogy again: Imagine the cost function as a hill we need to climb down to find the lowest point (the best solution). As stated, the lower the value, the better it is.

Gradient

The gradient tells us the direction of the steepest slope.

Gradient Descent:

The Taylor Series helps us approximate the cost function around a point, telling us how it changes when we adjust the parameters slightly. This approximation makes it easier to determine which direction to move in to reduce the cost.

Example:

Imagine you’re trying to adjust the angle of a ramp to make a ball roll into a target. The cost function tells you how far the ball is from the target. The Taylor series helps you understand how changing the ramp’s angle (parameters) will affect the ball’s position (cost) so you can make better adjustments.
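
Here is a minimal, self-contained sketch of that idea with a made-up quadratic cost (so the answer is easy to check); each update is the standard step suggested by the first-order Taylor approximation:

```python
def cost(w):
    # Toy cost function: lowest at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Its derivative, i.e. the slope of the "hill" at w.
    return 2.0 * (w - 3.0)

w = 0.0       # start somewhere on the hill
lr = 0.1      # learning rate: how big each step is
for step in range(25):
    w -= lr * grad(w)   # walk against the slope

print(f"final w = {w:.4f}, cost = {cost(w):.6f}")   # w approaches 3, cost approaches 0
```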

2. Making Calculations Easier

Neural networks use something called activation functions to decide whether to activate a neuron (like a switch). One common activation function is the sigmoid function.

Example

Think of the sigmoid function as a dimmer switch that adjusts light brightness based on the input: it helps a neural network decide how strongly to activate a neuron. The Taylor Series can approximate this function and speed up calculations, making it easier for the neural network to process its inputs.
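
As an illustration (the exact approximation a given framework uses will differ), here is the sigmoid next to its third-order Taylor expansion around 0, which is cheap to evaluate and very accurate for inputs near 0:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_taylor(x):
    # Third-order Taylor expansion of the sigmoid around 0:
    # σ(x) ≈ 1/2 + x/4 − x³/48
    return 0.5 + x / 4 - x**3 / 48

for x in [-1.0, -0.25, 0.0, 0.25, 1.0]:
    print(f"x={x:+.2f}: exact={sigmoid(x):.5f}  taylor={sigmoid_taylor(x):.5f}")
```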

3. Approximating Complex Functions

In Reinforcement Learning, an AI learns by trying different actions and getting rewards or penalties (trial and error). The value function estimates the expected rewards for actions.

How the Taylor Series Helps

The Taylor series approximates the value function, which can be very complex. This approximation helps the AI predict rewards more easily, allowing it to choose better actions.

Example

Imagine you’re playing a video game, and you want to predict which moves will earn you the most points. The value function helps with this prediction, and the Taylor series simplifies the calculations, making it easier to decide the best moves.

4. Handling Uncertainty: Bayesian Inference

Sometimes, we need to understand how uncertain our AI model is about its predictions. The Taylor series helps us estimate this uncertainty, making our AI more reliable.

Example: Bayesian Inference

In Bayesian inference, we update our beliefs about the AI model’s parameters based on new data. The Taylor series helps simplify these updates, making them easier to calculate.
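
One common way this shows up is the Laplace approximation: take a second-order Taylor expansion of the log-posterior around its peak and read off a Gaussian approximation from the curvature. A rough 1-D sketch with a hypothetical log-posterior (real models would plug in data):

```python
import math

def log_post(theta):
    # Hypothetical unnormalized log-posterior, peaked at theta = 2.
    return -(theta - 2.0) ** 4 - 0.5 * (theta - 2.0) ** 2

def second_deriv(f, x, h=1e-4):
    # Numerical second derivative: the curvature term of the Taylor expansion.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

theta_map = 2.0                        # the peak (MAP estimate), known by construction here
curvature = second_deriv(log_post, theta_map)
sigma = math.sqrt(-1.0 / curvature)    # Gaussian standard deviation from the curvature
print(f"Laplace approximation: Normal(mean={theta_map}, std={sigma:.4f})")
```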

Censius AI Observability analyzing the behavior of a production model. Source: Censius AI.

5. Understanding Model Behavior

The Taylor Series can also be employed to understand and interpret the behavior of machine learning models. By expanding the model’s function around a point, we can gain insights into how changes in input affect the output, which is crucial for tasks like feature importance analysis and debugging models.
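
A simple instance of this idea is a first-order Taylor expansion of the model around a given input, sometimes called a gradient-times-input explanation. A rough sketch with a hypothetical toy model:

```python
def model(x1, x2):
    # Hypothetical model: the output depends strongly on x1, weakly on x2.
    return 3.0 * x1 + 0.2 * x2 ** 2

def numerical_grad(f, x1, x2, h=1e-5):
    # Central-difference gradients (a real framework would use autodiff).
    g1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    g2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return g1, g2

x1, x2 = 1.0, 1.0
g1, g2 = numerical_grad(model, x1, x2)
# f(x + Δx) ≈ f(x) + ∇f(x)·Δx, so gradient × input scores local feature importance.
print(f"feature 1 importance: {g1 * x1:.3f}, feature 2 importance: {g2 * x2:.3f}")
```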

Specific Applications

  1. Neural Networks Training: In training neural networks, the backpropagation algorithm often uses the Taylor Series for calculating the gradients of weights.
  2. Regularization Techniques: Some regularization techniques in machine learning, like Tikhonov regularization, can be understood and derived using the Taylor Series expansion.
  3. Non-linear Models: For non-linear models, the Taylor Series provides a way to linearize the model around a point, which is useful for analysis and optimization.
  4. Algorithm Development: Advanced machine learning algorithms, like Gaussian processes and some ensemble methods, sometimes use the Taylor Series for development and refinement.

“The fundamental intuition to keep in mind is that they translate derivative information at a single point to approximation information around that point” — 3Blue1Brown

— — — —

So, through these examples, we’ve seen how the Taylor Series eases our lives, from real-world applications in engineering and computer science to how it simplifies working with and building AI.

I think of the Taylor Series as a magic tool that turns complicated math into simpler math: it helps AI learn faster, make better decisions, and handle complex problems more efficiently. That’s the understanding I arrived at from my research while drafting this article.

Now, as we approach the end, I want you to reflect back: what exactly do we mean by “Taylor Series,” where is it used in real life, and, the cherry on top, how do we use it in AI?

Read through the entire article again, and compare it with the understanding you have now; you’ll notice the difference, as I did 😉

That’s it for this time; thanks for reading, and happy learning!

References: How I learned this concept —

Taylor series | Chapter 11, Essence of calculus (youtube.com) (3Blue1Brown)

Exploring the Role of Taylor Series in Machine Learning: From Function Approximation to Model Optimization | by Everton Gomede, PhD | AI monks.io | Medium

A Gentle Introduction to Taylor Series — MachineLearningMastery.com

How is Taylor series used in deep learning? (analyticsindiamag.com)


Published via Towards AI
