
Reconstruction of Clean Images from Noisy Data: A Bayesian Inference Perspective

Last Updated on October 19, 2024 by Editorial Team

Author(s): Bhavesh Agone

Originally published on Towards AI.

An Introduction to Bayesian Analysis

In its most basic form, Bayesian Inference is a technique for statistical inference that states how likely a hypothesis is given new evidence. The method comes from Bayes’ Theorem, which provides a way to calculate the probability of an event given prior knowledge of related conditions:

Bayes’ Theorem states:

P(A|B) = P(B|A) · P(A) / P(B)

Where:
– P(A|B) is the posterior probability: the probability of the hypothesis (A) given the data (B).
– P(B|A) is the likelihood: how probable the observed data is if the hypothesis is correct.
– P(A) is the prior probability: our starting belief about the hypothesis before seeing the sample data.
– P(B) is the probability of the data, also known as the evidence.

Put simply, Bayesian Inference lets us start with a belief about something (the prior) and then update that belief as new information arrives (via the likelihood). The Bayesian method is thus particularly well suited to problems involving uncertainty and incomplete data, such as reconstructing a signal from noise.
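As a quick numeric sketch of the theorem, consider a hypothetical diagnostic test (all numbers below are made up for illustration):

```python
# Hypothetical example: a diagnostic test with assumed, illustrative numbers.
p_disease = 0.01            # P(A): prior probability of the disease
p_pos_given_disease = 0.95  # P(B|A): likelihood of a positive test if diseased
p_pos_given_healthy = 0.05  # false-positive rate for healthy patients

# P(B): total probability of a positive test (law of total probability)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # ≈ 0.161
```

Even with a fairly accurate test, the low prior keeps the posterior modest, which is exactly the kind of belief updating the Bayesian framework formalizes.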

It is named after an 18th-century mathematician, Thomas Bayes.

The Challenge: Noisy Data in Image Reconstruction

Noise in images is a common problem across a variety of fields:

– Medical Imaging: CT scans, MRIs, or X-rays can be distorted by artefacts from patient movement during the scan, hardware resolution limits, or simply low resolution.

– Satellite Imagery: Images captured by satellites can be degraded by atmospheric conditions, sensor limitations, or motion blur, making them less useful for environmental monitoring or navigation.

– Astronomy: Even the clearest views of celestial objects can be distorted by noise from the telescopes themselves and interference from Earth’s atmosphere.

Reconstructing a clean, original image from noisy data is a difficult task. Simple filtering applied directly to the noisy data removes the noise, but it typically also destroys important details or introduces artifacts of its own.
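As a minimal illustration of this trade-off (a toy example using `scipy.ndimage.uniform_filter`, not part of this article's pipeline), a plain mean filter suppresses noise in flat regions but smears a sharp edge:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
clean = np.zeros((8, 8))
clean[:, 4:] = 255.0                         # a perfectly sharp vertical edge
noisy = clean + rng.normal(0, 25, clean.shape)

smoothed = uniform_filter(noisy, size=3)     # naive 3x3 mean filtering

# Noise in the flat left region is reduced, but the edge at column 4 is
# now spread across columns 3-5: detail is lost along with the noise.
```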

The Bayesian Approach to Image Reconstruction

This is where Bayesian Inference comes into the picture. The Bayesian approach to image reconstruction combines what we already know about images (for instance, that surfaces tend to be smooth or that certain features ought to be continuous) with the noisy data we observe. It seeks the most likely “clean” image that could have led to what we see as noisy, given those prior assumptions.

  1. Prior knowledge: A prior models what we expect the clean image to look like before observing any data. For example, natural images tend to be smooth within regions and to change sharply at edges.
  2. Likelihood: The likelihood function relates the clean image to the noisy observation; it captures how the noise corrupts the image (for instance, additive Gaussian noise perturbs every pixel).
  3. Posterior Distribution: By Bayes’ Theorem, we combine the prior and the likelihood to obtain a posterior distribution over possible clean images. The image that maximizes this posterior probability is called the MAP (maximum a posteriori) estimate.
  4. Markov Chain Monte Carlo (MCMC): In practice, the posterior distribution is often intractable to compute directly, so Markov Chain Monte Carlo (MCMC) techniques are commonly used to sample from it and estimate the most probable clean image.
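The prior/likelihood/posterior steps above can be worked out in closed form for the simplest case. Assuming an independent Gaussian likelihood (observed pixel y = x + noise with variance sigma2) and a Gaussian prior per pixel (mean mu0, variance tau2; all numbers here are assumed, purely for illustration), the MAP estimate is a precision-weighted average:

```python
import numpy as np

# Minimal MAP sketch under assumed per-pixel Gaussian models (illustrative,
# not the full pipeline below): y = x + noise, noise ~ N(0, sigma2),
# prior x ~ N(mu0, tau2). The posterior is Gaussian, so the MAP estimate
# has the closed form:
#   x_map = (tau2 * y + sigma2 * mu0) / (tau2 + sigma2)
def map_estimate(y, mu0, sigma2, tau2):
    return (tau2 * y + sigma2 * mu0) / (tau2 + sigma2)

y = np.array([120.0, 130.0])  # two noisy pixel observations
x_map = map_estimate(y, mu0=100.0, sigma2=25.0, tau2=75.0)
print(x_map)  # each pixel is pulled toward the prior mean: [115. 122.5]
```

When the posterior has no such closed form (realistic image priors rarely do), MCMC sampling takes over this role.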

Example of Bayesian Image Denoising

In this implementation, let’s look at a method that combines Gaussian noise simulation with Bayesian techniques. We first add Gaussian noise to a grayscale image to simulate a noisy environment. The denoising step then uses belief propagation to iteratively refine the noisy image. This is supplemented with Total Variation (TV) regularization, Non-Local Means (NLM) denoising, and wavelet thresholding, which improve image quality while preserving essential features and edges. The final result is saved and displayed, demonstrating the effectiveness of this multi-stage denoising pipeline.

# Importing necessary libraries for Bayesian image denoising
import numpy as np
import cv2
from scipy.fft import fft2, ifft2
from skimage.restoration import denoise_tv_chambolle, denoise_wavelet, denoise_nl_means

# Define parameters for noise level, belief propagation, and denoising iterations
sigma2_init = 25              # Initial variance of the noise
alpha_init = 0.01             # Initial alpha parameter
max_iterations = 20           # Maximum iterations for the EM algorithm
convergence_threshold = 1e-4  # Convergence threshold for belief propagation

# Function to add Gaussian noise to an image
# This simulates noise in the image for testing denoising techniques
def add_gaussian_noise(image, sigma):
    noisy_image = image + np.random.normal(0, sigma, image.shape)
    return np.clip(noisy_image, 0, 255)

# Function to compute belief propagation messages with adaptive convergence
# It updates messages iteratively using Fourier transforms until convergence
def compute_messages(image, alpha, sigma2, prev_messages=None):
    height, width = image.shape
    messages = np.zeros((height, width)) if prev_messages is None else prev_messages
    fft_image = fft2(image)

    for _ in range(max_iterations):
        fft_messages = fft2(messages)
        new_messages = ifft2(fft_image * fft_messages * alpha).real
        new_messages = np.clip(new_messages, 0, 255)  # Clip to valid pixel range

        # Check for convergence based on the L2 norm
        if np.linalg.norm(new_messages - messages) < convergence_threshold:
            break

        messages = new_messages
    return messages

# Function for denoising using belief propagation plus several advanced methods:
# Total Variation (TV) regularization, Non-Local Means (NLM), and wavelet denoising
def bayesian_denoise(image, noisy_image, alpha, sigma2, max_iterations):
    denoised_image = np.copy(noisy_image)
    messages = None

    for _ in range(max_iterations):
        # Compute belief propagation messages
        messages = compute_messages(denoised_image, alpha, sigma2, prev_messages=messages)

        # Update the denoised image based on the messages
        denoised_image = (noisy_image + messages) / 2
        denoised_image = np.clip(denoised_image, 0, 255)

    # Apply Total Variation (TV) denoising to preserve edges
    denoised_image = denoise_tv_chambolle(denoised_image, weight=0.1)

    # Apply Non-Local Means (NLM) denoising for further noise reduction
    denoised_image = denoise_nl_means(denoised_image, h=1.15 * sigma2, fast_mode=True)

    # Optionally apply wavelet-based denoising
    denoised_image = denoise_wavelet(denoised_image, mode='soft', wavelet_levels=3, method='BayesShrink')

    return denoised_image

# Load the image as grayscale, then add Gaussian noise to simulate a noisy image
image_path = "img_2.png"
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)

noisy_image = add_gaussian_noise(image, sigma=25)

# Perform Bayesian denoising on the noisy image using the techniques above
denoised_image = bayesian_denoise(image, noisy_image, alpha_init, sigma2_init, max_iterations)

# Save and display the results
cv2.imwrite("denoised_image_enhanced.jpg", denoised_image)
cv2.imshow("Noisy Image", noisy_image.astype(np.uint8))
cv2.imshow("Denoised Image", denoised_image.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()

Results:

Noisy vs Denoised Image

PSNR (Peak Signal-to-Noise Ratio) is a widely used metric for measuring the quality of a reconstructed or processed image by comparing it against the original, unaltered image. It is used especially in image compression, degradation, and restoration work to assess how much smearing or distortion was introduced during processing.
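PSNR follows directly from the mean squared error between the two images. The helper below is a sketch of the standard formula (it is not taken from the article's code):

```python
import numpy as np

def psnr(original, processed, max_val=255.0):
    """PSNR in dB between two same-shaped images; higher means closer."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

Computing `psnr(image, noisy_image)` and `psnr(image, denoised_image)` before and after denoising gives a quantitative view of how much quality the pipeline recovered.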

Computation Time vs Image Size & PSNR vs Image Size

Application: Medical Image Recovery

One of the most fascinating applications of Bayesian Inference is in medical imaging, where recovered clean images are crucial for diagnosis.

For example, suppose that a CT scan has been degraded by noise due to patient motion or hardware constraints. A Bayesian framework can take this noisy scan as observed data and use prior knowledge of typical body structure (the smoothness of edges or regular organ shapes) to reconstruct the original, clean image. This is particularly useful in Bayesian tomography, where the goal is to reconstruct the 3D structure of an object, such as a human body, from noisy 2D images.

In this way, the outputs of several noisy scans can be combined with Bayesian methods to recover high-resolution, noise-free images that are far more helpful to doctors. This maximizes the chance of an accurate diagnosis while minimizing repeat scans that may be expensive or even dangerous due to radiation exposure.

Bayesian Inverse Problems: Satellite Image Reconstruction

Another interesting application of Bayesian image reconstruction is satellite imagery. Satellites often take photographs under suboptimal conditions: cloud cover, low light, or bad weather can produce noisy images that are hard to interpret or of uncertain reliability.

Bayesian Inference addresses this by modeling the noise and using prior knowledge about the Earth’s surface to infer the most likely clean image. For example, we know that some types of terrain, such as oceans or deserts, tend to have smooth textures, whereas urban areas have sharp, well-defined edges. Using the Bayesian framework, we can reconstruct a clean image that closely resembles the real landscape even from noisy data.

This proves very useful for monitoring environmental change, tracking deforestation, or assessing damage from natural disasters, all of which require timely and accurate images from space.

Benefits of Bayesian Image Reconstruction

– Probabilistic Approach: Unlike other methods, Bayesian Inference provides a probabilistic estimate of the clean image, so the uncertainty in the reconstruction can be quantified. This is valuable in applications like medicine, where knowing the confidence level of an image can inform decisions.

– Prior Information Incorporation: Bayesian methods allow prior knowledge about the image to be used, which results in more accurate reconstructions. For example, in medical imaging we may use prior information about typical anatomical structures to guide the reconstruction process.

– Versatility: The Bayesian approach can be adapted to a wide range of imaging problems, from medical scans to satellite photos, making it a versatile tool for image reconstruction across domains.

Bayesian Inference provides a robust framework for reconstructing clean data from noisy observations: by combining prior knowledge about the image with the observed data, we infer the most likely clean image. Bayesian methods are therefore important across applications, from medical imaging to satellite imaging, because they are both flexible and accurate in cleaning up noisy data.

With progress in computational techniques such as MCMC, together with better algorithms, Bayesian methods will become even easier to apply and will deliver improved performance over classic image reconstruction methods.

Whatever the application area (medicine, environmental monitoring, or astronomy), Bayesian Inference can be the tool that extracts valuable information from noisy and sparse data, leading to greater insight and, therefore, better decision-making.

Please feel free to share your thoughts in the comment section. Your suggestions are always welcome.


Published via Towards AI
