From Pixels to Artificial Perception

Last Updated on July 25, 2023 by Editorial Team

Author(s): Ali Moezzi

Originally published on Towards AI.

Understanding Computer Vision Fundamentals: An Introduction to Image Intrinsics, Representation, Features, Filters, and Morphological Operations

Computer vision is a fascinating field that aims to teach machines how to “see” the world as we do. It has numerous practical applications in areas such as self-driving cars, facial recognition, object detection, and medical imaging. In this article, I will first cover what constitutes features in images and how we can manipulate them, and then discuss several priors from classical computer vision that are used in deep learning.

Unsurprisingly, we humans only perceive a portion of the electromagnetic spectrum. As a result, imaging devices are adapted to represent scenes as closely as possible to human perception. Cameras process raw sensor data through a series of operations to produce a representation familiar to human vision. Likewise, even radiographic images are calibrated to aid human perception [2].

Bayer filter procedure [source: Wikipedia]

The camera sensor produces a grayscale grid constructed through a Bayer color filter mosaic. Each cell in this grid records the intensity of a single color. Thus, instead of recording a 3×8-bit value for every pixel, each site in the Bayer filter records one color. Inspired by the fact that the human eye is more sensitive to green light, Bryce Bayer allocated twice as many filters to green as to blue or red.
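To make the 2:1 green ratio concrete, here is a small NumPy sketch of an RGGB Bayer mosaic (the RGGB layout is one common variant; real sensors differ):

```python
import numpy as np

# An RGGB Bayer mosaic tiled over a 4x4 sensor: each cell records one
# color only, and green sites occur twice as often as red or blue.
pattern = np.array([['R', 'G'],
                    ['G', 'B']])
mosaic = np.tile(pattern, (2, 2))

counts = {c: int(np.sum(mosaic == c)) for c in 'RGB'}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```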

The camera ISP then reconstructs a full-color image by applying a demosaicing algorithm. In computer vision, images are represented using different color spaces. The most common is RGB (Red, Green, Blue), where an image is interpreted as a 3D array with dimensions of width, height, and depth (3 for RGB).

Another widely used color space is BGR (Blue, Green, Red), which was popular during the early development of OpenCV. Unlike RGB, BGR stores the red channel last. After demosaicing, a series of transformations such as black level correction, intensity adjustment, white balance adjustment, color correction, gamma correction, and finally compression is applied.
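The RGB/BGR relationship is simply a reversal of the channel axis, as this minimal NumPy illustration shows:

```python
import numpy as np

# A tiny 2x2 "image" as an H x W x 3 uint8 array in RGB channel order.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# RGB -> BGR is a reversal of the last (channel) axis; this is the same
# reordering that cv2.cvtColor(img, cv2.COLOR_RGB2BGR) performs.
bgr = rgb[..., ::-1]

print(rgb.shape)   # (2, 2, 3): height, width, 3 channels
print(bgr[0, 0])   # the pure-red pixel reads [0, 0, 255] in BGR order
```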

HSV vs HSL [source: Wikipedia]

Apart from RGB and BGR, there are other color spaces like HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness). HSV isolates the value component of each pixel, which varies the most under changes in lighting conditions. The H channel in HSV remains fairly consistent, even in the presence of shadows or excessive brightness. HSL, on the other hand, represents images based on hue, saturation, and lightness values.
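The claim that H stays stable under lighting changes is easy to verify with the standard-library colorsys module (which works on float channels in [0, 1]):

```python
import colorsys

# colorsys works on float channels in [0, 1]; scale 8-bit values by 255.
r, g, b = 255, 128, 0                              # an orange pixel
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

# Halving the brightness halves V but leaves the hue H untouched,
# which is why H is robust to shadows and over-bright regions.
h2, s2, v2 = colorsys.rgb_to_hsv(r / 510, g / 510, b / 510)

print(abs(h - h2) < 1e-9)   # True: same hue
print(v, v2)                # 1.0 0.5
```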

Features

In computer vision, we look for features to identify relevant patterns or structures within an image. Features can include edges, corners, or blobs, which serve as distinctive attributes for further analysis. Edges are areas in an image where the intensity abruptly changes, often indicating object boundaries. Understanding the frequency of images is also essential, as high-frequency components correspond to the edges of objects.

Fourier Transform decomposition for a sample image [Image by author]

The Fourier Transform is used to decompose an image into frequency components. The resulting image in the frequency domain helps identify low and high-frequency regions, revealing details about the image’s content.
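A small NumPy example makes this concrete: an image containing a single horizontal spatial frequency lights up exactly one pair of bins in its 2-D spectrum.

```python
import numpy as np

# 64x64 image: a horizontal sine with 4 cycles across the width, on a
# constant offset of 128 (the DC component).
x = np.arange(64)
row = 128 + 100 * np.sin(2 * np.pi * 4 * x / 64)
img = np.tile(row, (64, 1))

spectrum = np.abs(np.fft.fft2(img))

# Energy sits at the DC bin (0, 0) and at the +/-4 horizontal-frequency
# bins (0, 4) and (0, 60); every other bin is numerical noise.
print(spectrum[0, 0])   # ~ 64*64*128 = 524288
print(spectrum[0, 4])   # ~ 64*64*100/2 = 204800
```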

Corners represent the intersection of two edges, making them highly unique and useful for feature matching. Blobs, on the other hand, are regions with extreme brightness or unique texture, providing valuable information about objects in an image.

The ultimate goal of understanding features in images is to precisely align them with our specific requirements or leverage them for other tasks such as object detection. Next, I will discuss some fundamental operations to help us through this process.

Morphological Operations

The features discussed in the previous section are sometimes not perfect. They might carry noise or extra artifacts that interfere with the rest of our pipeline. However, some simple operations on the image can enhance edges, shapes, and boundaries to reduce these artifacts. We borrow the term “Morphology” from biology, where it refers to the study of the shapes and structures of plants and animals.

Similarly, in computer vision, we have a large number of operations to help us process the image better. These operations work by moving a “structuring element” across the image. The structuring element is a small grid, fairly similar to the filters we cover in the coming sections, except that it contains only 0s and 1s to include or exclude nearby pixels. In other words, a pixel is kept only if the nearby pixels corresponding to elements with value 1 all have a value greater than 0; otherwise it is discarded.

4-neighbor structuring element [Image by author]

Dilation

Dilation grows the foreground object by adding pixels to the boundaries of that object. It is useful to connect disjointed or fragmented parts of an object.

Effect of erosion and dilation on a sample image [Image by author]

Erosion

This operation removes pixels and peels the object along the boundaries. Erosion is particularly useful for removing noise and small artifacts.
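Dilation and erosion can be sketched in a few lines of NumPy using the 4-neighbor cross structuring element; this is a minimal illustration of the keep/discard rule above, and production code would typically use cv2.dilate/cv2.erode or scipy.ndimage instead:

```python
import numpy as np

CROSS = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)   # 4-neighbor structuring element

def dilate(img, se=CROSS):
    """A pixel turns on if ANY pixel under the 1-elements of the SE is on."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            if se[dy, dx]:
                out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, se=CROSS):
    """A pixel stays on only if ALL pixels under the 1-elements are on."""
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in range(3):
        for dx in range(3):
            if se[dy, dx]:
                out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Eroding a 3x3 square leaves only its center pixel; dilating that pixel
# grows it back into the cross shape of the structuring element.
square = np.zeros((5, 5), dtype=bool)
square[1:4, 1:4] = True
print(erode(square).sum())           # 1
print(dilate(erode(square)).sum())   # 5
```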

Effect of opening and closing on a sample image [Image by author]

Opening

Opening is a compound operation consisting of erosion followed by dilation. While erosion eliminates small objects, it has the side effect of shrinking the shape of the objects that remain. Dilation alleviates this by growing the object boundaries back.

Closing

When the noise lies inside object boundaries, we may instead want to close these small gaps in the object. Unlike opening, we first apply dilation to fill in small holes, followed by erosion to peel back the object boundaries that should stay intact but were grown by the previous operation.
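If SciPy is available, its ndimage module provides both compound operations directly. This sketch shows the complementary behavior described above, with a noise speck outside the object and a hole inside it:

```python
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True     # a 5x5 foreground square
img[4, 4] = False        # a small hole inside the object
img[0, 0] = True         # an isolated noise speck outside it

opened = ndimage.binary_opening(img)  # erosion then dilation: removes the speck
closed = ndimage.binary_closing(img)  # dilation then erosion: fills the hole

print(opened[0, 0])   # False - the speck is gone
print(closed[4, 4])   # True  - the hole is filled
```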

Filters

Filters play a fundamental role in computer vision, allowing the isolation or enhancement of specific frequency ranges within an image. They are used to filter out irrelevant information, reduce noise, or amplify important features. One popular example is the Canny edge detector, which combines low-pass and high-pass filtering for accurate edge detection.

High-pass Filters

Image through a high-pass filter [Image by author]

High-pass filters amplify high-frequency components, such as edges, while suppressing low-frequency information. They emphasize changes in intensity, making them valuable for edge detection and enhancing image features. One commonly used high-pass filter is the Sobel filter, which is designed to detect vertical or horizontal edges. By calculating the gradient magnitude and direction of an image, the Sobel filter identifies the strength and orientation of edges, enabling precise edge detection.
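Here is a sketch of the horizontal Sobel kernel applied to a synthetic step edge, with a naive valid-mode convolution written out for clarity:

```python
import numpy as np

# Sobel kernel for horizontal gradients; note its elements sum to zero.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    """Naive 'valid'-mode 2-D cross-correlation, enough for a demonstration."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

# A step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 255.0

gx = conv2d(img, SOBEL_X)
print(gx[0])   # zero in the flat regions, large where the edge sits
```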

Low-pass Filters

Image through a low-pass filter [Image by author]

Low-pass filters, on the other hand, are used to reduce noise and blur an image, smoothing out high-frequency components. They suppress the high-frequency parts of an image by averaging surrounding pixels. One common low-pass filter is the averaging filter, which applies a 3×3 matrix that weights each pixel and its neighbors equally, replacing each pixel with the average of its neighborhood and producing a smoother appearance.

Another widely used filter is the Gaussian filter, which not only blurs the image but also preserves edges better compared to the averaging filter.
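A small sketch of the 3×3 averaging (box) filter spreading a noise spike across its neighborhood:

```python
import numpy as np

BOX = np.full((3, 3), 1 / 9)     # pixel and its 8 neighbors weighted equally

img = np.full((5, 5), 100.0)     # a flat grey patch...
img[2, 2] = 190.0                # ...with one bright noise pixel (+90)

# Valid-mode convolution via a sliding window (output is 3x3, no padding).
out = np.array([[np.sum(img[y:y + 3, x:x + 3] * BOX) for x in range(3)]
                for y in range(3)])

# Every 3x3 window in this small patch sees the spike once, so each
# output pixel is raised by only 90/9 = 10 units: the spike is smoothed.
print(out[1, 1])   # ~110 instead of 190
```

A Gaussian kernel would instead weight the center pixel most heavily, which is why it blurs less aggressively while preserving edges better.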

Convolution Kernels

Convolution kernels, in general, are matrices that modify an image during filtering. For edge detection, it is crucial that the elements in the kernel sum to zero, allowing the filter to compute the differences or changes between neighboring pixels.
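The zero-sum property can be checked directly with a Laplacian-style kernel: on any constant patch the response is exactly zero, so only intensity changes produce output.

```python
import numpy as np

# A Laplacian-style kernel: its elements sum to zero, so the response on
# any constant (flat) region is exactly zero.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

flat = np.full((3, 3), 42.0)
edge = np.array([[0, 0, 255],
                 [0, 0, 255],
                 [0, 0, 255]], dtype=float)

print(np.sum(flat * LAPLACIAN))   # 0.0 on a flat patch
print(np.sum(edge * LAPLACIAN))   # non-zero where intensity changes
```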

Image Intrinsics

Example of light transport [1]

Humans have an extraordinary ability to recognize objects in unfamiliar scenes regardless of viewing angle or lighting. Early experiments by Land and McCann [3] and Horn [4] gave us a deeper understanding of Retinex theory. Building on this theory, Barrow et al. decomposed an image into three intrinsic components: reflectance, orientation, and illumination. In practice, most methods treat intrinsic image decomposition as separating an image into its material-dependent properties, referred to as reflectance or albedo, and its light-dependent properties, such as shading. The theory suggests that reflectance changes produce sharp gradient changes on our retina, while shading produces smooth gradient changes [3].

Altogether, although deep learning has enabled implicit learning of many of the priors that were previously used to design image decomposition algorithms, a few priors are still often used explicitly to enhance the convergence and robustness of models [1].

First, the human visual system's perception of colors independently of illumination conditions is the basis for the assumption that albedo is piece-wise flat, with sparse, high-frequency changes [1]. Cheng et al. [6] and Ma et al. [7] used this prior to form an L1 loss function. Alternatively, a differentiable filtering operation can guide the network towards generating a piece-wise flat albedo, as seen in the work of Fan et al. [8].

The second prior ensures the smoothness of shading, based on Retinex theory [3]. In deep learning, Cheng et al. [6] modeled this as an L1 norm, while Ma et al. [7] used second-order optimization to ensure smooth shading gradients.
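As a hedged 1-D sketch (simplified from the full 2-D image losses in [6, 7]), the two priors can be written as gradient penalties: an L1 term that favors piece-wise flat albedo with sparse jumps, and an L2 term that favors smooth shading ramps.

```python
import numpy as np

def albedo_flatness_l1(albedo):
    """L1 norm of gradients: small for piece-wise flat signals."""
    return float(np.sum(np.abs(np.diff(albedo))))

def shading_smoothness_l2(shading):
    """L2 norm of gradients: small for smooth, gradual changes."""
    return float(np.sum(np.diff(shading) ** 2))

step = np.array([0.2, 0.2, 0.2, 0.9, 0.9])   # one sharp albedo-like jump
ramp = np.linspace(0.1, 0.9, 5)              # gradual shading-like change

# The L1 term prefers the sharp step; the L2 term prefers the smooth ramp.
print(albedo_flatness_l1(step) < albedo_flatness_l1(ramp))        # True
print(shading_smoothness_l2(ramp) < shading_smoothness_l2(step))  # True
```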

Fast forward to today: many researchers use these priors as inductive biases in their networks. In essence, the main lines of work are weak supervision, which relies on human annotations of similar-albedo regions in an image; full supervision, where a complete set of albedo and shading labels is provided; and self-supervision. Supervised approaches require a vast amount of data to generalize adequately [1].

Hence, self-supervised approaches, which incorporate these priors into loss functions in the form of a render loss or an image formation loss, can decompose intrinsic components more efficiently.
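As a hedged sketch, assuming the common Lambertian image formation model I = A · S (a per-pixel product of albedo and shading), the self-supervised reconstruction term can be written as:

```python
import numpy as np

def image_formation_loss(img, albedo, shading):
    """Mean squared error between the image and its re-rendered decomposition."""
    return float(np.mean((img - albedo * shading) ** 2))

albedo  = np.array([[0.8, 0.2], [0.8, 0.2]])   # material-dependent reflectance
shading = np.array([[1.0, 1.0], [0.5, 0.5]])   # light-dependent shading
img = albedo * shading                          # a perfectly consistent pair

print(image_formation_loss(img, albedo, shading))   # 0.0
```

A network predicting albedo and shading can minimize this term without any ground-truth decomposition, since the input image itself supervises the reconstruction.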

Synthetic images and labels from SUNCG-PBR dataset [9]

Final Thoughts

Deep Learning has helped us perform computer vision tasks in an end-to-end fashion. Nonetheless, the challenge of generalization in Deep Learning has transformed the role of legacy computer vision techniques from primarily tedious feature engineering to data augmentation or as priors embedded in models. By mastering these concepts, we can develop a deeper understanding of how these building blocks can elevate our computer vision pipelines.

I will write more articles in CS. If you’re as passionate about the industry as I am ^^ and find my articles informative, be sure to hit that follow button on Medium and continue the conversation in the comments if you have any questions. Don’t hesitate to reach out to me directly on LinkedIn!

References:

[1] Garces, E., Rodriguez-Pardo, C., Casas, D., & Lopez-Moreno, J. (2022). A survey on intrinsic images: Delving deep into lambert and beyond. International Journal of Computer Vision, 130(3), 836–868.

[2] Oala, L., Aversa, M., Nobis, G., Willis, K., Neuenschwander, Y., Buck, M., … & Sanguinetti, B. Data Models for Dataset Drift Controls in Machine Learning With Optical Images. Transactions on Machine Learning Research.

[3] Land, E.H., & McCann, J.J. (1971). Lightness and retinex theory. Journal of the Optical Society of America, 61 1, 1–11.

[4] Horn, B.K. (1974). Determining lightness from an image. Comput. Graph. Image Process., 3, 277–299.

[5] Tang, Y., Salakhutdinov, R., & Hinton, G. (2012). Deep lambertian networks. arXiv preprint arXiv:1206.6445.

[6] Cheng, L., Zhang, C., & Liao, Z. (2018). Intrinsic image transformation via scale space decomposition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 656–665).

[7] Ma, W. C., Chu, H., Zhou, B., Urtasun, R., & Torralba, A. (2018). Single image intrinsic decomposition without a single intrinsic image. In Proceedings of the European Conference on computer vision (ECCV) (pp. 201–217).

[8] Fan, Q., Yang, J., Hua, G., Chen, B., & Wipf, D. (2018). Revisiting deep intrinsic image decompositions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8944–8952).

[9] Sengupta, S., Gu, J., Kim, K., Liu, G., Jacobs, D. W., & Kautz, J. (2019). Neural inverse rendering of an indoor scene from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 8598–8607).

Published via Towards AI
