

Last Updated on July 20, 2023 by Editorial Team

Author(s): Yoonwoo Jeong

Originally published on Towards AI.

Computer Vision

10 NeRF Papers You Should Follow-up — Part 1

Recommending 10 papers to NeRF researchers. Part 2 will be available soon.

Humans acquire most of their information through the eyes, and computer vision has achieved strong performance on many tasks, reducing the manual labor needed to handle visual data. Recently, visual rendering has become one of the most popular areas in computer vision. NeRF, an abbreviation of Neural Radiance Fields, has shown incredible rendering performance, producing reliable and realistic renderings of real-world scenes. A paper with such strong impact attracts great interest from researchers; however, it is difficult to follow up on because of the exploding number of variants. In this article, we dive into NeRF and summarize its follow-up papers, focusing on variants of NeRF. Here is the list of papers introduced in this article.

  1. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
  2. NeRF++: Analyzing and Improving Neural Radiance Fields
  3. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
  4. NSVF: Neural Sparse Voxel Fields
  5. D-NeRF: Neural Radiance Fields for Dynamic Scenes
  6. DeRF: Decomposed Radiance Fields
  7. Baking Neural Radiance Fields for Real-Time View Synthesis
  8. KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
  9. Depth-supervised NeRF: Fewer Views and Faster Training for Free
  10. Self-Calibrating Neural Radiance Fields
1. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Illustration of NeRF architecture.

Paper Link: https://arxiv.org/abs/2003.08934
Author: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
Conference: ECCV20 (Best Paper Honorable Mention)

Description

  • This is the paper that proposed the NeRF architecture. It handles view dependency: objects appear in different colors depending on the viewing direction due to properties of light such as reflection, and the model accounts for this by receiving the viewing direction as input.
  • Given a position vector and a viewing direction in canonical space, the network outputs a color and a volume density. The illustration above visualizes the network architecture.
  • The model is trained end-to-end. For each ray cast from the training images, the model renders the ray's color as a weighted sum of the colors of points sampled along the ray by stratified sampling, with weights derived from the volume densities; for details, we recommend referring to the paper. The training objective is the L2 difference between the predicted color and the ground-truth color of each ray.
  • They adopt two strategies for stable and effective training, namely hierarchical volume sampling and positional encoding.
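The two core pieces just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function names are mine, and a real NeRF evaluates an MLP between the encoding and the compositing step.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Lift each coordinate to a bank of sinusoids at exponentially growing
    frequencies so the network can represent high-frequency detail."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(np.sin(2.0 ** k * np.pi * x))
        feats.append(np.cos(2.0 ** k * np.pi * x))
    return np.concatenate(feats, axis=-1)

def render_ray(colors, sigmas, deltas):
    """Composite per-sample colors along one ray: alpha_i = 1 - exp(-sigma_i * delta_i),
    weight_i = T_i * alpha_i, where T_i is the transmittance accumulated so far."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

With a nearly opaque first sample, the ray color collapses to that sample's color, which is exactly the behavior the weighted sum is meant to capture.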

2. NeRF++: Analyzing and Improving Neural Radiance Fields

Inverted Sphere Parameterization

Paper Link: https://arxiv.org/abs/2010.07492
Author: Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun
Conference: arXiv20

Description

  • Motivation: The original NeRF has difficulty rendering outdoor scenes due to the ambiguity of setting background depth. In other words, NeRF is incapable of rendering unbounded scenes. NeRF++ addresses this problem by separating foreground and background sampling.
  • They set a unit sphere to separate the foreground and background of scenes. Points in the foreground, i.e., inside the unit sphere, are handled the same way as in the original NeRF. Points in the background, i.e., outside the unit sphere, are reparameterized using their distance from the origin. As a consequence, the foreground network receives 5-dimensional inputs (a 3D position plus a 2D viewing direction), while the background network receives 6-dimensional inputs (the 4D reparameterized position plus the viewing direction).
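The inverted-sphere reparameterization can be sketched as follows (a minimal illustration; the function name is mine, not from the paper): a background point at radius r > 1 becomes a direction on the unit sphere plus an inverse depth 1/r, so arbitrarily distant points map to a bounded domain.

```python
import numpy as np

def invert_outside_unit_sphere(p):
    """Map a background point p with r = |p| > 1 to the bounded 4-vector
    (x/r, y/r, z/r, 1/r): a unit direction plus an inverse depth in (0, 1)."""
    r = np.linalg.norm(p)
    assert r > 1.0, "only points outside the unit sphere get reparameterized"
    return np.append(p / r, 1.0 / r)
```

As r grows toward infinity, the last coordinate smoothly approaches 0, which is why the background network can handle unbounded scenes.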

3. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

Paper Link: https://arxiv.org/abs/2008.02268
Author: Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth
Conference: CVPR21 Oral

Description

  • Motivation: Although NeRF has greatly impacted rendering tasks, it requires image collections captured under static conditions, with negligible illumination changes and no transient objects in the scene. In contrast, NeRF-W enables reliable rendering from unconstrained photo collections, especially those gathered from the Internet.
  • The proposed model separates static and transient objects during the rendering process. Since transient objects do not reliably appear in the same place across images, the model computes an uncertainty for each ray and, based on it, focuses less on rays with high uncertainty.
  • In my personal opinion, defining an evaluation metric on unconstrained photo collections is a challenging and controversial problem. This paper nicely proposes an evaluation metric with convincing intuition. For more information, I highly recommend reading this paper.
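The uncertainty-based down-weighting can be sketched as a per-ray loss in which the squared color error is divided by the predicted uncertainty, with a log term preventing the network from simply inflating uncertainty everywhere. This is a simplified form under my own naming; the paper's full loss has additional terms.

```python
import numpy as np

def uncertainty_weighted_loss(pred_rgb, gt_rgb, beta):
    """Per-ray loss: squared error scaled by 1/(2*beta^2), so rays with high
    predicted uncertainty beta contribute less, plus log(beta) as a penalty
    that keeps beta from growing without bound."""
    sq_err = ((pred_rgb - gt_rgb) ** 2).sum(axis=-1)
    return float(np.mean(sq_err / (2.0 * beta ** 2) + np.log(beta)))
```

For a fixed error, raising beta lowers the residual term, which is exactly how transient-heavy rays get discounted.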

4. NSVF: Neural Sparse Voxel Fields

Paper Link: https://arxiv.org/pdf/2007.11571
Author: Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt
Conference: NeurIPS20 (Spotlight)

Description

  • Motivation: Reducing inference time is a key challenge that NeRF must overcome. The authors point out that the last points along a ray are unnecessary once the accumulated alpha value is almost 1, and that rendering can be skipped entirely in parts of the canonical space with little volume density. They experimentally demonstrate this insight on various datasets.
  • Starting from a large voxel size, the proposed model prunes voxels whose volume densities are below a certain threshold and then decreases the voxel size. Repeating this process yields a voxel octree with a small voxel size. At inference time, points located inside pruned voxels are skipped. In addition, they adopt early ray termination, which stops rendering a ray once its accumulated alpha value exceeds a threshold.
  • Well-written paper. The idea here encouraged future work to improve the inference time of NeRF.
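Early ray termination can be sketched as front-to-back compositing that stops once the remaining transmittance is negligible (a toy illustration with names of my choosing, not the paper's code):

```python
import numpy as np

def composite_with_early_termination(colors, alphas, eps=1e-3):
    """Accumulate sample colors front to back and stop once the remaining
    transmittance drops below eps: samples behind an opaque surface would
    contribute almost nothing, so evaluating them is wasted work."""
    color = np.zeros(3)
    transmittance = 1.0
    n_used = 0
    for c, a in zip(colors, alphas):
        color += transmittance * a * c
        transmittance *= 1.0 - a
        n_used += 1
        if transmittance < eps:
            break
    return color, n_used
```

When the first sample is nearly opaque, only one network evaluation is needed instead of one per sample, which is where the speedup comes from.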

5. D-NeRF: Neural Radiance Fields for Dynamic Scenes

Paper Link: https://arxiv.org/abs/2011.13961
Author: Albert Pumarola, Enric Corona, Gerard Pons-Moll, Francesc Moreno-Noguer
Conference: CVPR21

Description

  • Motivation: D-NeRF enables the rendering of dynamic scenes. The proposed model only requires a single view for each timestamp, indicating it is highly applicable to real-world rendering.
  • The idea is very simple: they add a deformation network that predicts the positional offset of a given location at a specific time. Using the location estimated by the deformation network, the canonical network predicts color and volume density.
  • The idea is simple and intuitive, and the work is meaningful because it enables rendering videos from a single view per timestamp. I hope variants of D-NeRF will soon enable rendering real-world scenes that include backgrounds.
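The two-network query described above can be sketched in one line of composition (both networks are toy stand-ins here; the function name is mine):

```python
import numpy as np

def query_dynamic(x, t, deform_net, canonical_net):
    """D-NeRF-style two-stage query: the deformation network predicts the
    offset that maps the point (x, t) back into the canonical frame, and the
    canonical network then predicts color and density at the shifted point."""
    dx = deform_net(x, t)
    return canonical_net(x + dx)
```

At t = 0 the deformation is defined to be zero, so the model reduces to a plain static NeRF on the canonical scene.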

6. DeRF: Decomposed Radiance Fields

Paper Link: https://arxiv.org/abs/2011.12490
Author: Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
Conference: CVPR21

Description

  • Motivation: Forwarding a large MLP network involves a larger computational budget than forwarding multiple small MLP networks. By decomposing scenes into multiple MLP networks, they reduce the number of computations during the rendering process.
  • They explicitly learn how much each MLP contributes when rendering the color and volume density at a particular 3D point: based on the l1-distance from the point to each part's learned decomposition parameters, they weight the colors rendered by the individual MLP networks.
  • Using a Painter's Algorithm, the model composites the decomposed parts from back to front into the output buffer.
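The blended query over decomposed parts can be sketched as a soft assignment over small networks (a toy illustration under my own naming; the paper's actual decomposition is a learned Voronoi-style partition):

```python
import numpy as np

def decomposed_query(x, part_centers, part_nets):
    """Query several small networks and blend their outputs with soft weights
    derived from the l1-distance of x to each part's center: the nearest
    part dominates, so most capacity is local."""
    dists = np.array([np.abs(x - c).sum() for c in part_centers])
    logits = -dists
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    outputs = np.array([net(x) for net in part_nets])
    return (weights[:, None] * outputs).sum(axis=0)
```

Because each query is dominated by one small network, the per-point cost approaches that of a single small MLP rather than one large one.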

7. Baking Neural Radiance Fields for Real-Time View Synthesis

Paper Link: https://arxiv.org/abs/2103.14645
Author: Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec
Conference: ICCV21

Description

  • Motivation: Although many variants of NeRF have reduced inference time, real-time rendering has remained out of reach. By separating view-dependent specular colors from precomputed diffuse colors, they minimize the computation at inference time while preserving rendering quality.
  • The authors propose specular features that implicitly encode the view-dependent (specular) color. In addition, from the trained NeRF network, they generate a sparse voxel grid based on the volume density estimated in 3D canonical space.
  • By storing the colors of the generated sparse voxel grid in a texture atlas, they skip most rendering computation at inference time; the only remaining computation is the view-dependent color produced by a tiny MLP network.
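The render-time split can be sketched as a table lookup plus one tiny network call (toy arrays and names of my choosing, not the paper's data layout):

```python
import numpy as np

def baked_color(diffuse_grid, feature_grid, voxel_idx, view_dir, tiny_mlp):
    """Diffuse color is a precomputed table lookup; only the view-dependent
    specular residual is evaluated by a tiny MLP at render time."""
    diffuse = diffuse_grid[voxel_idx]
    specular = tiny_mlp(feature_grid[voxel_idx], view_dir)
    return np.clip(diffuse + specular, 0.0, 1.0)
```

The expensive part of NeRF (hundreds of large-MLP evaluations per ray) is replaced by memory reads, which is what makes real-time rates reachable.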

8. KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Paper Link: https://arxiv.org/abs/2103.13744
Author: Christian Reiser, Songyou Peng, Yiyi Liao, Andreas Geiger
Conference: ICCV21

Description

  • Motivation: The motivation is similar to that of DeRF above. The proposed algorithm partitions the canonical space into a grid of cells, each handled by a much smaller network.
  • Setting a trained vanilla NeRF network as the teacher model, they distill it into the thousands of tiny networks, with a mapping function selecting the proper network for each 3D point. Simply distilling the teacher network fails to generate reliable scenes, so they additionally apply L2 regularization to the weights and biases of the last two layers of each network.
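The mapping function can be sketched as a uniform-grid lookup (a minimal illustration with a function name of my choosing; parameters like the grid resolution are assumptions):

```python
import numpy as np

def network_index(x, grid_res, scene_min, scene_max):
    """Map a 3D point to the grid cell (and hence the tiny MLP) responsible
    for it: normalize into [0, 1], scale by the resolution, clamp at edges."""
    u = (x - scene_min) / (scene_max - scene_min)
    idx = np.clip((u * grid_res).astype(int), 0, grid_res - 1)
    return tuple(int(i) for i in idx)
```

Because the lookup is a constant-time index computation, routing a sample to its tiny MLP adds essentially no overhead to rendering.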

9. Depth-supervised NeRF: Fewer Views and Faster Training for Free

Paper Link: https://arxiv.org/abs/2107.02791
Author: Kangle Deng, Andrew Liu, Jun-Yan Zhu, Deva Ramanan
Conference: ICCV21

Description

  • Motivation: NeRF requires a set of images and their corresponding camera poses. In general structure-from-motion (SfM) pipelines, camera poses are estimated together with sparse depth values, but NeRF ignores these estimated depths. The authors argue that supervision from the depth values helps NeRF learn accurate volume densities, resulting in better rendering accuracy.
  • From the learned volume density, they estimate the depth of each ray; the proposed depth loss is an additional explicit term that encourages these estimates to match the SfM depth values.
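The per-ray depth estimate and its supervision can be sketched as follows (a simplified form under my own naming; the paper's actual loss is a distribution-based term rather than a plain squared error):

```python
import numpy as np

def expected_depth(weights, t_vals):
    """Depth of a ray as the compositing-weight-averaged sample distance."""
    return float((weights * t_vals).sum() / (weights.sum() + 1e-10))

def depth_loss(weights, t_vals, sfm_depth):
    """Penalty pulling the rendered depth toward the sparse SfM depth."""
    return (expected_depth(weights, t_vals) - sfm_depth) ** 2
```

Since the weights come from the same volume densities used for color, supervising depth directly shapes where density concentrates along each ray.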

10. Self-Calibrating Neural Radiance Fields

Paper Link: https://arxiv.org/abs/2108.13826
Author: Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park
Conference: ICCV21

Description

  • Motivation: The general NeRF framework assumes that the camera information estimated with COLMAP is sufficiently accurate. Since the goal of NeRF is to overfit the network to a scene, the accuracy of this estimated camera information is crucial.
  • The authors propose an extended camera model that reflects complex camera noise. By jointly optimizing the camera and NeRF parameters, SCNeRF enables rendering without carefully calibrated camera information. Moreover, the algorithm achieves better rendering quality than vanilla NeRF and NeRF++.

We'll soon be back with the remaining papers in Part 2.
If this article was helpful and interesting, please follow my account.
LinkedIn: https://www.linkedin.com/in/yoonwoo-jeong-6994ab185/
GitHub: https://github.com/jeongyw12382
Mail: jyw123822@gmail.com
