CV in Autonomous Vehicles

Last Updated on July 25, 2023 by Editorial Team

Author(s): Ankit Sirmorya

Originally published on Towards AI.

Introduction

Source [12]

In the field of artificial intelligence (AI), computer vision (CV) is a method for analyzing digital images, videos, and other visual inputs to extract meaningful information and to take action or make recommendations based on that information. In the same way that AI enables computers to think, computer vision enables them to see, observe, and comprehend [1]. CV is widely used in self-driving cars: many driving technologies rely on it for tasks such as object detection and lane detection. Autonomous vehicles (AVs) offer a number of benefits and can outperform humans in several respects; they cannot get distracted, drunk, or tired. An additional boost in performance for AVs comes from advances in artificial intelligence, sensor fusion, and computer vision techniques that essentially drive the vehicle [2]. AI technologies power self-driving car systems: developers use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously [3]. An autonomous vehicle must be able to reach its destination without guidance from external systems, which requires that it direct itself along a path while avoiding obstacles. To do so, it uses sensors such as cameras, RADARs, and LiDARs to perceive its surrounding environment and build an understanding of what each element might do next, and this is where computer vision techniques help. In this article, we highlight some of the work relating to computer vision in self-driving car research [4].

In 2017 alone, over 40,000 people died in the United States due to car accidents; worldwide, the figure exceeds one million. Most of these accidents could have been avoided had the drivers paid attention to their surroundings. A number of automobile brands and autonomous vehicle companies are investing billions in self-driving technology, including tech heavyweights such as Tesla, Google's Waymo, Uber, and Apple, as well as traditional carmakers like Audi, BMW, Ford, and Volvo. By 2025, self-driving cars are expected to comprise 20% of all cars sold in the United States [5].

Several components are required for self-driving technology to function properly: a long-distance radar system, ultrasonic sensors, cameras paired with image recognition software, and real-time traffic data supported by satellite imagery. Image recognition software used in conjunction with cameras enables the recognition of other vehicles, the detection of pedestrians on the road, and the detection and interpretation of traffic signs. Real-time traffic data can be used to determine the optimum route to a destination [5].

Computer Vision Architectures for Autonomous Driving

A fully autonomous vehicle must be able to perceive its environment and navigate safely on the basis of multiple sensors rather than human input.

A typical workflow of an autonomous vehicle is as shown below:

Source [9]

Autonomous vehicle models start with the sensing phase. An autonomous vehicle carries several major sensors, each with its own advantages and drawbacks, so sensors must be combined to increase reliability and safety. Most successful implementations of autonomous driving rely heavily on LiDAR for mapping, localization, and obstacle avoidance, using other sensors for peripheral functions [9]. Computer vision uses the camera for all of its tasks. Autonomous vehicles use cameras mostly for object recognition and tracking, for example, to detect lanes, traffic lights, and pedestrians. To enhance safety, existing implementations usually mount eight or more 1080p cameras around the car so that they can detect, recognize, and track objects in front of, behind, and on both sides of the vehicle. These cameras usually run at 60 Hz and, combined, generate around 1.8 GB of raw data per second [9].

The second stage, perception, takes data from the sensing stage and applies object detection and tracking to it. Autonomous vehicles rely on the perception of their surroundings to ensure safe and robust driving performance. The perception system uses object detection algorithms to accurately locate objects such as pedestrians, vehicles, traffic signs, and barriers in the vehicle's vicinity; deep learning-based object detectors play a vital role in finding and localizing these objects in real time [10]. Another perception technique is object tracking, which involves tracking moving objects in real time and accurately identifying and localizing dynamic objects in the environment surrounding the vehicle. Tracking surrounding vehicles is essential for many tasks crucial to truly autonomous driving, such as obstacle avoidance, path planning, and intent recognition [11].

The last phase is decision-making, which involves pathfinding and obstacle avoidance; obstacle avoidance is where computer vision comes in. Because safety is the paramount concern in autonomous driving, at least two levels of obstacle-avoidance mechanisms must be deployed to ensure that the vehicle will not collide with any object. The first level is proactive and is based on traffic predictions, which involve computer vision algorithms. At runtime, the traffic-prediction mechanism generates measures such as time to collision or predicted minimum distance, which the obstacle-avoidance mechanism uses to replan local paths. If the proactive mechanism fails, the second, reactive level takes over: it relies on radar data to detect an obstacle and uses that data to override the current control and avoid the detected obstacle [9].
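To make the three-stage loop concrete, here is a minimal Python sketch of how sensing, perception, and decision-making might hand data to one another. Every name, class, and threshold below is an illustrative placeholder rather than part of any real AV stack:

```python
# A minimal sketch of the sensing -> perception -> decision-making loop.
# All names and numbers are illustrative placeholders, not a real AV stack.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "pedestrian", "vehicle"
    distance_m: float   # estimated distance to the object
    ttc_s: float        # predicted time to collision, in seconds

def perceive(raw_sensor_data):
    """Perception phase: a real system would run a deep object detector
    and tracker here on fused camera/LiDAR/radar data."""
    return [Detection("pedestrian", distance_m=12.0, ttc_s=1.4)]

def decide(detections, min_ttc_s=2.0):
    """Proactive decision level: replan if any predicted time to collision
    falls below a safety threshold. A reactive radar-based level would
    override this one if it fails."""
    if any(d.ttc_s < min_ttc_s for d in detections):
        return "brake_and_replan"
    return "follow_planned_path"

if __name__ == "__main__":
    detections = perceive(raw_sensor_data=None)  # placeholder sensing input
    print(decide(detections))                    # -> brake_and_replan
```

In a production stack, each stage would run as its own process with strict latency budgets; the sketch only shows the data handoff between the phases.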

Source: [14]

Environment perception is another important part of a computer vision architecture. Autonomous vehicles must independently perceive their surroundings in order to provide the information needed for control decisions. Alongside laser navigation and visual navigation, radar navigation is another major method of assessing the environment. Perception is accomplished by combining multiple sensors (such as laser and radar sensors) to capture comprehensive information about the environment: the laser and radar sensors handle distance perception, while the visual sensor recognizes traffic signs. A typical recognition scheme is shown in the figure above. The self-driving car fuses data from laser, radar, and visual sensors to build a perception of the surrounding environment, including road edge stones, obstacles, road markings, and so on [14].
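As a toy illustration of this kind of fusion, the sketch below merges independent distance estimates from several sensors by inverse-variance weighting, a standard way to combine noisy measurements. The sensor variances here are made-up numbers, not values from any cited system:

```python
# A minimal sensor-fusion sketch, assuming each sensor reports an
# independent distance estimate with a known noise variance.
# Inverse-variance weighting gives more trust to less noisy sensors.

def fuse_distances(estimates):
    """estimates: list of (distance_m, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    return fused

# Illustrative readings: LiDAR is usually the most precise, radar next,
# a monocular camera the least (variances are fabricated for the example).
readings = [(25.3, 0.01), (25.8, 0.25), (24.1, 1.0)]  # (metres, variance)
print(f"fused distance: {fuse_distances(readings):.2f} m")
```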

Computer Vision Algorithms Used in Autonomous Vehicles

1. Regression Algorithm

Developing an image-based algorithm for prediction and feature selection in autonomous driving is extremely challenging, because images (radar or camera) play a very important role in localization and actuation. Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and the position of a given object in that image. The statistical model can be learned offline and provides fast online detection by allowing image sampling. Furthermore, it can be extended to other objects without requiring extensive human modeling. As the output of the online stage, the algorithm returns the object's position and a confidence score for its presence [13].
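A toy sketch of this offline-learn / online-predict split, using scikit-learn on synthetic data; the "image features", the true weights, and the confidence band are all fabricated for illustration:

```python
# Offline: fit a statistical model mapping image features to object
# position. Online: predict quickly, with a crude confidence estimate
# from the training residuals. Data is synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Offline stage: features might be, e.g., bounding-box centre and size.
X_train = rng.uniform(0, 1, size=(500, 3))             # illustrative features
true_w = np.array([4.0, -2.0, 1.5])
y_train = X_train @ true_w + rng.normal(0, 0.05, size=500)  # 1-D position

model = LinearRegression().fit(X_train, y_train)

# Online stage: fast prediction plus a residual-based confidence band.
residual_std = float(np.std(y_train - model.predict(X_train)))
x_new = rng.uniform(0, 1, size=(1, 3))
pos = model.predict(x_new)[0]
print(f"predicted position: {pos:.2f} (+/- {2 * residual_std:.2f}, ~95% band)")
```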

2. Pattern Recognition Algorithm (Classification)

Images captured by an autonomous vehicle's sensors contain all kinds of environmental data, so the images must be filtered in order to recognize instances of an object category and remove irrelevant data points. Pattern recognition algorithms are useful for filtering out these data points, and analyzing a dataset for patterns is an essential step before attempting to classify objects. This step is also referred to as data reduction. These algorithms reduce the dataset by detecting object edges and fitting line segments (polylines) and circular arcs to the edges. Line segments are aligned to edges up to a corner, at which point a new line segment is started; circular arcs are fit to sequences of line segments that approximate an arc. The image features (line segments and circular arcs) are then combined in various ways to form the features used to recognize an object [13].
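A minimal sketch of this edge-and-line reduction step using OpenCV's Canny edge detector and probabilistic Hough transform; `road.jpg` is a placeholder path, the thresholds are illustrative, and arc fitting is omitted for brevity:

```python
# Reduce an image to edges, then fit line segments (polylines) to them.

import math
import cv2

img = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path
if img is None:
    raise FileNotFoundError("road.jpg not found")

edges = cv2.Canny(img, threshold1=50, threshold2=150)  # detect object edges

# Fit line segments to the detected edges.
lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180,
                        threshold=50, minLineLength=40, maxLineGap=5)

print(f"{0 if lines is None else len(lines)} line segments found")
```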

3. Clustering

Sometimes the images obtained by the system are not clear, making it difficult to detect and locate objects, and classification algorithms may miss an object entirely and fail to report it to the system. The cause could be low-resolution images, very few data points, or discontinuous data [13]. Clustering algorithms are used to discover clusters in a group of data points: they detect outliers in the dataset and can assign the input to different classes. They use centroid-based and hierarchical modeling approaches to find clusters, and all of these methods exploit the inherent structure in the data to organize it into groups of maximum commonality. The most commonly used algorithms of this type are k-means and multi-class neural networks [13].
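A small sketch of centroid-based clustering on sparse 2-D points, the situation described above where detections are too fragmentary to classify directly. The two synthetic blobs stand in for, say, returns from two nearby objects:

```python
# k-means clustering of sparse, unlabeled 2-D points into two groups.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two loose blobs of synthetic points, e.g., returns from two objects.
pts = np.vstack([rng.normal([2.0, 2.0], 0.3, size=(20, 2)),
                 rng.normal([6.0, 5.0], 0.3, size=(20, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
print("cluster centres:\n", km.cluster_centers_)
print("labels:", km.labels_)
```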

4. Decision Matrix Algorithm

The decision matrix algorithm systematically analyzes, identifies, and rates the performance of relationships between sets of information and values. These algorithms are mainly used for decision-making: whether a car needs to brake or turn left is based on the level of confidence the algorithms have in the recognition, classification, and prediction of the next movement of objects [15]. They combine multiple decision-making models, one per task, and merge all of those decisions into a final prediction. The most commonly used algorithms are gradient boosting machines (GBM) and AdaBoost.
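A toy sketch of the boosting idea: many weak decision models are combined into one final prediction. Here AdaBoost learns a synthetic "brake / don't brake" rule from two illustrative confidence scores; the features, labels, and threshold are fabricated for the example:

```python
# AdaBoost combines many weak learners (decision stumps by default)
# into a single strong classifier for a binary driving decision.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(2)
# Features: [obstacle-recognition confidence, predicted time-to-collision]
X = rng.uniform(0, 1, size=(300, 2))
y = ((X[:, 0] > 0.7) & (X[:, 1] < 0.3)).astype(int)  # 1 = brake

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print("brake?", bool(clf.predict([[0.9, 0.1]])[0]))  # -> True
```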

5. YOLO (You Only Look Once)

YOLO is a real-time algorithm for detecting and recognizing objects within an image. It frames object detection as a regression problem, predicting bounding boxes and class probabilities directly from the image, and uses convolutional neural networks (CNNs) to recognize objects in real time [17]. It is one of the most prominent applications of computer vision in autonomous driving, covering tasks such as classification, localization, and detection [18]. The YOLO model does not have the highest accuracy, but it has been one of the most ground-breaking developments in computer vision because its detection speed is astounding. The YOLO algorithm has been trained on a total of 80 different classes, which requires a huge amount of data and extremely high computational resources, and the model runs to a considerable depth of 12 layers [18].
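A minimal YOLO inference sketch. This assumes the third-party `ultralytics` package (`pip install ultralytics`) and its pretrained `yolov8n.pt` checkpoint, which covers the same 80 COCO classes mentioned above; the image path is a placeholder:

```python
# Run a pretrained YOLO detector on one image and list what it finds.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small pretrained model, 80 COCO classes
results = model("street_scene.jpg")   # placeholder image path

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]        # class label, e.g. "person"
    print(f"{cls_name}: confidence {float(box.conf):.2f}")
```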

Benefits of Computer Vision in Autonomous Driving

Computer vision assists autonomous driving in a number of ways. Semantic segmentation, object detection, and other computer vision technologies have made it possible for autonomous vehicles to detect objects on the road and to detect lanes. CV allows the vehicle to differentiate between pedestrians, road objects, and other vehicles, and it plays an important role in allowing autonomous vehicles to park on their own; the paper on automated parking systems [6] states that automated parking is automated driving. The use of computer vision in autonomous vehicles can contribute to advanced, next-generation vehicles that overcome driving obstacles while transporting passengers to their destination without human intervention.

Computer vision can also build detailed 3D maps, and this capability plays an important role in autonomous driving. It enables self-driving vehicles to capture visual data in real time: the cameras attached to such vehicles record live footage from which computer vision constructs 3D maps. Using these maps, autonomous vehicles can understand their surroundings better, spot obstacles in their path, and opt for alternate routes [16].

Computer vision technology can gather large datasets using cameras and sensors, including location information, traffic conditions, road maintenance, crowded areas, and more. These datasets help algorithms train faster and better; many of the images help an algorithm learn object detection, image segmentation, and so on. Computer vision also allows self-driving cars to operate in low light: as soon as the system detects low-light conditions, it can shift to a low-light mode, drawing on LiDAR sensors, thermal cameras, and HDR sensors to produce high-quality images and video [16].

Limitations of Computer Vision in Autonomous Driving

In autonomous vehicles, the quality and reliability of computer vision solutions can be a matter of life or death for the driver or a pedestrian. One of the most important challenges autonomous cars face with regard to computer vision is running most of the algorithms in real time, and doing so in a cluttered and complex environment. Many machine learning models are highly complex and can be difficult to integrate into most cars. Another limitation is the lack of data, a common problem in machine learning. For autonomous vehicles, the data is a collection of labeled images of roads, pedestrians, and other vehicles; such data can be difficult to obtain and hard to accommodate on a low-compute device.

Road safety is an important aspect of autonomous driving and can be considered from several perspectives: Can self-driving vehicles compensate for contributions to crash causation by other traffic participants, as well as vehicular, roadway, and environmental factors? Can all relevant computational decisions be supplied to a self-driving vehicle? Can computational speed, constant vigilance, and lack of distractibility in self-driving vehicles make the predictive knowledge of an experienced driver irrelevant? For self-driving vehicles to be safer and better suited to driving, all of these factors must be addressed.

The prime and most basic task of computer vision algorithms is to recognize an object in a picture. Computers generally outperform humans in a number of image recognition tasks, but a few tasks of particular interest to autonomous vehicles remain hard [8]. Object recognition must be done in real time: input from a camera sometimes consists of a stream of scan lines constantly flowing from the sensor, used to display an ever-changing image, rather than a series of complete frames, so objects must be recognized without ever seeing them whole. A truck trailer is a good example of an environment element that can confuse an autonomous vehicle's neural network when it is tasked with recognizing traffic signs.

Another problem CV faces is properly identifying traffic signals and pedestrians. Identifying traffic signs quickly and in a volatile environment presents a significant challenge: signs can be dirty, covered with leaves, bent at an odd angle, or modified in any number of ways. To address pedestrian problems, the machine must not only recognize the pedestrian beyond doubt but also be able to estimate that pedestrian's pose, and the vehicle must be alert to a pedestrian's motion when it indicates an intent to cross the road.

Conclusion

Autonomous driving has achieved major breakthroughs and has progressed toward its advanced stages, with computer vision playing a major role. This article walked through different aspects of CV in autonomous driving, including several computer vision architectures. A typical autonomous vehicle workflow involves three stages: sensing, perception, and decision-making. These phases involve different CV techniques for object detection, tracking, and lane detection; these capabilities have helped cars sense the environment more accurately, and as they mature, passengers are beginning to trust the concept of fully autonomous driving. In autonomous driving, computer vision is still at an intermediate stage and needs more time to develop and deliver more precise results. The use cases we reviewed are all data-dependent and need greater accuracy for better decision-making. The main objective of computer vision here is to ensure the safety of passengers and to deliver a smooth self-driving experience. The technology has not been perfected yet, and some limitations still need to be fixed, but at the pace at which it is progressing, intelligent and reliable self-driving cars using computer vision will soon be on the roads. The chief remaining limitation is that computer vision must provide more accurate details of the present complex environment while taking very little time for detection, tracking, and segmentation.

References

  1. IBM, "What is computer vision?" https://www.ibm.com/in-en/topics/computer-vision
  2. Kohli, Puneet, and Anjali Chadha. "Enabling pedestrian safety using computer vision techniques: A case study of the 2018 Uber Inc. self-driving car crash." Future of Information and Communication Conference. Springer, Cham, 2019.
  3. Ben Lutkevich, "Self-driving car (autonomous car or driverless car)."
  4. Agarwal, Nakul, Cheng-Wei Chiang, and Abhishek Sharma. "A study on computer vision techniques for self-driving cars." International Conference on Frontier Computing. Springer, Singapore, 2018.
  5. Kanagaraj, Nitin, et al. "Deep learning using computer vision in self-driving cars for lane and traffic sign detection." International Journal of System Assurance Engineering and Management 12.6 (2021): 1011–1025.
  6. Heimberger, Markus, et al. "Computer vision in automated parking systems: Design, implementation and challenges." Image and Vision Computing 68 (2017): 88–101.
  7. Tseng, Y.-H., and Jan, S.-S. "Combination of computer vision detection and segmentation for autonomous driving." 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), 2018.
  8. Konrad Budek, "7 challenges of computer vision in self-driving cars."
  9. Liu, S., Tang, J., Zhang, Z., and Gaudiot, J.-L. "Computer architectures for autonomous driving." Computer 50.8 (2017): 18–25.
  10. Balasubramaniam, Abhishek, and Sudeep Pasricha. "Object detection in autonomous vehicles: Status and open challenges." arXiv preprint arXiv:2201.07706 (2022).
  11. Rangesh, A., and Trivedi, M. M. "No blind spots: Full-surround multi-object tracking for autonomous vehicles using cameras & LiDARs." IEEE Transactions on Intelligent Vehicles (2019).
  12. Boric, S., Schiebel, E., Schlögl, C., Hildebrandt, M., Hofer, C., and Macht, D.M. "Research in Autonomous Driving — A Historic Bibliometric View of the Research Development in Autonomous Driving." International Journal of Innovation and Economic Development 7.5 (2021): 27–44.
  13. "Machine learning algorithms in autonomous cars." https://www.visteon.com/machine-learning-algorithms-in-autonomous-cars/
  14. "The key technology toward the self-driving car." https://www.emerald.com/insight/content/doi/10.1108/IJIUS-08-2017-0008/full/html
  15. Savaram Ravindra, "The machine learning algorithms used in self-driving cars." https://www.kdnuggets.com/2017/06/machine-learning-algorithms-used-self-driving-cars.html
  16. Smriti Shrivastava, "Computer vision makes autonomous vehicles intelligent and reliable." https://www.analyticsinsight.net/computer-vision-makes-autonomous-vehicles-intelligent-and-reliable/
  17. Sharif, "Machine learning algorithms and techniques in self-driving cars | Self Driving Cars." https://www.aionlinecourse.com/tutorial/self-driving-cars/machine-learning-algorithms-and-techniques-in-self-driving-cars
  18. Sarda, A., Dixit, S., and Bhan, A. "Object detection for autonomous driving using YOLO algorithm." 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), 2021.
