CV in Autonomous Vehicles
Last Updated on July 25, 2023 by Editorial Team
Author(s): Ankit Sirmorya
Originally published on Towards AI.
Introduction
In the field of artificial intelligence (AI), computer vision is a method for analyzing digital images, videos, and other visual inputs to extract meaningful information and to take action or make recommendations based on that information. In the same way that AI enables computers to think, computer vision enables them to see, observe, and comprehend [1]. Computer vision (CV) is widely used in self-driving cars; many driving technologies rely on it for tasks such as object detection and lane detection. Autonomous vehicles (AVs) offer a number of benefits and can prove better than humans in many respects: they do not get distracted, drunk, or tired. An additional boost in performance for AVs comes from advances in artificial intelligence, sensor fusion, and computer vision techniques that essentially self-drive the vehicle [2]. AI technologies power self-driving car systems: developers use vast amounts of data from image-recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously [3]. An autonomous vehicle must be able to reach its destination without guidance from external systems, which requires that it can steer itself along a path while avoiding obstacles. To do this, it uses sensors such as cameras, radars, and LiDARs to perceive its surrounding environment and build an understanding of what each element could do next, and this is where computer vision techniques help. In this article, we aim to shed some light on work relating to computer vision in self-driving-car research [4]. In 2017 alone, over 40,000 people died in the United States due to car accidents; worldwide, the toll exceeded a million. Most of these accidents could have been avoided if the drivers had paid attention to their surroundings.
A number of automobile brands and autonomous vehicle companies are investing billions in self-driving technology, including tech heavyweights such as Tesla, Google's Waymo, Uber, and Apple. Traditional car companies, including Audi, BMW, Ford, and Volvo, have also shown interest. By 2025, self-driving cars are expected to make up 20% of all cars sold in the United States [5]. Various components are required for self-driving technology to function properly: a long-distance radar system, ultrasonic sensors, cameras paired with image-recognition software, and real-time traffic data supported by satellite imagery. Image-recognition software used in conjunction with cameras can recognize other vehicles, recognize pedestrians on the road, and detect and interpret traffic signs, while real-time traffic data can be used to determine the optimum route to a destination [5].
Computer Vision Architectures for Autonomous Driving
A fully autonomous vehicle must be able to perceive its environment and safely navigate on the basis of multiple sensors rather than human input.
A typical workflow of an autonomous vehicle is as shown below:
Autonomous vehicle models start with the sensing phase. An autonomous vehicle carries several major sensors, each with its own advantages and drawbacks, which is why sensors must be combined to increase reliability and safety. Most successful implementations of autonomous driving rely heavily on LiDAR for mapping, localization, and obstacle avoidance, using other sensors for peripheral functions [9]. Computer vision uses the camera for all of its tasks. Autonomous vehicles use cameras mostly for object recognition and tracking, for example, to detect lanes, traffic lights, and pedestrians. To enhance safety, existing implementations usually mount eight or more 1080p cameras around the car so that they can detect, recognize, and track objects in front of, behind, and on both sides of the vehicle. These cameras usually run at 60 Hz and, combined, generate around 1.8 GB of raw data per second [9].

The second stage is the perception stage, which takes data from the sensing stage and applies object detection and tracking to it. Autonomous vehicles rely on the perception of their surroundings to ensure safe and robust driving performance. The perception system uses object-detection algorithms to accurately locate objects such as pedestrians, vehicles, traffic signs, and barriers in the vehicle's vicinity; deep-learning-based object detectors play a vital role in finding and localizing these objects in real time [10]. Another perception technique is object tracking, which involves accurately identifying and localizing dynamic objects in the environment surrounding the vehicle in real time. Tracking surrounding vehicles is essential for many tasks crucial to truly autonomous driving, such as obstacle avoidance, path planning, and intent recognition [11].

The last phase is the decision-making phase, which involves pathfinding and obstacle avoidance.
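The three-stage loop described above can be sketched as a simple pipeline. The stage functions, return types, and the brake-on-pedestrian rule below are illustrative placeholders invented for this sketch, not part of any real driving stack:

```python
# Minimal sketch of the sensing -> perception -> decision-making loop.

def sense():
    """Sensing: gather raw readings from camera, LiDAR, and radar."""
    return {"camera": "frame", "lidar": "point_cloud", "radar": "returns"}

def perceive(raw):
    """Perception: turn raw sensor data into detected/tracked objects."""
    # A real system would run object detection and tracking here.
    return [{"kind": "pedestrian", "position": (12.0, 3.5)}]

def decide(objects):
    """Decision-making: choose an action that avoids detected objects."""
    return "brake" if any(o["kind"] == "pedestrian" for o in objects) else "cruise"

command = decide(perceive(sense()))
print(command)
```

In a real vehicle each stage runs continuously and concurrently; the sequential call chain here only shows the direction of data flow.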
Obstacle avoidance is one place where computer vision enters the picture. Because safety is the paramount concern in autonomous driving, at least two levels of obstacle-avoidance mechanisms must be deployed to ensure that the vehicle will not collide with any object. The first level is proactive and is based on traffic predictions, which involve computer vision algorithms. At runtime, the traffic-prediction mechanism generates measures such as time to collision or predicted minimum distance, which the obstacle-avoidance mechanism uses to replan local paths. If the proactive mechanism fails, a second, reactive level takes over: it relies on radar data to detect an obstacle and uses that data to override the current control and avoid the detected obstacle [9].
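The time-to-collision measure mentioned above can be illustrated with simple constant-speed kinematics. This is a toy calculation for intuition only; production systems use far richer motion models:

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Seconds until the ego vehicle closes the gap to a lead object,
    assuming both keep their current speed.

    Returns None when the gap is not closing (no predicted collision).
    """
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None
    return gap_m / closing_speed

# Ego at 20 m/s, lead vehicle at 12 m/s, 40 m ahead:
print(time_to_collision(40.0, 20.0, 12.0))  # 5.0 (seconds)
```

A proactive planner would compare such a value against a safety threshold and replan the local path well before the reactive radar-based layer has to intervene.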
Environment perception is another important part of a computer vision architecture. Autonomous vehicles must independently perceive their surroundings in order to provide the information needed for control decisions. In addition to laser navigation and visual navigation, radar navigation is another major method of assessing the environment. Perception of the environment is accomplished by combining multiple sensors (such as laser and radar sensors) to capture comprehensive information about it: the laser and radar sensors are used for distance perception, while the visual sensor is used to recognize traffic signs. A typical recognition scheme is shown in the above figure. The self-driving car fuses data from laser, radar, and visual sensors to build a perception of the surrounding environment, including road edge stones (curbs), obstacles, road markings, and so on [14].
Computer Vision Algorithms Used in Autonomous Vehicles
1. Regression Algorithm
It is extremely challenging to develop an image-based algorithm for prediction and feature selection in autonomous driving because images (radar or camera) play a very important role in localization and actuation. Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and the position of a given object in that image. The statistical model can be learned offline and provides fast online detection by allowing image sampling. Furthermore, it can be extended to other objects without requiring extensive human modeling. As the output of the online stage, the algorithm returns an object's position and its confidence in the object's presence [13].
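The offline/online split described above can be sketched with ordinary least squares. The feature choice (inverse bounding-box height as a proxy for distance) and the training numbers are assumptions made up for this illustration, not from the cited work:

```python
# Offline: fit a linear model y = a*x + b relating an image feature
# (inverse bounding-box height, in 1/pixels) to object distance.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical labelled examples: box height (px) -> distance (m).
heights = [120, 60, 40, 30]
distances = [10.0, 20.0, 30.0, 40.0]
a, b = fit_line([1 / h for h in heights], distances)

# Online: a fast prediction for a new detection with a 48 px box.
print(round(a * (1 / 48) + b, 1))  # 25.0
```

The point of the statistical model is exactly this shape: all the expensive fitting happens offline, and the online stage is a cheap evaluation per detection.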
2. Pattern Recognition Algorithm (Classification)
Images captured by the autonomous vehicle's sensors contain all types of environmental data; these images must be filtered in order to recognize instances of an object category by removing irrelevant data points. Pattern-recognition algorithms are useful for detecting the relevant data points, and analyzing a data set for patterns is an essential step before attempting to classify the objects. This type of algorithm is also referred to as data reduction. These algorithms reduce the data set by detecting object edges and fitting line segments (polylines) and circular arcs to them: line segments are aligned to edges up to a corner, where a new line segment is started, and circular arcs are fit to sequences of line segments that approximate an arc. The image features (line segments and circular arcs) are then combined in various ways to form the features used for recognizing an object [13].
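The segment-up-to-a-corner idea above can be sketched with a recursive split: fit one segment to a chain of edge points, and wherever a point deviates too much, start a new segment there. This uses the classic Ramer-Douglas-Peucker split rule as an assumed stand-in for the corner detection the text describes:

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    return num / den

def fit_polyline(points, tol=1.0):
    """Split the edge chain at the point of maximum deviation until
    every segment stays within `tol` of its supporting points."""
    a, b = points[0], points[-1]
    if len(points) <= 2:
        return [a, b]
    dists = [point_line_dist(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [a, b]  # one line segment fits the whole chain
    left = fit_polyline(points[: i + 1], tol)
    return left[:-1] + fit_polyline(points[i:], tol)  # corner at points[i]

# An L-shaped edge chain: a corner should be found at (5, 0).
edge = [(x, 0) for x in range(6)] + [(5, y) for y in range(1, 6)]
print(fit_polyline(edge))  # [(0, 0), (5, 0), (5, 5)]
```

Fitting circular arcs to runs of short segments, as the text mentions, would be a further pass over this polyline.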
3. Clustering
Sometimes the images obtained by the system are not clear, making it difficult to detect and locate objects, and classification algorithms may miss an object entirely and fail to report it to the system. The reason could be low-resolution images, very few data points, or discontinuous data [13]. Clustering algorithms are used to detect clusters in a group of data points. This type of algorithm detects outliers in the dataset and can classify the input into different classes. They use centroid-based and hierarchical modeling approaches to find clusters. All of these methods exploit the inherent structure of the data to organize it into groups of maximum commonality. The most commonly used algorithms of this type are k-means and multi-class neural networks [13].
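As a concrete instance of the centroid-based approach, here is a toy k-means (Lloyd's algorithm) over 2-D points in pure Python; the data is invented, and a real pipeline would use an optimized library over much richer features:

```python
def kmeans(points, centroids, iters=10):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                  + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids

# Two well-separated blobs of hypothetical detections:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = kmeans(pts, [(0, 0), (10, 10)])
print(centers)
```

Each returned centroid lands at the mean of its blob, which is how sparse or noisy detections get grouped into object candidates.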
4. Decision Matrix Algorithm
The decision matrix algorithm systematically analyzes, identifies, and rates the performance of relationships between sets of information and values. These algorithms are mainly used for decision-making: whether the car needs to brake or turn left depends on the level of confidence these algorithms have in the recognition, classification, and prediction of the next movement of objects [15]. They contain a decision-making model for each of various tasks, and all of these decisions are then combined to give a final prediction. The most commonly used algorithms are gradient boosting machines (GBM) and AdaBoost.
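The combine-then-decide step can be sketched as a weighted score matrix. The actions, per-model scores, and weights below are invented for illustration; they are not from any cited system:

```python
def decide(action_scores, weights):
    """Combine each model's score for each candidate action with the
    given weights and return the highest-scoring action.

    action_scores: {action: [score from model 1, model 2, ...]}
    """
    combined = {
        action: sum(w * s for w, s in zip(weights, scores))
        for action, scores in action_scores.items()
    }
    return max(combined, key=combined.get)

scores = {
    "brake":     [0.9, 0.8, 0.7],  # e.g. the pedestrian model votes strongly
    "turn_left": [0.2, 0.4, 0.3],
    "continue":  [0.1, 0.3, 0.5],
}
print(decide(scores, weights=[0.5, 0.3, 0.2]))  # brake
```

Boosted ensembles such as GBM and AdaBoost follow the same spirit: many weak decisions, combined with learned weights, yield one final prediction.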
5. YOLO (You Only Look Once)
YOLO is a real-time algorithm for detecting and recognizing objects within an image. It frames object detection as a regression problem, predicting bounding boxes and class probabilities for the detected objects using convolutional neural networks (CNNs) [17]. It is one of the most prominent applications of computer vision in autonomous driving, covering tasks such as classification, localization, and detection [18]. The YOLO model does not have the highest accuracy, but it has been one of the most ground-breaking developments in computer vision because its detection speed is astounding. The YOLO algorithm has been trained on a total of 80 different classes, which requires a huge amount of data and very high computational resources, and the model runs to a considerable depth of 12 layers [18].
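Detectors like YOLO emit many overlapping candidate boxes, so a standard post-processing step is non-maximum suppression (NMS) driven by intersection-over-union (IoU). The sketch below shows the generic technique, not YOLO's exact implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it by more
    than `thresh`, and repeat; returns the kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one object, plus a distant one:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

The duplicate box (index 1) overlaps the best box by IoU ≈ 0.68 and is suppressed, which is how a single detection per object survives.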
Benefits of Computer Vision in Autonomous Driving
Computer vision assists autonomous driving in a number of ways. Technologies such as semantic segmentation and object detection have made it possible for autonomous vehicles to detect objects on the road and to detect lanes. CV allows the vehicle to differentiate between pedestrians, road objects, and other vehicles. It also plays an important role in allowing autonomous vehicles to park on their own; the paper on automated parking systems [6] states that automated parking is automated driving. The use of computer vision in autonomous vehicles can contribute to advanced, next-generation vehicles that overcome driving obstacles while transporting passengers to their destination without human intervention.

Computer vision can also create detailed 3D maps, a capability that plays an important role in autonomous driving. It enables self-driving vehicles to capture visual data in real time: the cameras attached to such vehicles record live footage, from which computer vision builds 3D maps. Using these maps, autonomous vehicles can better understand their surroundings, spot obstacles in their path, and opt for alternate routes [16].

Computer vision technology can also gather large data sets using cameras and sensors, including location information, traffic conditions, road maintenance, crowded areas, and more. These data sets help algorithms train faster and better; many of the images help the algorithm learn object detection, image segmentation, and related tasks. Finally, computer vision allows self-driving cars to perform all of their functions in low light: as soon as the system detects low-light conditions, it can shift to a low-light mode, drawing on LiDAR sensors, thermal cameras, and HDR sensors to produce high-quality images and videos [16].
Limitations of Computer Vision in Autonomous Driving
In autonomous vehicles, the quality and reliability of computer vision solutions can be a matter of life or death for the driver or a pedestrian. One of the most important computer vision challenges faced by autonomous cars is running most of the algorithms in real time, and in a cluttered and complex environment at that. Many machine learning models are highly complex and can be difficult to integrate into most cars. Another limitation is the lack of data, a common problem in machine learning. For autonomous vehicles, the data is a collection of labeled images of roads, pedestrians, and other vehicles; such data can be difficult to obtain and hard to accommodate on a low-compute device.

Road safety is an important aspect of autonomous driving, and road safety with self-driving vehicles can be considered from several perspectives: Can self-driving vehicles compensate for contributions to crash causation by other traffic participants, as well as vehicular, roadway, and environmental factors? Can all relevant computational decisions be supplied to a self-driving vehicle? Can computational speed, constant vigilance, and lack of distractibility in self-driving vehicles make the predictive knowledge of an experienced driver irrelevant? For self-driving vehicles to be safer and better suited for driving, all of these factors must be addressed.

The prime and most basic task of computer vision algorithms is to recognize an object in a picture. It is generally true that computers outperform humans in a number of image-recognition tasks, but a few tasks of particular interest to autonomous vehicles remain hard [8]. Object recognition must be done in real time, and input from a camera sometimes consists of a set of lines constantly flowing from the sensor, used to display an ever-changing image on a screen, rather than a series of complete images.
Therefore, it is sometimes necessary to recognize objects without seeing them in full. A truck trailer is a good example of an environmental element with multiple parts that can confuse an autonomous vehicle. An autonomous vehicle's neural network is also tasked with recognizing traffic signs, and properly identifying traffic signals and pedestrians is another problem CV faces. Identifying traffic signs quickly and in a volatile environment presents a significant challenge: signs can be dirty, covered with leaves, bent at an odd angle, or modified in any number of ways. To address pedestrian problems, the machine must not only recognize the pedestrian beyond doubt but also estimate that pedestrian's pose, and the vehicle must be alert to the pedestrian's motion when it indicates an intent to cross the road.
Conclusion
Autonomous driving has achieved major breakthroughs and has progressed toward the advanced stages, with computer vision playing a major role. This article walked through different aspects of CV in autonomous driving and discussed several computer vision architectures for it. A typical autonomous vehicle workflow involves three stages: sensing, perception, and decision-making, and these phases involve different CV techniques for object detection, tracking, and lane detection. Object detection, object tracking, and lane detection have helped cars sense the environment more accurately, and as these features mature, passengers are beginning to trust the concept of fully autonomous driving. In autonomous driving, computer vision is still at an intermediate stage and needs more time to develop and deliver more precise results. The use cases we reviewed are all data-dependent and need to become more accurate for better decision-making. The main objective of computer vision here is to ensure the safety of passengers and to deliver a smooth self-driving experience. The technology has not been perfected yet, and a few limitations remain, chief among them the need to provide more accurate details of complex environments while taking very little time for detection, tracking, and segmentation. But at the pace at which the technology is progressing, intelligent and reliable self-driving cars using computer vision will soon be seen on the roads.
References
- IBM: "What is computer vision," https://www.ibm.com/in-en/topics/computer-vision
- Kohli, Puneet, and Anjali Chadha. "Enabling pedestrian safety using computer vision techniques: A case study of the 2018 Uber Inc. self-driving car crash." Future of Information and Communication Conference. Springer, Cham, 2019.
- Ben Lutkevich: "Self-driving car (autonomous car or driverless car)"
- Agarwal, Nakul, Cheng-Wei Chiang, and Abhishek Sharma. "A study on computer vision techniques for self-driving cars." International Conference on Frontier Computing. Springer, Singapore, 2018.
- Kanagaraj, Nitin, et al. "Deep learning using computer vision in self-driving cars for lane and traffic sign detection." International Journal of System Assurance Engineering and Management 12.6 (2021): 1011–1025.
- Heimberger, Markus, et al. "Computer vision in automated parking systems: Design, implementation and challenges." Image and Vision Computing 68 (2017): 88–101.
- Tseng, Y.-H., & Jan, S.-S. (2018). Combination of computer vision detection and segmentation for autonomous driving. 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS).
- Konrad Budek: "7 challenges of Computer Vision in self-driving cars"
- Liu, S., Tang, J., Zhang, Z., & Gaudiot, J.-L. (2017). Computer Architectures for Autonomous Driving. Computer, 50(8), 18–25.
- Balasubramaniam, Abhishek, and Sudeep Pasricha. "Object Detection in Autonomous Vehicles: Status and Open Challenges." arXiv preprint arXiv:2201.07706 (2022).
- Rangesh, A., & Trivedi, M. M. (2019). No Blind Spots: Full-Surround Multi-Object Tracking for Autonomous Vehicles using Cameras & LiDARs. IEEE Transactions on Intelligent Vehicles.
- Boric, S., Schiebel, E., Schlögl, C., Hildebrandt, M., Hofer, C., and Macht, D. M. (2021). Research in Autonomous Driving: A Historic Bibliometric View of the Research Development in Autonomous Driving. International Journal of Innovation and Economic Development, 7(5), pp. 27–44.
- "Machine Learning Algorithms in Autonomous Cars," https://www.visteon.com/machine-learning-algorithms-in-autonomous-cars/
- "The key technology toward the self-driving car," https://www.emerald.com/insight/content/doi/10.1108/IJIUS-08-2017-0008/full/html
- Savaram Ravindra: "The Machine Learning Algorithms Used in Self-Driving Cars," https://www.kdnuggets.com/2017/06/machine-learning-algorithms-used-self-driving-cars.html
- Smriti Shrivastava: "Computer Vision Makes Autonomous Vehicles Intelligent and Reliable," https://www.analyticsinsight.net/computer-vision-makes-autonomous-vehicles-intelligent-and-reliable/
- Sharif: "Machine Learning Algorithms and Techniques in Self-Driving Cars | Self Driving Cars," https://www.aionlinecourse.com/tutorial/self-driving-cars/machine-learning-algorithms-and-techniques-in-self-driving-cars
- Sarda, A., Dixit, S., & Bhan, A. (2021). Object Detection for Autonomous Driving using YOLO algorithm. 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM).