Software, Data, and Ethics Behind Autonomous Vehicles

Article By Rei Morikawa | September 02, 2019

The first autonomous vehicle prototypes were built in the 1980s. Since then, major companies and research organizations have developed prototypes of their own: General Motors, Bosch, Nissan, Audi, Volvo, Oxford University, Google, and more. Manufacturers have also included driver-assistance features such as adaptive cruise control in recent models, so your car can follow road markings and automatically adjust its speed to maintain a safe distance from the vehicle ahead in the same lane. Within several years, perhaps we’ll start seeing fully autonomous vehicles on the roads.


Training Data for Autonomous Vehicles

An autonomous vehicle must master a wide range of machine learning tasks before it can drive safely. Multi-step AI systems allow autonomous vehicles to process image and video data in real time and coordinate safely with other vehicles on the road.

We as humans have no problem recognizing other vehicles, pedestrians, trees, road signs, and other objects when we’re driving. The algorithms that control autonomous vehicles, however, must be trained on large, custom image and video datasets before they can do the same.


Object Classification

Object classification is the first step for training machine learning models to understand the contents of an image. An example of an object classification task would be to look at a set of images and categorize them as “containing a car” or “not containing a car.” To improve object classification algorithms, you’ll need a large, annotated training dataset to serve as the ground truth. 
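
As a tiny illustration, here is what the skeleton of such a classifier might look like in PyTorch. The network architecture, input size, and random stand-in data are illustrative assumptions, not a production perception model:

```python
import torch
import torch.nn as nn

class CarClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # “car” vs. “no car”

    def forward(self, x):
        x = self.features(x)        # extract visual features
        x = x.flatten(start_dim=1)  # flatten for the linear layer
        return self.classifier(x)   # raw scores for the two classes

model = CarClassifier()
images = torch.randn(4, 3, 64, 64)   # stand-in for annotated training images
labels = torch.tensor([1, 0, 1, 0])  # ground-truth annotations
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                      # gradients for one training step
```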


Object Localization

Object localization is the process of locating objects within images, usually with bounding boxes. For bounding box annotation, human annotators draw rectangular boxes around certain objects within an image, such as vehicles, pedestrians, and cyclists. A bounding box is usually represented by its center coordinates (bx, by), height (bh), and width (bw).
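
To make the (bx, by, bh, bw) format concrete, here is a small Python sketch (the class and values are illustrative, not taken from any particular tool) that converts between center and corner coordinates and computes intersection-over-union (IoU), a common way to score a predicted box against a ground-truth annotation:

```python
from dataclasses import dataclass

@dataclass
class Box:
    bx: float  # center x
    by: float  # center y
    bh: float  # height
    bw: float  # width

    def corners(self):
        # Convert center format to (x_min, y_min, x_max, y_max).
        return (self.bx - self.bw / 2, self.by - self.bh / 2,
                self.bx + self.bw / 2, self.by + self.bh / 2)

def iou(a, b):
    ax1, ay1, ax2, ay2 = a.corners()
    bx1, by1, bx2, by2 = b.corners()
    # Overlap rectangle, clipped to zero when the boxes don't intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a.bw * a.bh + b.bw * b.bh - inter
    return inter / union if union else 0.0

print(iou(Box(50, 50, 30, 40), Box(52, 55, 30, 40)))  # ≈ 0.66
```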

[Image: example of bounding box annotation around vehicles on the road]

The Technology Behind Autonomous Vehicles

How does an autonomous vehicle know where to go? We can answer this question by going through four main steps: perception, localization, path planning, and vehicle command.


Perception 

The automobile industry often refers to computer vision as “perception,” because cameras are the primary tool that vehicles use to create a digital map and perceive their environment. Every autonomous vehicle must be equipped with cameras. For example, Tesla equips its cars with 8 surround cameras that provide 360-degree visibility around the car at up to 250 meters of range. These cameras are key to essential tasks like finding lanes, estimating road curvature, and detecting and classifying traffic lights.
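
As a rough illustration of one of those tasks, here is a classical lane-finding sketch built on OpenCV’s Canny edge detector and Hough line transform. Production perception stacks use learned models instead; the image path, region of interest, and tuning values below are placeholder assumptions:

```python
import cv2
import numpy as np

frame = cv2.imread("road.jpg")               # a forward-facing camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise
edges = cv2.Canny(blurred, 50, 150)          # find strong edges

# Keep only the trapezoidal region of road in front of the car.
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                 (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edge pixels.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)  # draw lane candidates
```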

Today’s cameras are weak at estimating the distance, height, and velocity of other vehicles and objects. That’s where sensor fusion comes in: by combining camera data with measurements from other sensors, such as radar and lidar, an autonomous vehicle can detect obstacles and identify the positions and speeds of approaching objects.
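
The workhorse of many sensor-fusion pipelines is the Kalman filter, which blends noisy measurements into a smooth estimate of an object’s position and speed. Below is a bare-bones one-dimensional sketch; the motion model, noise levels, and radar readings are made-up illustration values:

```python
import numpy as np

dt = 0.1                           # seconds between measurements
F = np.array([[1, dt], [0, 1]])    # constant-velocity motion model
H = np.array([[1, 0]])             # radar measures position only
Q = np.eye(2) * 0.01               # process noise
R = np.array([[0.5]])              # radar measurement noise

x = np.array([[0.0], [0.0]])       # state: [position, velocity]
P = np.eye(2)                      # state uncertainty

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:  # noisy ranges from the radar
    # Predict where the object should be now.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct the prediction with the new measurement.
    y = np.array([[z]]) - H @ x       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())  # fused estimate: [position, velocity]
```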


Localization

Before an autonomous vehicle can know where to go, it needs to figure out its current location using localization and mapping techniques. Traditional GPS and true-range multilateration have an average error of up to ten meters, which is far too inaccurate for driving. Instead, autonomous vehicles use specialized localization techniques and algorithms to estimate their current location with an error of less than ten centimeters. Several types of sensor can support localization, including cameras, radar, and lidar.
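
A classic textbook approach to localization is the Bayes (histogram) filter: the vehicle keeps a probability for every cell of a known map and sharpens that belief with each sensor reading and motion step. The one-dimensional world and probabilities below are purely illustrative:

```python
def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def sense(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    # Weight each cell by how well the measurement matches the map there.
    return normalize([b * (p_hit if world[i] == measurement else p_miss)
                      for i, b in enumerate(belief)])

def move(belief, step):
    # Shift the belief to match the motion (circular world for simplicity).
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ["lane", "sign", "lane", "lane", "sign"]  # landmarks along the road
belief = [1.0 / len(world)] * len(world)          # start fully uncertain

for measurement in ["sign", "lane", "lane"]:      # drive and observe
    belief = sense(belief, world, measurement)
    belief = move(belief, 1)

print(belief)  # probability of being in each map cell
```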

Radar (radio detection and ranging) is a detection system that emits radio waves to detect objects and measure their range. The most advanced radar technology today can resolve details of a few centimeters from more than 100 meters away. Cars have been equipped with radar for years to help drivers monitor blind spots and avoid collisions. Radar is more accurate for moving objects than for static ones, and can directly estimate the position and speed of moving objects. Its main disadvantage is low resolution, which makes it a good candidate to pair with cameras for computer vision, since the two have opposite strengths and weaknesses.
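
Radar’s direct speed estimate comes from the Doppler effect: an object moving toward the sensor shifts the frequency of the returned wave in proportion to its speed. A toy calculation, assuming a 77 GHz automotive radar and a made-up frequency shift:

```python
C = 3e8            # speed of light, m/s
f_carrier = 77e9   # radar carrier frequency, Hz
f_doppler = 5.13e3 # measured frequency shift, Hz (made-up value)

# For a reflector moving toward the radar, shift = 2 * v * f_carrier / c.
v = f_doppler * C / (2 * f_carrier)
print(f"closing speed ≈ {v:.1f} m/s")  # ≈ 10.0 m/s (36 km/h)
```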

Lidar (light detection and ranging) uses pulsed laser light, typically in the near-infrared, to determine the distance to an object. The lidar instrument emits up to 150,000 rapid laser pulses per second. The pulses bounce back from objects, and the lidar sensor measures how long each one takes to return. Using this round-trip time, the lidar instrument can calculate the distance between itself and the object, as well as the object’s size.
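
The range calculation itself is simple time-of-flight arithmetic: each pulse travels to the object and back at the speed of light, so the distance is half the round trip. A one-line sketch with an illustrative echo time:

```python
C = 3e8               # speed of light, m/s
round_trip = 6.67e-7  # measured echo time, seconds (made-up value)

distance = C * round_trip / 2  # one-way distance to the object
print(f"range ≈ {distance:.1f} m")  # ≈ 100.0 m
```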


Path Planning and Control Systems

At the planning stage, the autonomous vehicle already has an understanding of its position and environment. Now, using the data collected in the previous stages, it predicts where agents such as pedestrians and other vehicles will move. Then it calculates trajectories to get from point A to point B along the safest and most efficient route.
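
Grid-based A* search is the textbook form of this kind of trajectory calculation (real planners search in continuous space and account for vehicle dynamics). Here is a compact sketch over an illustrative obstacle grid:

```python
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]            # (priority, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:  # 1 = obstacle
                continue
            new_cost = cost[cur] + 1
            if new_cost < cost.get(nxt, float("inf")):
                cost[nxt] = new_cost
                # Manhattan-distance heuristic keeps the search goal-directed.
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (new_cost + h, nxt))
                came_from[nxt] = cur

    path, node = [], goal              # walk back from goal to start
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # route around the obstacles
```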

In the final command stage, the autonomous vehicle follows the calculated trajectory by sending signals to the hardware that controls the steering wheel, acceleration, and brakes.
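
A classic way to turn a planned trajectory into those control signals is a PID controller, which continuously corrects the vehicle’s deviation from the path. A minimal sketch with illustrative gains and error values:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # error = desired value minus measured value (e.g., lane offset).
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.8, ki=0.05, kd=0.3)
cross_track_error = 0.5  # meters off the planned path (made-up value)
steering = controller.step(cross_track_error, dt=0.05)
print(f"steering command: {steering:.3f}")
```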

In urban areas, autonomous vehicles need to process far more information at once. Sometimes a cyclist’s hand signal or a pedestrian’s posture is critical information for predicting their movements. This TED talk explains path planning and control systems in more depth.


Benefits of Autonomous Vehicles

Autonomous vehicles are, in essence, software on wheels. In addition to getting you places quickly, their optimization algorithms can make fuel consumption as efficient as possible. By 2050, autonomous vehicles could reduce fuel consumption by as much as 44% for passenger vehicles and 18% for trucks.

In addition, autonomous vehicles can reduce the number of accidents caused by careless, distracted, or drunk drivers. Since car accidents cause 25% of road congestion, autonomous vehicles should also reduce traffic. We could even see improved public health: medical studies have shown that traffic jams cause elevated blood pressure, anxiety, depression, decreased cardiovascular fitness, and low-quality sleep.


The Future of Autonomous Vehicles

We might not be too far from having autonomous vehicles on the roads. But before that can happen, we as a society still need to answer some important remaining questions. AI ethics and accountability have been widely debated in the news lately. If an autonomous vehicle crashes, who is responsible: the owner, the driver, the insurance company, or the manufacturer?

Manufacturers will also need to address customer concerns about cost, privacy, and especially safety. Autonomous vehicles may never be 100% safe, but their value should be weighed against the status quo: recent studies suggest that autonomous vehicles would be safer than human drivers. That makes sense statistically, but going forward, the public needs a better understanding of the pros and cons, and of what to do if something goes wrong with an autonomous vehicle on the road.

The Author
Rei Morikawa

Rei writes content for Lionbridge’s website, blog articles, and social media. Born and raised in Tokyo, Rei also studied abroad in the US, and is a huge people person with a passion for long-distance running, traveling, and discovering new music on Spotify.
