Picture Perfect

The self-driving revolution is under way, as automakers and technology companies race to deliver the first generation of fully autonomous vehicles (AVs) in the next few years. Proponents of AVs say that removing human beings from the driver’s seat worldwide may eventually save millions of lives every year, slash greenhouse gas emissions, and allow workers to be more productive.

The numbers back them up, too. Multiple studies show that human error is the critical factor in most car accidents. Meanwhile, researchers studying road-transport automation estimate that AVs could reduce fuel consumption in transportation by anywhere from 4% to 90% in the United States alone. And once cars that do most, or perhaps all, of the driving go mainstream, the time saved by commuters worldwide could add up to 1 billion hours daily, the management consulting firm McKinsey & Company predicts.

Those are big promises. But every major benefit of driverless technology depends on ubiquity: the more self-driving cars on the road, the greater the payoff.

According to “The Future of Autonomous Cars,” a strategy report from Berg Insight published in November 2016, that pervasiveness probably won’t happen for a while. The report forecasts that sales of vehicles with conditional and high automation will reach 24 million by 2030, and that a total of 71 million autonomous cars will be in active use on the roads by then. Those numbers are nowhere near the more than 1 billion driver-operated cars on the road today, or the billion more predicted to be sold over the next two decades, according to Navigant Research.

Although the advantages of driverless cars have yet to be fully realized, the human element of driving is steadily disappearing from the equation. Systems that enable hands-free parallel parking and automatic emergency braking already help prevent collisions. They process environmental data gathered by sensor arrays to recognize potential risks and instruct the car to react faster than our gray matter can. But arriving at the completely human-free driving experience we’ve been promised will require the further advancement of an accurate, compact, and affordable technology called lidar, which stands for light detection and ranging.

Velodyne LiDAR, a global leader in the technology, plays a vital part in that evolution. In August 2016, the Morgan Hill, California–based firm completed a $150 million investment deal with Ford Motor Company and Baidu, China’s leading search engine company. This will allow Velodyne to rapidly expand the design and production of its automotive lidar sensors. David Hall, Velodyne’s founder and CEO, has said that the investment will accelerate the cost reduction and scaling of the company’s sensors, making them widely accessible and enabling mass deployment of fully autonomous vehicles.

Hall and his brother Bruce founded Velodyne in 1983 as a manufacturer of high-end acoustic products, specializing in subwoofers; in 2017, the company was named to Fast Company’s list of the world’s most innovative transportation companies. A serial inventor, Hall devised and patented a unique accelerometer-based system that allowed Velodyne’s ULD-18 subwoofer line to play music louder, lower, and clearer than competing products. The ULD-18 produced distortion levels 20 to 30 times lower than anything else available, according to the company. That product created a market, says Mike Jellen, Velodyne’s president. “It became one of the hottest audio products of the ’80s.”

Velodyne still offers audio products such as speakers and headphones, but in the late 1990s, Hall developed a burgeoning interest in robotics that has shaped the company’s primary offering. He quickly became an expert in the field and entered the competitive TV show Robot Wars in 2000. A battle bot created by his team, named Drillzilla, placed second in the world championship on the first season of Robot Wars: Extreme Warriors, in 2001. In 2004, Hall’s team entered a first-of-its-kind race sponsored by the Defense Advanced Research Projects Agency: the Darpa Grand Challenge, in which the goal was to create an autonomous vehicle that could navigate a 142-mile course through the Mojave Desert. Hall applied the mechanical engineering, machining, custom sensor design, and software programming skills he had learned over the years.

Hall’s vehicle didn’t finish the course, but neither did anyone else’s in that first year of competition. Hall did, however, discover the shortcomings of relying on a camera-only setup. It was difficult to see objects at a distance and across a wide field of view, explains Jellen, and dealing with changing light was also challenging. Hall wanted a 360-degree sensor that worked all the time and in all conditions.

Consequently, Hall reevaluated his design and conceived a lidar system made specifically to guide the autonomous vehicles used in the Grand Challenge. “In terms of spatial resolution, lidar is at least an order of magnitude better than radar,” says Jim McBride, senior technical leader of autonomous vehicles for Ford and leader of the company’s Darpa team. “Plus, there’s an enormous difference in the field of view.” This means lidar can produce more precise maps and paint a more detailed picture of the surrounding environment.

About 13 years after Hall’s lidar breakthrough, Velodyne has become a leading developer of laser-based sensors for the auto industry, creating the visual maps used by many AVs currently in development. But simply knowing the layout and position of the road isn’t nearly enough to make a completely driverless automobile possible. The car also needs to detect obstacles such as pedestrians, cyclists, and other vehicles.

Positioned on top of an AV, Velodyne’s rotating array sprays laser light in every direction, capturing the reflected signals and sending them to an onboard computer that uses the data to create an instantaneous three-dimensional map of the environment. Major players in the AV space believe that this laser-light-based technology is essential to autonomy because it is significantly more accurate than cameras or radar alone. “Before Velodyne, the best unit one could buy would put a single scan out,” McBride says. The Velodyne unit produces 64 parallel scans. “Velodyne’s 64-beam laser takes a 360-degree picture and captures up to 1.3 million readings per second, making it the most versatile sensor on the car.”
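To make that process concrete, here is a minimal sketch, in Python, of how a spinning, multi-beam lidar’s raw measurements become 3-D points: each pulse’s round-trip time yields a distance, and the beam’s known azimuth and elevation angles place that distance in space. The beam count, angles, and timings below are illustrative assumptions, not Velodyne’s actual specifications or processing pipeline.

```python
import numpy as np

# Speed of light in meters per second, used for time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458.0

def tof_to_range(round_trip_seconds):
    """Convert a pulse's round-trip time to one-way distance: d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def to_point_cloud(ranges_m, azimuths_rad, elevations_rad):
    """Turn per-beam ranges plus azimuth/elevation angles into (x, y, z) points."""
    horizontal = ranges_m * np.cos(elevations_rad)  # projection onto the ground plane
    x = horizontal * np.cos(azimuths_rad)
    y = horizontal * np.sin(azimuths_rad)
    z = ranges_m * np.sin(elevations_rad)
    return np.stack([x, y, z], axis=-1)

# Illustrative firing: 64 beams fanned over an assumed vertical field of
# view, sampled at a single azimuth step of the spinning head.
elevations = np.radians(np.linspace(-24.8, 2.0, 64))  # assumed beam angles
azimuths = np.full(64, np.radians(45.0))              # current rotation angle
round_trips = np.full(64, 2.0e-7)                     # 200 ns round trip, ~30 m targets

points = to_point_cloud(tof_to_range(round_trips), azimuths, elevations)
print(points.shape)  # (64, 3): one (x, y, z) point per beam
```

Repeating this calculation at every azimuth step of the rotating head, hundreds of thousands of times per second, is what builds the instantaneous 3-D map described above.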

Why is it so much more accurate? Jellen says it is because lidar generates its own light source, which a standard camera can’t do without a flash. “If a camera doesn’t have a light source, it’s relying on the sun to provide those photons,” he explains. The resolution and accuracy of the images a camera captures therefore vary with the quality of that ambient light and with the focal plane of the lens. And because a camera records only the light striking a flat sensor, it ultimately produces 2-D images, leaving depth to be inferred.

While radar can accurately track movement and speed in many conditions, its resolution is far poorer than lidar’s. Considered “the master of 3-D mapping,” lidar not only measures the exact distance to other cars and obstacles but can also determine their sizes and shapes more accurately than radar alone. The result is a full 3-D image.
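As a rough illustration of how size and shape fall out of lidar data, the sketch below takes a cluster of points already attributed to a single object and measures its extent. The cluster is synthetic, and the axis-aligned bounding box is a deliberate simplification; production perception stacks use far more sophisticated segmentation and oriented boxes.

```python
import numpy as np

def estimate_obstacle_extent(points):
    """Estimate an object's rough size from the lidar returns that hit it.

    `points` is an (N, 3) array of x, y, z returns already segmented into
    one object. The axis-aligned bounding box yields length, width, and
    height along the sensor's axes.
    """
    return points.max(axis=0) - points.min(axis=0)

# Synthetic cluster: 200 returns scattered across a car-sized volume.
rng = np.random.default_rng(seed=0)
cluster = rng.uniform(low=[10.0, -1.0, 0.0], high=[14.5, 0.8, 1.5], size=(200, 3))

length, width, height = estimate_obstacle_extent(cluster)
print(f"~{length:.1f} m long, {width:.1f} m wide, {height:.1f} m tall")
```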

Velodyne’s top-of-the-line sensor has been used on test cars from Google, Ford, and Volvo. The rotating device captures more than a million data points from its surroundings every second. Earlier high-end models could cost up to $80,000, but Velodyne’s latest advanced sensors, among them the VLP-16 Puck, which is about the size of a hockey puck, cost around $8,000. The company also recently announced Velarray, a new compact sensor that uses a fixed array of lasers and has a range of 200 meters.

“The convergence of and invention of new technologies is creating an opportunity to revolutionize the way we think about travel,” says Mike Dennison, president of the Consumer Technologies Group at Flex, a Velodyne partner. “The pioneers in lidar technology are developing real solutions that were only dreamed of just a few years ago and are revolutionizing autonomous cars, drones, airplanes, and more.”

Lidar promises to transform more than the auto industry; it also has applications in fields such as archaeology, agriculture, robotics, and urban planning. Archaeologists at Colorado State University used the technology in 2015 to map a 13-square-kilometer settlement site in central Mexico in just 45 minutes. Compare that with the two previous field seasons, during which they were able to survey just 2 square kilometers. NASA used lidar to develop a mechanism for precisely docking spacecraft with the International Space Station. City planners and building developers have used the technology to create detailed maps. “You can walk around or through a building today and generate full high-precision drawings,” says Jellen.

There are other potential lidar applications: in drones that deliver consumer goods, in farm fields to measure crops and assess each year’s production, and at the dockside for automatic barge loading and unloading. Compared with standard cameras, “you can survey much larger fields of view more accurately with lidar across various lighting conditions,” Jellen says.

Hall isn’t sitting back and relaxing, though. His most recent invention is a computer-aided gyroscopic suspension system for his boat, Martini 1.5. According to Jellen, the self-stabilization technology can keep the vessel perfectly level even in swells of up to 5 feet, preventing a martini glass set on deck from tipping over. It’s clear that more opportunities are on the horizon.