
Why self-driving car companies are spilling their secrets

Posted Wednesday, Aug 21, 2019 by Jeff Safire

Self-driving technology is hard — so hard that even the industry front-runner is showing its cards to try to get more brainpower on the problem.

by Joann Muller for Axios

Illustration: Lazaro Gamio/Axios

Driving the news: Waymo announced Wednesday it’s sharing what is believed to be one of the largest troves of self-driving vehicle data ever released in the hope of accelerating the development of automated vehicle technology.

“The more smart brains you can get working on the problem, whether inside or outside the company, the better,” says Waymo principal scientist Drago Anguelov.

Why it matters: Data is a critical ingredient for machine learning, which is why until recently, companies developing automated driving systems viewed their testing data as a closely guarded asset.

But there’s now a growing consensus that sharing that information publicly could help get self-driving cars on the road faster.

What’s happening: The idea is to eliminate what has been a major roadblock for academia — a lack of relevant research data.

Aptiv, Argo and Lyft have released maps and images collected via cameras and lidar sensors.
Now, even Waymo — the market leader, with more than 10 million autonomous test miles — is opening up its digital vault.

Context: On any given day, an AV can collect more than 4 terabytes of raw sensor data, but not all of that is useful, Navigant Research analyst Sam Abuelsamid writes in Forbes.

During testing, a safety driver typically oversees the vehicle’s operation while an engineer with a laptop in the passenger seat notes interesting encounters or challenging scenarios.

At the end of the day, all the sensor data from the vehicle is downloaded. The “good stuff,” as Abuelsamid calls it — encounters with pedestrians, cyclists, animals, traffic signals and more — is analyzed and labeled.

It’s a labor-intensive process, as the New York Times described in a story this week.

Humans — lots and lots of humans, NYT notes — must label and annotate all the data by hand so the AI system can understand what it’s “seeing” before it can begin learning.

People pore over images of street scenes, drawing digital boxes around the things that are important to know and labeling them: this is a pedestrian, a stroller, a double yellow line.
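To make that labeling step concrete, here is a minimal sketch of what one such hand-drawn annotation might look like as a data record. The field names, label strings, and pixel-box convention are illustrative assumptions for this article, not the actual schema used by Waymo or any other company.

    # Illustrative only: a hypothetical record for one hand-labeled object
    # in a single camera frame. Real AV datasets define their own schemas.
    from dataclasses import dataclass

    @dataclass
    class BoxAnnotation:
        frame_id: str   # which camera image the box belongs to
        label: str      # e.g. "pedestrian", "stroller", "double_yellow_line"
        x_min: int      # left edge of the box, in pixels
        y_min: int      # top edge of the box, in pixels
        x_max: int      # right edge of the box, in pixels
        y_max: int      # bottom edge of the box, in pixels
        annotator: str  # the human who drew and labeled the box

    # One labeled object from a street scene (made-up values):
    example = BoxAnnotation(
        frame_id="camera_front_000123",
        label="pedestrian",
        x_min=412, y_min=180, x_max=468, y_max=355,
        annotator="labeler_042",
    )

Thousands of records like this, drawn and checked by hand for every frame of interest, are what the machine-learning systems ultimately train on, which is why the work is so labor-intensive.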

Read full article…

This article appeared first at Axios.com on Aug 21, 2019.

