Archer

 

Look, no hands! It’s a self-driving car!

Not really — it’s just my regular old human-driving car.

But you can actually ride in one. And soon, you could be buying a self-driving car of your own.

They’re being tested thoroughly for safety, right? 

Right?

Watch our video report here:

 

Archer News Network takes you along for a ride through research on testing the “eyes” of self-driving cars.

 

Humans go to the DMV for driver testing, and you may even get an eye check. What about self-driving cars?

Their eyes are sensors with a computer brain. But they may not see everything the right way.

“Absolutely, absolutely,” said Suman Jana, a Columbia University researcher who looked into the safety and security of self-driving cars with fellow researchers Kexin Pei, Yinzhi Cao and Junfeng Yang. 

They presented their findings this week at the SOSP 2017 conference in Shanghai, China, where they won a Best Paper award.

Computer vision has some blind spots, Jana said.

For example, a new app called Nude is supposed to filter out nude images you take on your phone and store them in a secret file.

But a Gizmodo article says the app’s vision filtered out very non-sexy images of a sleeping dog and a breakfast pastry.

“That’s a perfect example of the things I’m talking about in terms of not working,” said Jana.

Computer vision for cars does a much better job, he said, but there are still a few gaps and unusual cases.

“It works most of the time. What are the cases where it doesn’t work?” the researchers asked, according to Jana.

They found some, like this case involving two images of a road.

 

Road images used for testing self-driving cars. Image credit: DeepXplore: Automated Whitebox Testing of Deep Learning Systems

 

The researchers said a simulated self-driving car viewed the first image of a curving road and correctly decided to turn left.

But when it saw the second image — a slightly darker version of the same image — it decided to turn right and crash into the guard rail.

Luckily, it was just a simulated car.
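
A minimal sketch of that kind of check in Python (not the researchers’ actual code): darken an image slightly and flag it if the model’s steering prediction swings by more than a small tolerance. The predict_steering callable, the brightness factor and the tolerance are all assumptions for illustration.

```python
import numpy as np

def darkened(image, factor=0.85):
    """Return a slightly darker copy of an image with pixel values in [0, 1]."""
    return np.clip(image * factor, 0.0, 1.0)

def brightness_flips_decision(predict_steering, image, tolerance_deg=5.0):
    """Flag an image if a small brightness change swings the steering prediction.

    predict_steering: hypothetical callable mapping an image array to a
    steering angle in degrees (negative for left, positive for right).
    """
    original = predict_steering(image)
    perturbed = predict_steering(darkened(image))
    return abs(original - perturbed) > tolerance_deg

# Toy usage with a stand-in "model" that just reacts to average brightness;
# a real driving network would go here instead.
if __name__ == "__main__":
    toy_model = lambda img: (img.mean() - 0.5) * 90.0
    road = np.random.default_rng(0).random((66, 200, 3))  # stand-in road image
    print(brightness_flips_decision(toy_model, road))
```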

“We found that even the state-of-the-art things are making a lot of mistakes,” he said.

In a high-profile case in Florida last year, Tesla reported one of its “Autopilot” cars could not tell the difference between the side of a white semi-trailer and the bright sky, and did not hit the brakes.

 

White semi-trailer hit by Tesla car in May 2016. Image credit: NTSB crash report

 

Investigators said the human driver was not paying enough attention and did not hit the brakes either.

The car crashed and the driver died. 

These kinds of vision errors are rare, but they do happen, Jana said.

“The system can make really deadly mistakes,” he said.

 

Tesla car after crash with semi-trailer in May 2016. Image credit: NTSB crash report

 

Car developers are testing self-driving vehicles, but Jana believes the current testing is not quite enough.

There are simply too many possible combinations of events that could lead to crashes, he said.

“As they’re testing more and more, they’re realizing that the combination is so large. No matter how much resources they put in, even for Google, it’s very hard to cover all the possibilities,” Jana said.

The researchers’ testing showed a possible solution, he said: a way to test for the events you may not anticipate, like a trailer blending into the bright sky, or an attacker online changing just a few lines of data.

The researchers use something called “neuron coverage” to look for those odd or dangerous behaviors in the cars, and said they found thousands of them during their analysis.
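
In the DeepXplore paper, neuron coverage roughly means the fraction of a network’s neurons that fire above some threshold for at least one test input, so higher coverage means more of the network’s behavior has been exercised. Here is a minimal Python sketch of that bookkeeping; the layer activation arrays and the 0.25 threshold are assumptions for illustration, not the paper’s exact implementation.

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.25):
    """Fraction of neurons that fire above `threshold` for at least one test input.

    layer_activations: list of 2-D arrays, one per layer, each shaped
    (num_test_inputs, num_neurons). Extracting these from a real driving
    network is left to the reader (hypothetical here).
    """
    covered, total = 0, 0
    for acts in layer_activations:
        # Scale each layer's activations to [0, 1] so one threshold applies everywhere.
        lo, hi = acts.min(), acts.max()
        scaled = (acts - lo) / (hi - lo + 1e-8)
        fired = (scaled > threshold).any(axis=0)  # did each neuron fire on any input?
        covered += int(fired.sum())
        total += fired.size
    return covered / max(total, 1)

# Toy usage: random activations for a 3-layer network over 10 test inputs.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake_acts = [rng.random((10, n)) for n in (32, 64, 10)]
    print(f"neuron coverage: {neuron_coverage(fake_acts):.2%}")
```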

This kind of testing could open the cars’ “eyes” to dangers they can’t yet “see,” according to Jana.

“To be inclusive of all the situations, irrespective of how rare they are,” he explained. “You have tested them, then you can be confident that your system will be making the correct decision, even if it comes up one in a million times. Or even if there is an attacker who causes a weird combination of events to happen.” 

 

Testing results showing self-driving car vision errors. Image credit: DeepXplore: Automated Whitebox Testing of Deep Learning Systems

 

Jana hopes developers will start using this kind of testing to look for all possible problems now, before they put automated cars out into the real world.

“If we pay attention to security right from now, we might have a chance of getting things a bit better, like not repeating our past mistakes,” he said.

Until then, will he ride in a self-driving car? After all, advocates say they’ll be much safer than humans at the wheel.

“Um, probably not,” Jana responded. “This is very exciting technology, but right now my answer would be no.”

“Do not trust machine learning yet,” he added. “It’s like trust, but verify.”