I wrote last week that I thought building the consumer interfaces and passenger experiences was the hard part of building autonomous cars. This week, an article in The Information showed that even Google is having a tough time getting the car tech just right.
According to the article, the Waymo cars are subject to frequent sudden stops, presumably as their systems try to determine whether or not it's safe to proceed. The author writes:
The hesitation at the intersection is one of many flaws evident in Waymo’s technology, say five people with direct knowledge of the issues in Phoenix. More than a dozen local residents who frequently encounter one of the hundreds of Waymo test vehicles circulating in the area complained about sudden moves or stops. The company’s safety drivers—individuals who sit in the driver’s seat—regularly have to take control of the wheel to avoid a collision or potentially unsafe situation, the people said.
There are some simple reasons you see a lot of autonomous car testing in California, Arizona and Nevada. Those places are generally sunny, mostly flat and have cities with wide, grid-pattern streets. If you're going to teach a computer to drive on public roads, you don't want to start by seeing whether the sensors can handle looking down the crest of a hill, or dealing with roads and sensors covered in water, mud or snow.
I’m not here to pile on Waymo. Building autonomous cars is a really hard problem. I remain optimistic about the future of this technology. I’m just generally less willing to believe the hype that fleets of these vehicles, without safety drivers, are just around the corner.
It's going to take time to train these systems properly, and patience for those of us still driving manually to adapt to the AI's reactions and expectations. As the article quotes:
“It’s still a student driver, but it’s the best student driver out there,” said another person familiar with the Waymo program.