According to TechCrunch, Waymo is shipping a software update after its robotaxis got stuck and caused congestion during a major power outage in San Francisco this past Saturday. The company explained in a blog post that its vehicles are programmed to treat disabled traffic lights as four-way stops. During the blackout, however, a “concentrated spike” in requests for confirmation sent to its human fleet response team overwhelmed that system and led to the gridlock, even though the fleet successfully navigated more than 7,000 dark signals that day. Waymo says the update will add specific “power outage context” to its software so it can act “more decisively,” and that it will improve its emergency response protocols. This follows previous software updates, and even a recall, related to the vehicles’ behavior around stopped school buses.
The Overcautious Crutch
Here’s the thing: this incident perfectly illustrates the “training wheels” problem facing autonomous vehicles. Waymo built that confirmation request system “out of an abundance of caution” during early deployment. That’s smart. But at scale, that same safety net has become the problem. It created a single point of failure: remote human operators who couldn’t absorb a city-wide event.
So the fix is to give the software more context and tell it to be more confident. Logically, that’s the right move. But it also feels like walking a tightrope: you’re dialing down human oversight and dialing up the AI’s independent decision-making in exactly the chaotic, edge-case scenarios where oversight was supposed to help. That’s a risky patch to push after a single major event. What if the next context the software misreads leads to a different, more dangerous kind of failure?
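To make that trade-off concrete, here’s a minimal sketch of the kind of decision logic being described. It’s purely illustrative: the function names, the `power_outage_context` flag, and the confidence threshold are my assumptions, not anything Waymo has published.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    OBEY_SIGNAL = auto()
    TREAT_AS_FOUR_WAY_STOP = auto()
    REQUEST_HUMAN_CONFIRMATION = auto()

@dataclass
class SignalObservation:
    signal_is_dark: bool        # perception sees no lit signal head
    confidence: float           # 0.0-1.0 perception confidence
    power_outage_context: bool  # hypothetical fleet-wide outage flag

# Hypothetical tuning value; the real threshold is unknown.
CONFIDENCE_THRESHOLD = 0.9

def decide(obs: SignalObservation) -> Action:
    """Sketch of the described behavior: dark signals become four-way
    stops, with escalation to a remote human only in ambiguous cases."""
    if not obs.signal_is_dark:
        return Action.OBEY_SIGNAL

    # Pre-update behavior (as described): ambiguity routes to the
    # fleet response team "out of an abundance of caution".
    if obs.confidence < CONFIDENCE_THRESHOLD and not obs.power_outage_context:
        return Action.REQUEST_HUMAN_CONFIRMATION

    # Post-update behavior: a known city-wide outage is extra context
    # that lets the vehicle commit to the four-way-stop rule on its own.
    return Action.TREAT_AS_FOUR_WAY_STOP
```

The risk lives in that one flag: if `power_outage_context` is set incorrectly, or inferred from the wrong cues, decisions move away from humans in exactly the situations that are hardest to classify.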
A Pattern of Unforeseen Issues
Let’s not forget: this isn’t a one-off. Waymo has already had to issue multiple updates for the school bus issue, which prompted a federal investigation. Now it’s power outages. What’s next? A major fog bank? A hailstorm? A parade? The real world is endlessly creative with its chaos, and every “fix” seems to reveal another blind spot.
The company rightly points out that its cars handled thousands of dark intersections correctly. That’s impressive! But in dense urban environments, it only takes a few confused vehicles to snarl traffic for everyone. The public perception risk is huge. One viral video of a dozen robotaxis frozen at an intersection does more damage to trust than a thousand uneventful trips.
The Scale Problem Is Real
Waymo’s statement that it’s refining systems to “match our current scale” is probably the most honest and telling line in this whole episode. It admits that what works for a few hundred cars doesn’t work for a few thousand. And what works for a few thousand definitely won’t work for tens of thousands.
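Some rough queueing arithmetic shows why. Every number below is invented for illustration (Waymo hasn’t published staffing levels or handling times), but the shape of the math holds for any fixed-capacity confirmation service:

```python
# Invented figures, for illustration only: a fixed pool of remote
# operators versus a city-wide spike in confirmation requests.

operators = 50          # fleet response staff on shift (assumed)
handle_time_s = 30      # seconds to resolve one request (assumed)

# Service capacity of the whole team, in requests per minute.
capacity_per_min = operators * 60 / handle_time_s   # -> 100/min

normal_requests_per_min = 20    # a quiet day: well under capacity
outage_requests_per_min = 500   # thousands of cars hit dark signals at once

backlog_growth = outage_requests_per_min - capacity_per_min
print(f"capacity: {capacity_per_min:.0f} requests/min")
print(f"outage backlog grows by {backlog_growth:.0f} requests/min")
# capacity: 100 requests/min
# outage backlog grows by 400 requests/min
```

Once arrivals outpace service, wait times don’t just get longer, they grow without bound until the spike ends, and every waiting robotaxi is physically stopped in traffic. That’s how “navigated 7,000 dark signals” and “caused gridlock” can both be true on the same day.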
This is the brutal math of scaling autonomy. Every new city, every new weather condition, every weird local traffic ritual is another variable. The software update path feels a bit like whack-a-mole. You solve for school buses, then power outages. It makes you wonder how many other “confirmation check” scenarios are lurking in the code, waiting for the right unusual event to trigger another traffic jam—or worse.
