Monday, December 18, 2023

Elon Musk's Big Lie About Tesla Is Finally Exposed. By Ed Niedermeyer


www.rollingstone.com
Dec. 17, 2023
More than 2 million of the cars are being recalled — because Tesla’s “self-driving” systems have always been anything but
Elon Musk speaks at a Tesla event on September 29, 2015 in Fremont, California. Justin Sullivan/Getty Images

Back in 2016, Elon Musk claimed that Tesla cars could “drive autonomously with greater safety than a person. Right now.” It was a lie, one that sent Tesla’s stock price soaring — and made Musk among the wealthiest people on the planet. That lie is now falling apart in the face of a new recall of 2 million Teslas. It’s also revealing to the broader public what close observers of Tesla have always known (and the company itself admits in the fine print of its legal agreements): Tesla’s so-called “self driving” technology works fine — as long as there’s a human behind the wheel, alert at all times. 

Out of all the scandals over the last decade or so of venture capital-fueled excess, Tesla’s dangerous and hype-happy approach to driving automation technology has been one of the most important but also one of the most hidden in plain sight. Just like the Mechanical Turk of 1770, everyone has been so focused on the technology itself that they’ve missed the human factors that power the entire spectacle. Just as worryingly, regulators have missed that forcing humans to babysit incomplete systems introduces entirely new risks to public roads.

If you read the official notice for Tesla’s recall of more than two million vehicles equipped with Autopilot, the thing that jumps out is that it’s not really about a defect in the Autopilot technology itself. At least not in the sense that the system’s cameras are breaking, or its software is seeing red lights as green lights, or its AI is making disturbing choices in “trolley problem” exercises or anything like that. The problem, strangely enough, has everything to do with humans.

Humans, the regulatory technobabble reveals, do the strangest things sometimes. It turns out that when a human uses a "driving assistance" system that steers, brakes and accelerates for them, sometimes they stop paying attention to the road. This wouldn't be a problem if Teslas could actually drive themselves safely, and if the company took legal liability for the decisions its software makes when it navigates 5,000-pound vehicles on public roads. But because neither of those things is true, users must be poised to rescue Autopilot from itself at any moment, or face having it drive them into an object at high speed (perhaps a semi truck turning across their lane), as has happened on several occasions.

In short, when the human stops paying attention it's as big a problem as if a camera or radar sensor became disconnected from the computer running the code. Which makes perfect sense when you read even deeper into Tesla's fine print, and find that the owner bears all legal responsibility for everything the system does, ever. By telling its customers its cars are almost self-driving and designing them without guardrails, Tesla induces inattention only to blame the victim. (The company didn't respond to a request to comment for this article.)

To be clear, if humans were a manufactured part of the Autopilot system, its designers would have taken into account a well-known defect of ours: when we get bored, we stop paying attention. A 1983 paper on the "ironies of automation" identified a problem going all the way back to behavioral research from the early 20th century: if automation takes over too much of a task, the human becomes inattentive and may miss the critical part of the task they are needed for, especially if it's time-sensitive, like taking over to prevent a crash. It's not a matter of being a bad driver or a bad person; no human can monitor a boring task forever without eventually becoming inattentive, leaving them unable to make a complex rescue maneuver on a second's notice.

Of course, all this has been well understood in the specific context of Autopilot for years as well. After the first couple of publicly-reported Autopilot deaths — way back in 2016 when Musk was saying they were already autonomously driving safer than humans — the National Transportation Safety Board began investigating accidents involving Autopilot. In three fatal crashes, two of them in nearly identical circumstances, drivers died because they weren’t paying attention when their Tesla drove them into an unexpected obstacle at high speed. In the two nearly identical Florida crashes, the system was active on a road it wasn’t designed for.

What the NTSB found in those three crashes was not a singular defect in Autopilot's self-driving system per se, because from a legal perspective Autopilot was not technically driving. By calling Autopilot a so-called "Level 2" driver assistance system (using the Society of Automotive Engineers' arcane levels-of-automation taxonomy), Tesla created a technology that automates the major controls of the car but leaves the human driver legally in charge. A key missing piece was driver monitoring: systems to keep the human who bears legal and ultimate safety responsibility engaged. Combine that with the ability to activate the system anywhere, even on roads that Tesla says it isn't designed for, and you get the bizarre new horror of humans looking away as the automation they overtrust drives them into easily avoidable (if unexpected) objects.
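To make concrete what those missing guardrails would even look like, here is a minimal Python sketch of an engagement gate for a Level 2 system, one that checks both the road and the driver before the automation is allowed to keep operating. Every name and threshold in it is hypothetical, an illustration of the general idea rather than anything from Tesla's or any other automaker's software.

```python
# A minimal sketch of the two guardrails the NTSB flagged as missing: a check
# that the car is inside the system's designed operating domain, and a check
# that the legally responsible human is actually supervising. All names and
# thresholds are hypothetical illustrations, not Tesla's software.

from dataclasses import dataclass


@dataclass
class VehicleState:
    on_approved_road: bool        # is the car on a road the system was designed for?
    eyes_off_road_seconds: float  # time since a driver-facing camera last saw eyes on the road
    hands_on_wheel: bool          # steering-wheel torque or capacitive sensor reading


MAX_EYES_OFF_SECONDS = 3.0  # hypothetical attention threshold


def may_stay_engaged(state: VehicleState) -> bool:
    """Decide whether a Level 2 system should remain active.

    Because the human stays legally in charge, engagement should depend on
    both the road and continuous evidence that the human is supervising.
    """
    if not state.on_approved_road:
        # Refuse to operate outside the domain the system was designed for.
        return False
    if state.eyes_off_road_seconds > MAX_EYES_OFF_SECONDS and not state.hands_on_wheel:
        # The supervisor has checked out; warn and hand control back.
        return False
    return True
```

The point of the sketch is the structure: a system that depends on a human supervisor has to verify, continuously, that the supervisor is actually there and actually watching.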

Due to a quirk of regulatory design, the NTSB has the gold standard of crash investigation capabilities but no power to do more than make recommendations based on its findings. After investigating three fatal crashes, the board pleaded with the agency with actual regulatory power, the National Highway Traffic Safety Administration, to take action, but no action came. Both NHTSA and Tesla ignored the evidence of three in-depth investigations pointing out this fatal combination of flaws in Autopilot's design.

At least until 2021, according to the new recall notice, when NHTSA opened an investigation into no fewer than 11 Autopilot-involved crashes into emergency responder vehicles. By this time Musk had MCed numerous stock price-spiking hype events around the technology, and had been collecting deposits from customers since late 2016 for a “Full Self-Driving” version of the technology. Despite the reported deaths and clear evidence that the only video of a driverless Tesla was heavily staged, even Musk admits that his hype around self-driving technology has been the central factor in the recent growth of his wealth to titanic proportions.

But of course all of it rests on the backs of humans behind steering wheels, what Madeleine Clare Elish calls "Moral Crumple Zones." Tesla keeps these paying liability sponges behind the wheel largely through the strength of a statistical lie: that Autopilot is safer than human drivers. Tesla has been officially making this claim in its "Quarterly Safety Reports" since 2018 (though Musk has been making it for longer still), despite the fact that its sweeping statistical comparison doesn't take into account any of the best-known factors affecting road safety. When road safety researcher Noah Goodall adjusted the best publicly available data for factors like road type and driver age in a peer-reviewed paper, Tesla's claim of a 43% reduction in crashes turned into an 11% increase in crashes.
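The statistical sleight of hand is easy to demonstrate. The following Python sketch uses made-up numbers, not Goodall's data or Tesla's, to show how a fleet that logs most of its miles on highways can look far safer in a pooled comparison even if it is worse on every individual road type:

```python
# Made-up numbers chosen only to illustrate the statistical problem; these are
# not Goodall's data or Tesla's. Autopilot miles are assumed to skew heavily
# toward highways, which are far safer per mile for every kind of driver.

miles = {  # millions of miles, by driver type and road type
    "autopilot": {"highway": 90, "city": 10},
    "human":     {"highway": 30, "city": 70},
}
crash_rate = {  # crashes per million miles, by driver type and road type
    "autopilot": {"highway": 2.5, "city": 9.0},
    "human":     {"highway": 2.0, "city": 8.0},
}


def pooled_rate(who: str) -> float:
    """Fleet-wide crashes per million miles, ignoring road type entirely."""
    total_miles = sum(miles[who].values())
    total_crashes = sum(miles[who][road] * crash_rate[who][road] for road in miles[who])
    return total_crashes / total_miles


print(f"pooled: autopilot {pooled_rate('autopilot'):.2f} vs human {pooled_rate('human'):.2f}")
# pooled: autopilot 3.15 vs human 6.20 -- the naive comparison flatters Autopilot...

for road in ("highway", "city"):
    print(f"{road}: autopilot {crash_rate['autopilot'][road]} vs human {crash_rate['human'][road]}")
# ...even though, road type by road type, it is worse than human drivers in
# this toy example.
```

Because highways are so much safer per mile for everyone, a raw fleet-wide comparison mostly measures where the miles were driven, not who or what was driving.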

Had Tesla designed an Autopilot-like system with the goal of enhancing safety, it would have combined the strengths of sensor technologies with the incredible cognitive power of a human, creating an augmented "cyborg" system with the human at the center. Instead it built a simulacrum of a self-driving system, a spectacle for consumers and Wall Street alike, that boosted profits and stock prices at the expense of anyone who happened to be looking at their phone when the system made a mistake. Rather than enhancing our safety as drivers, Autopilot forces humans to wait attentively to respond the moment something goes wrong, the kind of "vigilance task" that humans are notoriously bad at.

Now that it's been caught selling a simulacrum of self-driving and overstating its safety benefits, Tesla's answer is the usual: it can fix all this with a software update. But a mere software update can't install infrared eye-tracking cameras or laser-map approved roads the way competitor systems do, and NHTSA has chosen to play along. The only thing Tesla can do by software is constantly bombard drivers with warnings to remind them of the truth it has obscured for so long: you are actually in control here, pay attention, the system will not keep you safe.
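What a software-only remedy looks like in practice is an escalation ladder of nags. Here is a rough, purely hypothetical Python sketch of such a ladder; the thresholds and responses are illustrative assumptions, not the contents of Tesla's recall update:

```python
# A purely hypothetical escalation ladder for driver-attention warnings. The
# thresholds and responses are illustrative assumptions, not the contents of
# Tesla's recall update.

def warning_level(seconds_without_driver_input: float) -> str:
    """Map how long the driver has gone without steering input or eye contact
    to an escalating software response. Software can warn and disengage, but
    it cannot retrofit the sensors or road restrictions a system lacks."""
    if seconds_without_driver_input < 10:
        return "none"
    if seconds_without_driver_input < 25:
        return "visual nag on the touchscreen"
    if seconds_without_driver_input < 40:
        return "audible alarm"
    return "disengage: slow the car and lock the driver out of Autopilot"
```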

But even in the tiny victory of forcing a recall based on human factors, NHTSA has contributed in its small way to the growing understanding that Tesla's claims about its technology are untrue and unsafe. Musk has been arguing since 2019 that Tesla's self-driving technology was progressing so fast that adding driver monitoring wouldn't make sense, and that any human input would only introduce error into the system. After giving him four years' worth of the benefit of the doubt, NHTSA is at long last calling the bluff.

Though hardly a heroic effort to protect the public roads, this recall does open the door for broader action. The Department of Justice has been investigating Tesla's "Full Self-Driving" for some time now, and the tacit admission that humans are still the safety-critical factor in Tesla's automated driving system may be a prelude to more muscular enforcement. More importantly, it provides ammunition for an army of hungry personal injury lawyers to tear into Tesla's cash pile in a feeding frenzy of civil litigation.

If the end is coming for Tesla's dangerous and deceptive foray into self-driving technology, it can't come soon enough. Given that the richest man in the world got there at least in part by introducing new risks to public roads, his success sets a troubling example for future aspirants to towering wealth. Out of fear of that example alone, let us hope this recall is only the beginning of the regulatory action against Autopilot.

Ed Niedermeyer is the author of Ludicrous: The Unvarnished Story of Tesla Motors, and cohost of The Autonocast. He has covered and commented on cars and mobility technology for a variety of outlets since 2008. 
