Government-led investigations of two recent collisions involving Teslas have raised questions about the safety of emerging vehicle technologies, particularly Tesla’s Autopilot feature.
Both collisions occurred in Michigan, but only one of the drivers said they were using Autopilot at the time of the crash. Beyond those two crashes, the National Highway Traffic Safety Administration (NHTSA) said it is actively investigating 21 other collisions involving Teslas.
This isn’t the first time Tesla’s technology has come under scrutiny. The well-known electric vehicle manufacturer faced numerous questions in 2016 following the death of a Florida driver whose Autopilot system failed to brake for a tractor-trailer crossing the road.
Each Tesla vehicle comes with Autopilot, a driver-assistance feature that uses cameras and sensors to brake, steer, and accelerate automatically. The company’s messaging, including statements like “full potential for autonomous driving,” has gotten it into trouble domestically and internationally. Last year, a German court banned Tesla from using such messaging on the grounds that the company misled consumers about the vehicles’ automated driving abilities.
NHTSA said the Autopilot system does not make a Tesla vehicle “capable of driving itself.” The agency added: “The most advanced vehicle technologies available for purchase today provide driver assistance and require a fully attentive human driver at all times performing the driving task and monitoring the surrounding environment.”