Fatal Tesla accident exposes fundamental flaws in the levels of driving automation framework

Ill-conceived standards can kill. The Tesla accident in which Joshua D. Brown was killed in early May could not have happened if SAE (Society of Automotive Engineers), NHTSA and BASt had not provided a rationalization for placing cars with incomplete and inadequate driving software on the road.

Since their publication, the frameworks for driving automation (SAE 2014, NHTSA 2013, BASt 2010) have been criticized for ignoring established knowledge in human factors. Experts in the field agree that human drivers cannot be expected to continuously supervise driving automation software and correct its shortcomings and errors within a split second when problematic traffic situations occur. SAE Levels 2 and 3 are therefore inherently unsafe, and they should not have appeared as a viable variant of driving automation in any framework at all!

Frameworks are not arbitrary. Unfortunately, the driving automation frameworks were heavily influenced by the perceived needs of the auto industry, which already had driver assistance systems on the road and favored a gradual evolution of these systems towards fully autonomous driving. It is understandable that the authors wanted a framework that simplifies the path towards fully autonomous driving, not just technically but also from a legal and commercialization perspective: automation could proceed in baby steps, most of which would not involve fundamental changes and would not require legislators to take a hard look at the underlying technology.

This is how Tesla was able to put their vehicle with auto-pilot software on the market. It was presented as a small step from cruise control to full lateral and acceleration/deceleration control by the system. Nothing else should change, they argued: the human is still in full control and bears full responsibility (which means that the driver will always be the scapegoat if something goes wrong!); the vehicle does not have the ambition of performing all tasks by itself. The frameworks clearly provide support for this argument. But they overlook the key difference: the software now handles the driving task continuously, for longer stretches of time, without the need for human action. There is a fundamental difference between systems that drive continuously and the ad-hoc, short-term operation of driver assistance systems (e.g. parking, emergency braking, lane departure warning), which take over driving functions only for brief periods. Any framework for automated driving should have included this distinction!

Software that assumes the driving task continuously changes everything! Human drivers can and will relax. Their minds will no longer be on the traffic around them at all times. It is well known that human drivers tend to trust autonomous driving algorithms too quickly and underestimate their deficiencies. And it takes a significant amount of time to get back into the loop when the car needs to return control to the driver. Unfortunately, the authors of the frameworks failed to think through the details and problems that follow at Levels 2 and 3. They thought about strategies for handing control back from the car to the human, but apparently they did not perform a risk analysis considering how potential crisis situations that require rapid reaction could be mastered. Such an analysis would have shown immediately that
a) there are many critical situations in which a hand-off from the vehicle to the driver cannot be carried out quickly enough to avoid catastrophic consequences, and
b) there are many situations in which a driver in supervision mode cannot detect a lack of capability or misbehavior of the driving automation software fast enough.

The Tesla accident is a good example to illustrate these problems. Although the accident occurred on May 7th, only some details have been released. It happened around 3:40 PM on divided Highway 500 near Williston, Florida (view the map). A tractor-trailer turned left, crossing the path of the Tesla. Without braking at all, the Tesla hit the trailer approximately in the middle, went under it, emerged on the other side and continued for several hundred feet before coming to a stop at a telephone pole (more information on the accident, including the police sketch, is available). The weather was good: no rain, a dry road, good visibility. The road runs straight for miles, and at 3:40 PM the sun stood in the west, behind the Tesla.

The speed limit on the road was 65 mph (about 105 km/h), which translates into a stopping distance of roughly 64 meters and a stopping time of about 4 seconds (which would also have been enough time for the truck to clear the intersection). The size of the tractor-trailer has not been made public, but it was probably between 65 and 73 feet (20 and 22 meters) long. Assuming a standard lane width of 12 feet (3.7 m), and estimating the distance between the two carriageways of the divided highway from the Google Earth image to be about 20 meters, the trailer had almost enough space between the carriageways to make the 90-degree turn and could then continue straight on, crossing the two lanes of the highway. If we assume that the left turn (the part at the lowest average speed) takes at least 6 seconds (estimated from a video showing tractor-trailers making a left turn) and that the truck then crosses the intersection at an average speed of 10 mph (16 km/h), then the truck needs an additional 6 seconds to clear the intersection. As the trailer was hit in the middle by the Tesla driving in the outer lane, the truck must have been about 30 feet (10 meters) short of clearing the intersection. Thus the tractor-trailer would have cleared the intersection about 2 seconds later.
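
As a sanity check on these estimates, here is a rough back-of-the-envelope sketch in Python. The deceleration of about 6.6 m/s² (roughly 0.67 g) is my own assumption, chosen only so that the result matches the 64-meter stopping distance quoted above; none of these figures come from the official investigation.

```python
# Rough reconstruction of the timeline using the estimates above (all values assumed).
MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

tesla_speed = 65 * MPH_TO_MS            # ~29.1 m/s at the posted speed limit
deceleration = 6.6                      # m/s^2, assumed (~0.67 g) to match the 64 m figure

braking_distance = tesla_speed ** 2 / (2 * deceleration)   # ~64 m
braking_time = tesla_speed / deceleration                   # ~4.4 s, i.e. "about 4 seconds"

truck_turn_time = 6.0                   # s, estimated from videos of similar left turns
truck_crossing_speed = 10 * MPH_TO_MS   # ~4.5 m/s average while crossing the lanes
shortfall_at_impact = 30 * FT_TO_M      # ~9 m the trailer still needed to clear

time_to_clear_after_impact = shortfall_at_impact / truck_crossing_speed   # ~2 s

print(f"braking distance ~{braking_distance:.0f} m, braking time ~{braking_time:.1f} s")
print(f"trailer would have cleared the intersection ~{time_to_clear_after_impact:.1f} s later")
```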

At the moment, much of the discussion about the accident centers on the driver's attention. We will never know whether or when the driver saw the truck. There are several possible scenarios. If we take the time horizon of 10 seconds (= 6 + 6 - 2) before the accident, when the tractor-trailer initiated its turn, then the Tesla was about 280 meters from the intersection. At this distance the large tractor-trailer moving into the intersection would have been clearly visible. A driver engaged in the driving task (not on auto-pilot) could not have failed to see the truck and, given the lack of other nearby traffic or visual distractions, would have noticed with enough lead time that the truck was continuing onto the intersection. A step on the brake would have defused the situation and avoided the accident.
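
The distance figure follows directly from that 10-second horizon; a minimal sketch, again assuming the Tesla was traveling at roughly the posted limit:

```python
# Distance of the Tesla from the intersection when the truck began its turn.
MPH_TO_MS = 0.44704

tesla_speed = 65 * MPH_TO_MS      # ~29.1 m/s, assumed
horizon = 6 + 6 - 2               # s before impact: turn + crossing - time still needed to clear

distance_to_intersection = tesla_speed * horizon
print(f"~{distance_to_intersection:.0f} m from the intersection at the {horizon} s mark")
# ~290 m at exactly 65 mph; close to the ~280 m quoted above if the car was slightly slower
```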

The scenario looks very different with auto-pilot. The driver knew that the road ran straight for miles with optimal visibility, which translates into a low overall driving risk. The driver may have paid attention, but not as much attention as when driving without auto-pilot. When a car drives itself for many miles, a driver won't be as alert as when he performs the driving task himself. His attention will wane; the truck on the left side may have received only a short glance. The truck's intent to make a left turn would have been obvious, but the truck slowed down when it entered the turn about 10 seconds before impact, and the driver would certainly have expected that the truck would come to a stop and that the auto-pilot was also aware of the large truck. Thus even if the driver saw the truck initiate the turn, he would probably not have been concerned or inclined to pay special attention to it. This was just another of probably thousands of intersections that Joshua Brown, who used the auto-pilot frequently and blogged about it, had passed. His confidence in the Tesla's handling of intersections may have been high. Although he knew that the auto-pilot is not perfect, he probably did not expect that a large truck would be overlooked. In addition, he was probably aware of a YouTube video entitled "Tesla saves the day" which had circulated widely a few months earlier. It showed how a Tesla had auto-braked just in time for a car crossing its path from the left.

The critical time window for recognizing the gravity of the situation and acting to prevent the accident was less than 10 seconds, and only 6 seconds before impact was it unmistakably clear that the truck was moving into the intersection instead of coming to a stop. If the driver was not fully focused on the road the whole time but was alert during the 3 seconds between 6 and 3 seconds prior to impact, he could have prevented the accident. But it is unrealistic to expect that a non-active driver will become fully focused on the traffic at each and every intersection a car on auto-pilot passes, and that he will always be alert for hard-to-anticipate, extremely rare but very critical short-term situations.
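
A quick calculation under the same assumptions (plus an assumed perception-reaction time of about 1.5 seconds, which is my own figure, not from any report) shows how narrow this window really was if the car itself never brakes:

```python
# Latest point at which the driver could still have braked to a full stop before the trailer.
MPH_TO_MS = 0.44704

tesla_speed = 65 * MPH_TO_MS     # ~29.1 m/s, assumed
reaction_time = 1.5              # s, assumed perception-reaction time of an alerted driver
braking_distance = 64.0          # m, as estimated above

distance_needed = reaction_time * tesla_speed + braking_distance    # ~108 m
latest_start = distance_needed / tesla_speed                        # ~3.7 s before impact

print(f"braking had to begin no later than ~{latest_start:.1f} s before impact")
# Slowing just enough to let the trailer clear (~2 s more) would have bought a little extra margin.
```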

Even if the driver saw the truck and recognized that it was moving into the intersection 3 to 6 seconds before impact, other problems arise: he has to jump into action and take over from the car. This needs time, both for the decision to revoke control from the car and for physically assuming control of the vehicle. Part of the driver's brain has to work through the expected behavior of the car: if the car has not yet decelerated, does this mean that it has not seen the large truck at all, or does it mean that braking is not necessary (the car may have concluded that the tractor-trailer will clear the intersection in time)? Could it really be that the car does not see this blatantly obvious tractor-trailer? Have I completely overestimated the capability of this car? The shorter the remaining reaction time when the driver realizes the impending crisis, the more dangerous and potentially paralyzing this additional mental load may become.

Developers of driver assistance systems cannot expect drivers to be fully alert all the time and ready to take over in a split second. Moreover, they cannot expect drivers to understand and immediately recognize deficiencies or inadequacies of the software. Who would have expected that Tesla's auto-pilot does not recognize a tractor-trailer in the middle of an intersection?

But the key problem is not a software issue. It is the mindset that offloads responsibility from the driving software to the driver. Developers will be much more inclined to release imperfect software if they can expect the driver to fill any gap. That Tesla uses a non-redundant mono camera is another illustration of the problem. What if the camera suddenly malfunctions or dies on a winding road with the auto-pilot engaged and the driver is not paying enough attention to take over in a split second? How is it possible to release such a system knowing full well that drivers using it will not always be paying full attention? This is only possible because we have standards that let developers offload the responsibility to the driver.

The often-raised counter-argument that the Level 2 auto-pilot has already saved lives is not valid: it confuses two different kinds of driver assistance systems: those – such as emergency braking systems – which take over the driving function only for short periods when they are really needed, and those that assume continuous control of the driving function for longer stretches of time and thus lead human drivers to take their minds off the road at least part of the time. Short-term functions such as emergency braking are not controversial. They do not depend on the auto-pilot, and it is they, not the auto-pilot, that are saving the lives.

There is only one variant in which software that assumes the driving task continuously, for longer stretches of time, can be developed and released to the market: the autonomous driving system must take full responsibility for the driving task and may not require human supervision while engaged. Thus Levels 4 and up are viable approaches. The Tesla accident does not merely show a software problem; it illustrates the dangers of Levels 2 and 3. These levels must be scrapped from the framework!
