The positive risk profile of self-driving cars

The two recent fatal accidents involving self-driving cars from Uber and Tesla have not led to the major backlash which many people had predicted. While this does not come as a surprise (such predictions ignore the long history of technical innovation, where accidents have rarely slowed, let alone halted, the advance of a technology), the two harrowing accidents have nevertheless increased the concern of the public and of regulators about the safety of self-driving cars.

Therefore this is the right time to perform a more careful analysis of the risk profile of this technology. As we will show in the following, the specific forms of risk, accident scenarios, and risk mitigation strategies for self-driving cars differ very significantly from other technologies that have been developed over the last centuries. To illustrate the differences, we will examine three key aspects of the risk profile of self-driving car technologies and contrast them with established technologies:

1) One-sided or two-sided distribution of safety outcomes
Self-driving cars are an unusual product from the perspective of safety-related outcomes. Practically every product comes with the risk that its use may inflict harm under some circumstances. For most products the safety-related outcomes are either harm (negative outcome) or no effect. A much smaller group of products can also lead to positive safety-related outcomes: their use increases safety. A self-driving car will prevent some accidents (positive outcome) or cause accidents (negative outcome); this two-sided distribution of safety outcomes contrasts with product categories such as microwaves, coffee machines or electric drills, which have only one-sided safety outcomes. From one perspective, products with two-sided safety distributions are preferable to products with one-sided distributions. But they present a challenge for risk analysis and for ethical considerations, because uncertainty about the distribution of negative outcomes may need to be balanced against the certainty of positive outcomes. Delaying the use of self-driving cars for too long may cause harm (accidents that would not have happened with self-driving cars on the road).

In the health sector, this dilemma is a well-known problem for the approval of medical treatments, and the US Food and Drug Administration (FDA) has worked hard to balance both sides of the distribution (both by speeding up the approval process and by giving critically ill patients access to experimental treatments in certain cases). But self-driving cars differ from medical treatments in a very positive way: whereas the expected positive effects of a treatment often do not materialize (uncertainty on the positive part of the distribution), there is much more certainty about the positive safety outcomes of self-driving cars (accident prevention), and we already have statistical data on the safety benefits of some driver assistance systems.

Thus any legislative effort to regulate the approval of self-driving cars needs to consider both sides of the distribution of safety outcomes.

2) Alignment of safety goals with development goals
For most products, safety is not an innate part or consequence of the development process. Over the last century we have learned the hard way that a large body of laws and regulations (which then lead to well-thought-out internal processes) is needed to ensure that safety is adequately addressed in all phases of the development process.

However, the situation is different for self-driving cars. For anyone developing an autonomous vehicle, the primary and overarching development goal is to operate the vehicle safely at all times. Driving as such is NOT the primary goal; it is a secondary concern, because merely navigating the car on the road and keeping control of speed and direction is only a very small part of the development problem.
The internal state of the car at any given moment is what matters most, because the car needs to constantly monitor its environment, identify road signs and traffic lights, predict the actions of other traffic participants, etc. Therefore the main concern of development teams is to make sure that the car has a complete and accurate internal representation (of state and probable behavior) of what is going on around it. The key metrics in the development process are not just driving errors but their much earlier causes: shortcomings in sensing, interpretation, and prediction. Thus the development of self-driving cars is a constant and intensive search for failures, potential errors, and flaws. As a consequence, even in the absence of any safety regulations, it would not be possible to develop a self-driving car for the market without being constantly focused on safety. Of course, this is no guarantee that no mistakes will be made, and it is no guarantee that the development process will lead to absolutely flawless vehicles (that is not possible). But self-driving cars are one of only very few technologies where safety issues are inherently the primary focus of development.

3) Efficiency of recall process for defective products
Self-driving cars are almost unique in another, third dimension of risk: for most technologies it is difficult to prevent harm once a defective model is released to the public (and this has important implications for regulation). Once an espresso machine, a drug or another product reaches the hands of thousands or millions of users, it is very difficult to ensure that a defective product model will not repeatedly lead to harm somewhere. Recalls take time and rarely reach all owners. Again, the situation is very different for self-driving cars. They incorporate wireless communication and update mechanisms that allow the near-instant grounding of defective vehicle models. A worst-case scenario in which a flaw keeps causing harm after tens of thousands of vehicles have been released to public roads is not realistic: as soon as accidents point to the flaw, the other cars on the road can quickly be grounded, and further accidents will be prevented. Of course this does not mean that standards for approving self-driving cars should be lax, but rather that we should keep the likely risk scenarios in perspective when we consider regulations for self-driving cars.

In summary, the risk profile of self-driving cars is quite unusual because it is positive on the following three dimensions:
— With self-driving cars, safety is the primary development objective and focus; it is an inherent part of the development process and can never be just an afterthought or constraint of the development process
— Self-driving cars have two-sided safety outcomes: besides the risk of failure, they also increase the safety of passengers. Keeping self-driving cars off the road for too long because of worries about accidents may be harmful
— Self-driving cars allow instant grounding of defective models; defects cannot harm large groups of customers

In the public and regulatory discourse we need to do justice to the unique risk characteristics of self-driving cars!

P.S. For more on self-driving car safety and how (not) to determine statistically whether self-driving cars are safe, see my earlier post on Misconceptions of Self-Driving cars: Misconception 7: To convince us that they are safe, self-driving cars must drive hundreds of millions of miles

 

Misconception 7: To convince us that they are safe, self-driving cars must drive hundreds of millions of miles

One of the most difficult questions for self-driving cars concerns their safety: How can we determine whether a particular self-driving car model is safe? The most popular answer to this question is based on a straightforward application of statistics and leads to conclusions such as that “…fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability…”. This statement comes from a recent RAND report by Nidhi Kalra and Susan Paddock on the topic. Unfortunately, such statements are untenable in this form because the statistical argument contains major oversights and mistakes, which we will point out in the following.

7.1 Failure rate estimation

The argument is usually presented as a problem of failure rate estimation where observed failures (accidents involving self-driving cars) are compared against a known failure rate (accident rates of human drivers). Accidents are modeled as discrete, independent and random events that are determined by a (statistically constant) failure rate. The failure rate for fatal accidents can be calculated by dividing the number of accidents with fatalities by the number of vehicle miles traveled. If we consider the 32,166 fatal crashes in US traffic in 2015 and relate them to the 3.113 trillion miles which motor vehicles traveled, then the failure rate is 32,166 / 3.113 trillion = 1.03 fatal crashes per 100 million miles. The probability that a fatal crash occurs on a stretch of 1 mile is extremely low (0.0000010273%), and the opposite, the success rate, i.e. the probability that no fatal accident occurs on a stretch of 1 vehicle-mile-traveled (VMT), is very high (99.999998972%).

By observing cars driving themselves, we can obtain estimates of their failure rate. The confidence that such estimates reflect the true failure rate increases with the number of vehicle miles traveled. Simple formulas for binomial probability distributions can be used to calculate the number of miles which need to be driven without failure to reach a certain confidence level: 291 million miles need to be driven by a self-driving car without fatality to be able to claim with a 95% confidence level that self-driving cars are as reliable as human drivers. This is nearly three times the distance between fatal crashes that occur during human driving. If we relax the required confidence level to 50%, then at least 67 million miles need to be driven without fatality before we can be confident that self-driving cars are safe.

Although this calculation is simple, most authors – including the authors of the RAND report – use the wrong measures. Instead of dividing the number of crashes involving fatalities (32,166) by VMT, they divide the number of fatalities (35,091) by VMT. This overstates the failure rate of human drivers, because a single accident may lead to multiple fatalities and the number of fatalities per fatal accident may depend on many factors other than the reliability of the driver.
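These figures follow from a simple binomial model. The short Python sketch below (my own illustration, assuming fatal crashes are independent events with a constant per-mile probability, as in the argument above) reproduces the numbers; small deviations are due to rounding of the underlying rate.

```python
import math

# Inputs used in the text: fatal crashes and vehicle miles traveled in the US, 2015.
fatal_crashes = 32_166
vmt_miles = 3.113e12                      # 3.113 trillion vehicle miles traveled

p_fail = fatal_crashes / vmt_miles        # failure rate per mile (~1.03e-8)
print(f"fatal crashes per 100 million miles: {p_fail * 1e8:.2f}")

def miles_without_failure(p: float, confidence: float) -> float:
    """Miles that must be driven with zero fatal crashes to claim, at the given
    confidence level, that the failure rate is no worse than p (accidents
    modeled as independent Bernoulli trials per mile)."""
    return math.log(1.0 - confidence) / math.log(1.0 - p)

print(f"95% confidence: {miles_without_failure(p_fail, 0.95) / 1e6:.0f} million miles")  # ~290
print(f"50% confidence: {miles_without_failure(p_fail, 0.50) / 1e6:.0f} million miles")  # ~67
```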


Fatal Tesla accident exposes fundamental flaws in the levels of driving automation framework

Ill-conceived standards can kill. The Tesla accident in which Joshua D. Brown was killed in early May could not have happened if SAE (Society of Automotive Engineers), NHTSA and BAST had not provided a rationalization for placing cars with incomplete and inadequate driving software on the road.

Since their publication, the frameworks for driving automation (by SAE 2014, NHTSA 2013, BAST 2010) have been criticized for ignoring established knowledge in human factors. All experts in the field agree that human drivers cannot be expected to continuously supervise driving automation software and correct its shortcomings and errors at split-second notice when problematic traffic situations occur. SAE Levels 2 and 3 are therefore inherently unsafe, and these levels should not have appeared as viable variants of driving automation software in any framework at all!

Frameworks are not arbitrary. Unfortunately, the driving automation frameworks were heavily influenced by the perceived needs of the auto industry, which already had driver assistance systems on the road and favored a gradual evolution of their systems towards fully autonomous driving. It is understandable that the authors wanted a framework that simplifies the path towards fully autonomous driving not just from a technical but also from a legal and commercialization perspective, where automation can occur in baby steps, most of which would not involve fundamental changes and would not require legislators to take a hard look at the underlying technology.

This is how Tesla was able to put their vehicle with auto-pilot software on the market. It was presented as a small step from cruise control to full lateral and acceleration/deceleration control by the system. Nothing else should change, they argued: the human is still in full control and bears full responsibility (which means that the driver will always be the scapegoat if something goes wrong!); the vehicle does not have the ambition of performing all tasks by itself. The frameworks clearly provide support for this argument. But they overlook the key difference: the software now handles the driving task continuously, for longer stretches of time, without the need for human action. There is a fundamental difference between continuous driving systems and ad-hoc, short-term driver assistance functions (e.g. parking, emergency braking, lane warning), which only take over driving functions for short periods of time. Any framework for automated driving should have included this distinction!

Software that continuously assumes the driving task changes everything! Human drivers can and will relax. Their minds will no longer be on the traffic around them at all times. It is well known that human drivers tend to trust autonomous driving algorithms too quickly and underestimate their deficiencies. And it takes a significant amount of time to get back into the loop when the car needs to return control to the driver. Unfortunately the authors of the frameworks failed to think through the details and problems that follow at Levels 2 and 3. They thought about strategies for handing control back from the car to the human; but apparently they did not perform a risk analysis that considered how potential crisis situations requiring rapid reaction could be mastered. Such an analysis would have shown immediately that
a) there are many possible critical situations where a hand-off from the vehicle to the driver can not be carried out quickly enough to avoid catastrophic consequences and
b) there are many situations where a driver in supervision mode is not able to detect a lack of capability or misbehavior by the driving automation software fast enough.

The Tesla accident is a good example to illustrate these problems. Although the accident occurred on May 7th, only some details have been released. The accident occurred around 3:40 PM on divided Highway 500 near Williston, Florida. A tractor-trailer turned left, crossing the path of the Tesla. Without braking at all, the Tesla hit the trailer approximately in the middle, went under it, emerged on the other side and continued driving for several hundred feet before coming to a stop at a telephone pole. The weather was good: no rain, a dry road, good visibility. The road runs straight for miles. At 3:40 PM the sun stood in the west, behind the Tesla. The speed limit on the road was 65 mph (104 km/h), which translates into a stopping distance of about 64 meters and a stopping time of about 4 seconds (which would also have been enough time for the truck to clear the intersection). The size of the tractor-trailer has not been made public, but it was probably between 65 and 73 feet (20 and 22 meters). Assuming a standard lane width of 12 feet (3.7 m), and estimating the distance between the two sections of the divided highway from the Google Earth image to be about 20 m, the trailer had almost enough available space between both lanes to make the 90-degree turn and could then continue straight on, crossing the two lanes of the highway. If we assume that the left turn (the part at the lowest average speed) takes at least 6 seconds (estimated from a video showing trailer trucks making a left turn) and that the truck then passes the intersection at an average speed of 10 mph (16 km/h), then the truck needs an additional 6 seconds to clear the intersection. As the trailer was hit in the middle by the Tesla driving in the outer lane, the truck must have been about 30 feet (10 m) short of clearing the intersection. Thus the tractor-trailer would have cleared the intersection about 2 seconds later.
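The timing figures above can be checked with a little arithmetic. The sketch below is my own rough reconstruction; the deceleration value is an assumption (firm braking on dry asphalt), while all other inputs are the estimates given in the text.

```python
MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

# Tesla: braking from the 65 mph speed limit, assuming ~6.5 m/s^2 deceleration.
v_tesla = 65 * MPH_TO_MS                          # ~29 m/s
decel = 6.5                                       # m/s^2 (assumed)
braking_distance = v_tesla ** 2 / (2 * decel)     # ~64 m, as stated above
braking_time = v_tesla / decel                    # ~4.5 s

# Truck: 6 s for the turn itself, then crossing the lanes at ~10 mph;
# at impact the trailer was still ~30 ft short of clearing the intersection.
v_truck = 10 * MPH_TO_MS                          # ~4.5 m/s
turn_time = 6.0                                   # s (estimated in the text)
crossing_time = 6.0                               # s to cross the intersection
gap_to_clear = 30 * FT_TO_M                       # ~9 m remaining at impact

time_to_clear = gap_to_clear / v_truck            # ~2 s more would have sufficed
window = turn_time + crossing_time - time_to_clear  # ~10 s from start of turn to impact

print(f"braking distance ~{braking_distance:.0f} m, braking time ~{braking_time:.1f} s")
print(f"trailer would have cleared the intersection ~{time_to_clear:.1f} s after impact")
print(f"time from start of the turn to impact ~{window:.0f} s")
```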

At the moment, much of the discussion about the accident centers around the driver’s attention. We will never know whether or when the driver saw the truck. There are several possible scenarios: if we take the time horizon of 10 seconds (= 6 + 6 - 2) before the accident, when the trailer-truck initiated the turn, then the Tesla was about 280 meters from the intersection. At this distance, the large trailer-truck moving into the intersection would have been clearly visible. A driver engaged in the driving task (not on auto-pilot) could not have failed to see the truck and – given the lack of other nearby traffic or visual distractions – would have noticed with enough lead time that the truck was continuing onto the intersection. A step on the brake would have defused the situation and avoided the accident.

The scenario looks very different with auto-pilot. The driver knew that the road went straight for miles with optimal visibility, which translates into a low overall driving risk. The driver may have paid attention, but not as much attention as when driving without auto-pilot. When a car drives by itself for many miles, a driver won’t be as alert as when he performs the driving function himself. Attention wanes; the truck on the left side may have received a short glance from the driver. The truck’s intent to make a left turn would have been obvious; but the truck slowed down when it entered the turn about 10 seconds before impact, and the driver would certainly have expected that the truck would come to a stop and that the auto-pilot was also aware of the large truck. Thus even if the driver saw the truck initiate the turn, he would probably not have been concerned or inclined to pay special attention to it. This was just another one of probably thousands of intersections that Joshua Brown, who used the auto-pilot frequently and blogged about it, had passed. His confidence in the Tesla for handling intersections may have been high. Although he knew that the auto-pilot was not perfect, he probably did not expect that a large truck would be overlooked. In addition, he was probably aware of a YouTube video entitled “Tesla saves the day” which had circulated widely a few months earlier. It showed how a Tesla had auto-braked just in time for a car crossing its path from the left.

The critical time window for recognizing the gravity of the situation and acting to prevent the accident was less than 10 seconds; and only 6 seconds before impact was it unmistakably clear that the truck was moving into the intersection instead of coming to a stop. If the driver was not fully focused on the road all the time but was alert in the 3 seconds between 6 and 3 seconds prior to impact, he could have prevented the accident. But it is unrealistic to expect that a non-active driver will become fully focused on the traffic at each and every intersection that a car on auto-pilot passes and that he will always be alert for hard-to-anticipate, extremely rare but very critical short-term situations.

Even if the driver saw the truck and recognized that it was moving into the intersection 3 to 6 seconds before impact, other problems arise: he has to jump into action and take over from the car. This needs time – both for the decision to revoke control from the car and for physically assuming control of the vehicle. Part of the driver’s brain has to work through the expected behavior of the car: if the car has not yet decelerated, does this mean that it has not seen the large truck at all, or does it mean that it is not necessary to brake (the car may have concluded that the trailer-truck will clear the intersection in time)? Could it really be that the car does not see this blatantly obvious trailer-truck? Have I completely overestimated the capability of this car? The shorter the remaining reaction time when the driver realizes the impending crisis, the more dangerous and potentially paralyzing this additional mental load may become.

Developers of driver assistance systems can not expect that drivers are fully alert all the time and ready to take over in a split second. Moreover, they can not expect that drivers understand and can immediately recognize deficiencies or inadequacies of the software. Who would have expected that Tesla’s auto-pilot does not recognize a tractor-trailer in the middle of an intersection?

But the key problem is not a software issue. It is the mindset which offloads the responsibility from the driving software to the driver. Developers will be much more inclined to release imperfect software if they can expect the driver to fill any gap. That Tesla uses a non-redundant mono camera is another illustration of the problem. What if the camera suddenly malfunctions or dies on a winding road with the auto-pilot engaged and the driver does not pay enough attention to take over in a split second? How is it possible to release such a system knowing full well that drivers using it will not always be paying full attention? This is only possible because we have standards that let developers offload the responsibility to the driver.

The often-raised counter-argument that the Level 2 auto-pilot has already saved lives is not valid: it confuses two different kinds of driver assistance systems: those – such as emergency braking systems – which only take over the driving function for short periods of time when they are really needed, and those that assume continuous control of the driving function for longer stretches of time and thus lead human drivers to take their minds off the road at least part of the time. Short-term functions such as emergency braking are not controversial. They do not depend on the auto-pilot, and it is these functions, not the auto-pilot, that save lives.

There is only one variant in which software that assumes the driving task continuously, for longer stretches of time, can be developed and released to the market: the autonomous driving system must take full responsibility for the driving task and may not require human supervision when engaged. Thus Levels 4 and up are viable approaches. The Tesla accident does not only show a software problem; it illustrates the dangers of Levels 2 and 3. These levels must be scrapped from the framework!

The left turn problem for self-driving cars has surprising implications

Self-driving car technology advances rapidly, but critics frequently point out that some hard problems remain. John Leonard, who headed MIT’s self-driving car project at the 2007 DARPA Urban Challenge, eloquently describes various challenging situations including hand-waving police officers and left turns in heavy traffic.

The hand-waving police officer problem can be solved easily with a simple workaround: The car just detects the hand waving situation. It then dispatches a camera feed to a remote control center and asks a remote human operator for guidance (similar to Google’s patent 8996224).

The left turn problem is more interesting. Such situations occur more frequently and they do present significant challenges. Self-driving car prototypes have been known to wait for long intervals at intersections before finally making the left turn – heavily testing the patience of human drivers stuck behind them. The video by John Leonard clearly shows how hard it can be to make a left turn when traffic is heavy in all directions and slots between cars coming from the left and the right are small and rare.

How do human drivers handle such situations? First they wait and observe the traffic patterns. If opportunities for left turns are rare, they adjust their driving strategy. They may accelerate faster and will try to inch into a smaller slot than usual. Sometimes they will move slightly into the lane of cars coming from the left to signal that they intend to make a turn and expect other cars to make room. Or they will try to find an intermediate spot between the various lanes and break the left turn down into one move towards this spot, where they come to a stop, and a second move from the intermediate position into the target lane. Leonard is right that programming such maneuvers into self-driving cars presents a major challenge.

But the problem is more fundamental. When we develop self-driving cars, we gain insights about the domain of driving and extend our knowledge not only about algorithms but also about human driving. To make a left turn, self-driving cars have to analyze the traffic situation at the intersection. They are much better than humans at simultaneously identifying the traffic participants in all directions and detecting their speeds, and they are quite good at anticipating their trajectories. Current driverless car prototypes also have no problem deciding on an appropriate path for the left turn. When a self-driving car hesitates at an intersection, the reason is not a problem with the algorithm but rather that the self-driving car finds that the safety margins for executing the turn are too small in the current situation: the risk is too high. Unfortunately, this problem can not be solved through better algorithms but only by increasing the level of acceptable risk! The risk of a left turn at an intersection is determined by the layout of the intersection, physics, and the range of potential behavior of the other traffic participants, none of which can be changed by the self-driving car.

Left turns are indeed known to be risky. We may not think about it when we make a left turn, but accident statistics paint a very clear picture. An NHTSA study that analyzed crossing-path crashes found that police-reported crashes involving left turns (913,000) are almost 10 times as frequent as police-reported crashes involving right turns (99,000). If we consider that right and left turns are not equally distributed in normal driving (right turns occur more frequently, but exact data are not available), then the risk of a left turn may be between 10 and 20 times larger than the risk of a right turn. In 2013, crashes between a motorcycle and another vehicle making a left turn cost 922 lives; this amounted to nearly half (42%) of all fatalities due to crashes involving a motorcycle and another vehicle. Arbella Insurance reported that in 2013, 31% of its severe accident claims involved left turns. Thus human drivers have little reason to be proud of their left-turn capabilities.
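The back-of-the-envelope comparison above can be written out as follows. The crash counts are the NHTSA figures quoted in the text; the ratio of right turns to left turns in everyday driving is unknown, so the values below are assumptions used only to show how the 10x to 20x range arises.

```python
left_turn_crashes = 913_000      # police-reported crossing-path crashes, left turns
right_turn_crashes = 99_000      # police-reported crossing-path crashes, right turns

raw_ratio = left_turn_crashes / right_turn_crashes   # ~9.2x more left-turn crashes

# Assumed ratios of right turns to left turns in normal driving (illustrative only).
for right_to_left in (1.1, 1.5, 2.0):
    per_turn_risk = raw_ratio * right_to_left
    print(f"right/left turn frequency ratio {right_to_left:.1f}: "
          f"a left turn is ~{per_turn_risk:.0f}x riskier per turn")
```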

As a consequence, UPS largely eliminated left turns many years ago. Recently the route-planning app Waze rolled out a new feature that allows users to plan routes without left turns. These two examples show that self-driving cars do not even need the capability of making left turns in heavy traffic. It is possible to get along without such turns.

Thus the left turn problem for self-driving cars leads to the following three insights:

1) The left turn problem is not so much a problem of self-driving cars; it is really a problem of human drivers, who take too many risks at left turns, as we can see from the large number of left-turn accidents and from the careful risk analysis which self-driving cars perform when making a left turn. Autonomous cars should never make left turns as readily and rapidly as human drivers. As human drivers we need to be more self-critical about our own capabilities and more ready to question our assumptions about driving, instead of using our own driving behavior as the implicit standard for self-driving cars.

2) We need to carefully consider the acceptable risk profiles for self-driving vehicles. Risk profiles are not black and white; there are more alternatives than the high levels of risk that we take every day as human drivers without much thinking and the minimal-risk strategies adopted by all current self-driving car prototypes. It would be unacceptable to let self-driving cars barge into dense traffic in the way we sometimes consider viable and mostly get away with. But it would be possible to reduce the time that a driverless car has to wait when turning or merging by allowing the car to increase the acceptable risk by a small amount if clearly defined conditions are met (see the sketch after this list). In this area, much work and thinking is required. Expecting a self-driving car to minimize all conceivable risks and then operate as quickly as human drivers is a contradiction in itself. Instead of minimizing all risks, we need to seriously discuss what kinds of small risks should be allowed in well-defined situations.

3) We should not underestimate the creativity of the market in dealing with the remaining problems of self-driving cars. Many of the frequently cited problems of self-driving cars have practical workarounds that don’t require that much intelligence (three right turns instead of a left turn, remote assistance to deal with the hand-gesture problem, limiting the first self-driving taxis to snow-free regions to deal with the snow problem, etc.).
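To make insight 2) a bit more concrete, here is a toy sketch (my own illustration, not a proposal from any existing system) of how an acceptable-risk rule could be relaxed under clearly defined conditions: the minimum gap the planner accepts for an unprotected left turn shrinks slightly the longer the car has waited, but never below a hard safety floor. All thresholds are made-up placeholder values.

```python
def acceptable_gap_seconds(wait_time_s: float,
                           base_gap_s: float = 8.0,
                           floor_gap_s: float = 6.0,
                           relax_per_minute_s: float = 0.5) -> float:
    """Minimum gap (in seconds) accepted for an unprotected left turn.

    The accepted gap starts at base_gap_s and is reduced by relax_per_minute_s
    for every minute of waiting, but never drops below floor_gap_s.
    """
    relaxation = relax_per_minute_s * (wait_time_s / 60.0)
    return max(floor_gap_s, base_gap_s - relaxation)

for waited in (0, 60, 120, 300):
    print(f"waited {waited:3d} s -> accept gaps of at least "
          f"{acceptable_gap_seconds(waited):.1f} s")
```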

Volvo’s liability promise for autonomous mode may cut out insurance companies and independent repair shops

Volvo has recently stated that they will accept full liability for accidents that happen while the car drives in fully autonomous mode. This takes the heat away from the discussion about liability issues for self-driving cars. But it also has side effects that strengthen the business model of the auto maker: By accepting full liability the auto maker in effect shoulders the liability not only for all defects of the software (which no auto maker can evade anyhow) but also for all other accidents that may occur in autonomous mode. Some accidents can not be prevented: Obstacles may suddenly appear on the way (animals, pedestrians, other objects) and make an accident unavoidable. Defects of the roadway, certain weather conditions, and certain questionable behaviors of other traffic participants may lead to accidents that even the best software can not prevent.

Therefore the acceptance of full liability contains both a promise regarding the quality of the software and an insurance element. Volvo must either add the total, non-zero lifetime risk of driving in autonomous mode to the purchase price of their self-driving cars, which has the disadvantage of making their cars more expensive; or they could duplicate the insurance industry’s business model and request that their customers subscribe to a (low) supplementary insurance policy. The latter has the advantage that risk profiles – the total number of miles driven per year and the area where the cars are driven (urban, country, highway) – can be taken into account. But the insurance industry would surely mobilize against the latter approach and decry it as anti-competitive.

In the following we therefore examine the first case, where Volvo decides to include the cost of insurance as a hidden element in the purchase price, in more detail. It is hard to provide a good estimate of the risks, but there are some numbers we can build on: in 2012, US insurance expenditures for a car averaged $815 per year. If we take this as a proxy for the risk of human driving, then factoring in the risk of human driving over a 12-year life expectancy of a car would increase the purchase price by $9,780. How much lower will the risk of autonomous-mode driving be? A representative study of more than 5,000 severe accidents in the United States, carried out between 2005 and 2007 and published by NHTSA, provides some clues: the study found that human errors were the most critical factor in more than 93% of the accidents. In less severe accidents human error probably plays an even bigger, and certainly not smaller, role. Other factors were: technical failures: 2.0%, road conditions: 1.8%, atmospheric conditions (including glare): 0.6%. If we assume that autonomous vehicles do not add significant additional modes of error, then they should be able to reduce the number of accidents by at least a factor of 10 (1/(1-0.93) ≈ 14.3). Because the vehicles drive more defensively, brake earlier in critical situations, and are much more consistent in their behavior in critical situations than humans (some of whom will not react at all in a critical situation, not even step on the brakes), the average damage per accident is likely to be significantly smaller than the average current damage. Therefore the costs of vehicle accidents are likely to fall even further; we estimate that autonomous vehicles have the potential to reduce accident costs by a factor of between 15 and 50. This assumes that autonomous vehicles do not create major additional risks and don’t somehow cause rare but unusually enormous accidents. Under these assumptions, Volvo’s liability promise can be folded into the purchase price: if we assume a reduction of damages by a factor of 15, the life-span risk (12 years) translates into $652 of additional cost for each fully autonomous car which Volvo sells.
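The cost estimate above reduces to a few lines of arithmetic. In the sketch below, the insurance expenditure, vehicle lifetime, and damage-reduction factors are simply the figures assumed in the text:

```python
annual_insurance_cost = 815          # USD per car per year (2012 US average)
vehicle_lifetime_years = 12

# Lifetime cost of human-driving risk, used as a proxy for the liability Volvo assumes.
lifetime_human_risk = annual_insurance_cost * vehicle_lifetime_years   # $9,780

# Assumed damage-reduction factors for autonomous mode (from the estimate above).
for reduction_factor in (15, 50):
    cost_per_car = lifetime_human_risk / reduction_factor
    print(f"reduction factor {reduction_factor}: ~${cost_per_car:,.0f} of liability "
          f"cost per autonomous car over its lifetime")
```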

Accepting full liability for all accidents in autonomous mode may therefore indeed be a viable strategy for Volvo and other makers of fully autonomous vehicles. This move cuts out the insurance industry and – if copied by other auto makers – should not be a competitive disadvantage, because the risks are unlikely to differ greatly from auto maker to auto maker. In addition, auto makers might use this approach to open additional revenue streams for more risky use of vehicles where they might request additional fees – for example for heavily used fleet vehicles.

There is another side-effect of assuming liability for accidents in autonomous mode. Accidents are more likely if the cars are not maintained properly. Therefore auto makers may place more stringent requirements on maintenance, shorten maintenance intervals and require that the cars be maintained in certified repair shops only – which eliminates the business of independent repair shops. By increasing maintenance revenues, auto makers may be able to offset the costs of assuming liability for accidents.

In summary, Volvo’s shrewd move to assume liability may extend their revenue streams while cutting out insurance companies and independent repair shops.

Accident rates of self-driving cars: A critique of the Sivak/Schoettle study

To what degree are self-driving cars likely to reduce accidents and traffic deaths? This is a very important but very hard question, with implications for testing, insurance, regulations, and governments considering whether to accelerate or delay the introduction of autonomous cars. Now two researchers, Michael Sivak and Brandon Schoettle, of the Transportation Research Institute at the University of Michigan, have examined this problem in a short study titled “Road safety with self-driving vehicles: General limitations and road sharing with conventional vehicles” and arrived at four conclusions which – when read carefully – provide little insight into the problem, but when read casually seem to raise doubts about the expectation that self-driving cars will be significantly safer than human drivers.

As an example the abstract summarizes their second conclusion as follows: “It is not a foregone conclusion that a self-driving vehicle would ever perform more safely than an experienced, middle-aged driver”.

Who could argue against this statement? Of course, this is not a foregone conclusion. This is a hard problem and a substantial question. Neither would it be a foregone conclusion that a self-driving vehicle would ever perform more safely than an experienced young driver (or even an inexperienced young driver). But many readers will interpret this conclusion to mean that the authors – after having analyzed the issue – have found substantial problems that raise doubts as to whether autonomous cars could ever perform better than experienced, middle-aged drivers. Yet the full text of the report contains just one sentence which further examines this problem:

“To the extent that not all predictive knowledge gained through experience could exhaustively be programmed into a computer (or even quantified), it is not clear a priori (italics by the original authors) whether computational speed, constant vigilance, and lack of distractibility of self-driving vehicles would trump the predictive experience of middle-aged drivers.” (Page 4)

Nobody can argue with this statement. It would be a good introduction to a chapter that looks at the problem in more detail, provides some framework, examines the different aspects, and so on. But such a chapter does not materialize.

If we read the study carefully, we find a pattern: valid questions are raised, a small number of the aspects relating to these questions are outlined, and then the questions are rephrased into conclusions which are themselves questions. This is unfortunate because the topic is extremely important. More than a million people die in traffic accidents every year. If, twenty years from now, we look back from a situation where traffic accidents have fallen by more than a factor of five, we will be able to state with certainty how many lives could have been saved if self-driving cars had been introduced a few years earlier. We might find that tens of thousands of people lost their lives because governments and regulators did not realize the risk of delaying a highly beneficial technology, and because businesses and innovators were reluctant to advance the technology in a climate of mistrust and skepticism. Of course, from the perspective of today this is not a foregone conclusion, but we need to make an effort to understand the risks and likely accident patterns of autonomous vehicles much better.

There are lives at stake both if we are too optimistic and if we are too pessimistic about the potential of this technology. But the problem is not symmetric: if we are too pessimistic with respect to the potential of this technology, then we can easily find ourselves in a situation where we discover in hindsight that thousands of lives have been lost because of this pessimism and the resulting delay of the introduction. On the other hand, if we are overly optimistic with regard to the technology and accelerate innovation in this area, it is unlikely that thousands of lives will be lost because the cars do not perform as safely as expected: we can be confident that certification bodies will do their work and uncover problems before they can cause thousands of deaths, and regulators will most surely step in immediately when these cars do not perform as expected. At the current stage, therefore, pessimism about the technology’s potential may be much more deadly than optimism (which should not be confused with being blind to the risks).

We should work together urgently to formulate a theory of human traffic accidents and self-driving car accidents which can help us shed light on the issue and understand and organize the many different aspects of this problem. This is hard but it can be done. Please contact me at info.2011 ( at ) inventivio ( dot ) com if you are already working on this topic, if you know of a suitable approach for covering this problem or if you are interested in working together on this topic. I will post one approach on how this could be achieved next week.

Changes:
2015-01-23: Added link to the full text of the study.