Supervising autonomous cars on autopilot: A hazardous idea

As autonomous vehicle technology matures, legislators in several US states, other countries, and the United Nations are debating changes to the legal framework. Unfortunately, one of the core ideas of these legal efforts is untenable and has the potential to cripple the technology's progress. We show that the idea that drivers should supervise autonomous vehicles is based on false premises and will greatly limit and delay adoption. Given the enormous loss of life in traffic (more than one million people per year worldwide) and the safety potential of the technology, any delay will incur large human costs.
Read the full paper (pdf).

Invalid assumptions about advanced driver assistance systems nearing full autonomy

  • The average human driver is capable of supervising such systems
  • Humans need to supervise such systems
  • A plane’s auto-pilot is a useful analogy for such systems
  • Driver assistance systems will gradually evolve into fully autonomous systems

Supervising autonomous cars is neither necessary nor possible

The car industry is innovating rapidly with driver assistance systems. Having started with park assist, lane-departure warning, and similar features, the latest systems now include emergency braking and even limited autonomous driving in stop-and-go traffic or on the highway (e.g. the new Daimler S-Class).

As the systems become more capable, the number of situations in which driving decisions are clearly attributable to the car's software rather than directly to the driver will grow rapidly. This raises difficult questions of responsibility and liability in the case of accidents. From a legal perspective, the easiest solution is to keep the driver in the loop by positing a relationship between driver and car in which the car executes the driver's orders and the driver makes sure that the car only drives autonomously in situations it is capable of handling. The driver thus becomes the supervisor who is responsible for the actions of the car's software, to which he delegates the task of driving.

Unfortunately, this legal solution cannot accommodate advanced driver assistance systems which perform the driving task for longer periods in urban, rural, and highway traffic. We will call these systems auto-drive systems to distinguish them from the current, simpler driver assistance systems, which are typically used for narrow tasks and short periods.

The legal model rests on the following two invalid assumptions:

1) An average human driver is capable of supervising an auto-drive system

All ergonomic research clearly shows that the human brain is not good at routine supervision tasks. If a car drives autonomously for many miles without incident, a normal human will no longer pay attention. Period! No legal rule can change this fact. The human brain was not built for supervision tasks. In addition, supervising a car traveling at high speed or in urban settings is very different from supervising a plane which is on auto-pilot (see below).

If the developers of the auto-drive system build and test their car on the assumption that a human actively monitors the car's behavior at all times because situations may arise that the car cannot handle alone, then accidents will happen, because some drivers will not be able to react fast enough when such situations occur.

Even if a human could remain alert during the whole drive, the problem remains of how the user can distinguish which situations the car is able to handle and which it cannot. How much knowledge will a driver need to have about the car's capabilities? Once auto-drive systems evolve beyond the current very limited highway and stop-and-go scenarios and are capable of driving in rain and in urban settings, it will become very difficult for the manufacturer to enumerate and concisely describe the situations the car can or cannot handle. It will become impossible for the average driver to memorize and effectively distinguish these situations.

2) Humans need to supervise cars operating in auto-drive mode

We saw in the last section that humans cannot be relied upon to correct a car's mistakes while it is driving. But humans might still be needed to ensure that the car does not attempt to drive autonomously in situations that it cannot handle well.

However, the car is equipped with a wide array of sensors and continuously assesses its environment. If its autonomous capability has limitations, it must be able to detect such situations automatically. There is therefore no need to burden the driver with the task of determining whether the car is fit for the current situation.

Instead, the car needs to inform the driver when it encounters such a situation and then request that control be transferred back to the driver.

Therefore any non-trivial driver assistance system must be able to inform the driver when it enters situations it cannot handle well. There is no need to require that the casual driver be more knowledgeable than the system about its capabilities.
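To make this division of responsibility concrete, here is a minimal, purely illustrative sketch of a self-assessment loop in which the car, not the driver, monitors its own limits and requests a takeover. All names, thresholds, and the confidence measure are assumptions for illustration, not taken from any real system discussed here.

```python
# Hypothetical sketch: the car assesses its own fitness and asks for a takeover.
# Thresholds, names and the confidence estimate are illustrative assumptions.
import time

TAKEOVER_WARNING_SECONDS = 15.0   # assumed minimum warning time before handover
MIN_CONFIDENCE = 0.8              # assumed threshold below which the car asks for help

def confidence_estimate(sensor_data):
    """Placeholder: the car's own estimate of how well it can handle
    the situation it currently perceives (0.0 = not at all, 1.0 = fully)."""
    return sensor_data.get("self_assessed_confidence", 1.0)

def autonomous_driving_loop(get_sensor_data, request_takeover, fallback_maneuver):
    """The car, not the driver, decides when its limits are reached."""
    while True:
        data = get_sensor_data()
        if confidence_estimate(data) < MIN_CONFIDENCE:
            # Inform the driver and request control well before the limit is reached.
            accepted = request_takeover(warning_time=TAKEOVER_WARNING_SECONDS)
            if not accepted:
                # If the driver does not respond, the car must still reach a safe state.
                fallback_maneuver()
            return
        time.sleep(0.1)  # re-assess the environment continuously
```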

Auto-pilot: the wrong analogy

The most frequently used analogy for a driver-assistance system is the auto-pilot in a plane. Mentally assigning the status of a pilot to the car's driver, who then watches over the auto-drive system, may have appeal. But it overlooks the fundamental differences between the two contexts: a car driving autonomously differs very much from a plane on auto-pilot. The nature of the tasks and the required reasoning capabilities differ considerably:

a) Physics of motion: A plane moves in 3-dimensional space through a gas. Its exact movement is hard to formalize and predict and depends on many factors that cannot be measured easily (local air currents, water droplets, ice on the wings). A trained pilot may have an intuitive understanding of the movement that is beyond the capabilities of the software. In contrast, a car moves in 2-dimensional space; its movement is well understood and easy to handle mathematically and to predict, even in difficult weather (provided speed is adapted to the conditions). A short sketch after this list illustrates the point.

b) Event horizon: Situations that require split-second reactions are very rare while flying; they occur frequently while driving a car. Thus the hand-off and return of control between human and machine is much more manageable in flight than in a car. There are many situations which an auto-drive system must be able to handle in full autonomy because there is no time to hand control back to the human.

c) Training: Supervising the auto-pilot is a pilot's primary job function; it requires extensive, recurrent training, and many regulations exist to ensure alertness. None of this applies, or can realistically be applied, to the average driver.
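Regarding point a): how tractable car motion is can be illustrated with the standard kinematic bicycle model, which predicts a car's planar path from speed and steering angle alone. This is a minimal sketch; the wheelbase, speed, and steering values are illustrative assumptions.

```python
# Minimal sketch: predicting a car's 2D path with the kinematic bicycle model.
# Wheelbase, speed and steering angle are illustrative values.
import math

def predict_path(x, y, heading, speed, steering_angle, wheelbase=2.8,
                 dt=0.1, steps=50):
    """Integrate the kinematic bicycle model forward in time."""
    path = [(x, y)]
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += speed / wheelbase * math.tan(steering_angle) * dt
        path.append((x, y))
    return path

# A gentle left turn at 15 m/s (54 km/h), predicted 5 seconds ahead.
print(predict_path(0.0, 0.0, 0.0, 15.0, math.radians(2.0))[-1])
```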

Therefore the relationship between pilot and auto-pilot cannot serve as a model for the relationship between driver and driver-assistance system.

Driver assistance systems cannot gradually evolve into auto-drive systems

Much of the discussion about the progress of autonomous vehicle technology assumes that driver assistance systems will gradually evolve into auto-drive systems capable of driving on all types of roads in all kinds of driving situations. Initially, auto-drive will be available only for a few limited scenarios such as highway driving in good weather. Thereafter, more and more capable auto-drive systems will appear until the systems are good enough to drive everywhere in all situations.

Unfortunately, this evolution is not likely. Cars which drive autonomously cannot return control to a driver immediately when they encounter a difficult situation. They must be capable of handling any situation for a considerable time until the driver switches his attention to the driving task and assesses the situation. These cars cannot limit themselves to driving in good weather or light rain only; they must be able to handle sudden heavy rain for as long as the driver needs to return to the driving task, which for safety reasons must be more than just a few seconds. At realistic speeds these cars may travel a considerable distance in this time. If the car can safely handle this delay, it must probably be able to travel long distances in heavy rain, too.
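A back-of-the-envelope calculation makes these distances concrete; the speeds and handover times below are illustrative assumptions, not figures from the paper.

```python
# Illustrative only: distance covered while the driver returns to the driving task.
def handover_distance(speed_kmh, handover_seconds):
    return speed_kmh / 3.6 * handover_seconds  # metres

for speed in (50, 100, 130):        # city street, country road, highway (km/h)
    for seconds in (5, 10, 20):     # assumed time until the driver has taken over
        print(f"{speed} km/h, {seconds} s -> {handover_distance(speed, seconds):.0f} m")
```

Even under these modest assumptions, the car must keep driving safely and unsupervised for several hundred metres before the handover is complete.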

The same issue applies to traffic situations: while highways may look like an ideal, well-structured and relatively easy environment for driving, many complex situations can arise there at short notice which a car in auto-drive mode must recognize and deal with correctly. This includes many low-probability events which nevertheless arise from time to time, such as people walking or riding their bicycles on highways. Driving in urban settings is much more complex, and therefore a gradual path of auto-drive evolution is even more unlikely in such settings. Thus there may be some low-hanging fruit for the developers of auto-drive applications (limited highway driving), but almost all the rest of the fruit is hanging very far up the tree! Systems that are capable of driving in urban or rural traffic cannot start with limited capabilities. From the first day, they must be able to handle the very wide variety of situations that can occur in such settings.

Regulations that harm

We have already shown that the requirement of supervised driving is neither necessary nor can it be fulfilled for advanced driver assistance systems. But one could argue that the requirement does little harm. This is not the case. Wherever this rule is adopted, innovation will be curtailed. The safer and more convenient features of autonomous vehicles will only be available to the affluent, and it will take a long time until most of the cars on the road are equipped with such technology. This means many more lives lost in traffic accidents, much less access to individual mobility for the large groups of our population without a driver's license (such as the elderly and the disabled), and more waste of energy, resources, and space devoted to mobility.

Any country that adopts such rules will curtail innovation in car-sharing and in the new forms of urban, inter-modal, and electric mobility that become possible once autonomous vehicles that can drive without passengers mature.

It is obvious today that legislation that requires drivers to supervise advanced driver assistance systems will not stand the test of time.


Changes 2013-09-26: Updated title and part of the text

Oxford Mobile Robotics advances driverless car research

Oxford’s mobile robotics group has been making rapid progress in the development of driverless cars. As Prof. Paul Newman explained in a lively lecture last Thursday (part of the 14th Annual Robotics Systems Conference), it took his group of 20 PhD students just four months to build an autonomous car that was able to navigate local streets.

Figure: Prototype Autonomous Car (Photo: Hars, 2013)

While it is equipped with some algorithms for obstacle detection, the car primarily serves as a test bed for advanced navigation algorithms. Similar to Google, the group uses prior knowledge about the roads to be traveled, but their algorithms can work with much simpler and much less expensive sensors. The car does not need 3D lidar sensors. It uses a much cheaper 2D lidar affixed to the very front of the vehicle. The rotating laser captures a slice of points with distance information in a single line below the car as well as to the right and left of the car. As the car moves forward and scans line after line, a 3D picture gradually emerges. The car determines its position by comparing the data points gathered to its prior knowledge. The sensor can capture about 40 lines per second. This works well for low speeds but would have to be increased for higher velocities.
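The push-broom principle described here can be sketched as follows: each 2D scan line is projected into the world using the vehicle's current pose estimate and appended to a growing 3D point cloud. The scan format, pose representation, and geometry below are simplifying assumptions for illustration, not the group's actual code.

```python
# Simplified sketch of push-broom mapping with a forward-mounted 2D lidar.
# Scan format, pose source and sensor geometry are illustrative assumptions.
import math

def scan_line_to_3d(scan, pose):
    """Project one 2D scan line ((angle, range) pairs in the sensor plane)
    into the world frame using the vehicle pose (x, y, z, yaw)."""
    x0, y0, z0, yaw = pose
    points = []
    for angle, rng in scan:
        # Point in the sensor plane: lateral offset and vertical drop.
        lateral = rng * math.cos(angle)
        vertical = -rng * math.sin(angle)
        # Rotate the lateral offset into the world frame and translate.
        px = x0 - lateral * math.sin(yaw)
        py = y0 + lateral * math.cos(yaw)
        pz = z0 + vertical
        points.append((px, py, pz))
    return points

def accumulate_cloud(scans_with_poses):
    """At ~40 scan lines per second, a 3D cloud emerges as the car moves."""
    cloud = []
    for scan, pose in scans_with_poses:
        cloud.extend(scan_line_to_3d(scan, pose))
    return cloud
```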

Prof. Newman has also come up with a new approach for navigating in snow and rain. Localization can be very difficult when snow changes the environment's appearance. His solution is only seemingly simple: instead of trying to detect invariant properties of the landscape, he proposes to accept that the environment may have multiple appearances. He therefore adds the different ways the environment may look to his store of prior knowledge. As the car drives through a known area, it identifies the prior view (winter, summer, etc.) that most closely matches the data captured by its sensors and uses it for localization. It will be interesting to see how robust this approach of “experience-based navigation” can be and how many variations of the environment will be needed to allow fully autonomous driving.
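A highly simplified sketch of the matching idea: the car stores several "experiences" of the same place and localizes against whichever one best matches its live sensor data. The toy similarity measure and data layout are assumptions for illustration only.

```python
# Illustrative sketch of experience-based localization:
# pick the stored experience (summer, winter, ...) that best matches live data.

def match_score(live_scan, stored_scan):
    """Toy similarity measure: negative sum of absolute range differences.
    A real system would use far more robust matching."""
    return -sum(abs(a - b) for a, b in zip(live_scan, stored_scan))

def best_experience(live_scan, experiences):
    """experiences: mapping like {"summer": [...], "winter": [...]} of reference scans."""
    return max(experiences, key=lambda name: match_score(live_scan, experiences[name]))

experiences = {"summer": [4.9, 5.1, 6.0], "winter": [5.4, 5.6, 6.6]}
print(best_experience([5.3, 5.5, 6.5], experiences))  # -> "winter"
```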

The group currently has two driverless car prototypes; one of them is part of a cooperation with Nissan. It will be interesting to see whether Nissan will incorporate some of the group's navigation algorithms into its own solution.