Fatal Tesla accident exposes fundamental flaws in the levels of driving automation framework

Ill-conceived standards can kill. The Tesla accident in which Joshua D. Brown was killed in early May could not have happened if SAE (the Society of Automotive Engineers), NHTSA and BASt had not provided a rationalization for placing cars with incomplete and inadequate driving software on the road.

Since their publication, the frameworks for driving automation (by SAE 2014, NHTSA 2013, BASt 2010) have been criticized for ignoring established knowledge in human factors. Experts in the field agree that human drivers cannot be expected to continuously supervise driving automation software and correct its shortcomings and errors at split-second notice when problematic traffic situations occur. SAE Levels 2 and 3 are therefore inherently unsafe, and these levels should never have appeared as a viable variant of driving automation software in any framework at all!

Frameworks are not arbitrary. Unfortunately, the driving automation frameworks were heavily influenced by the perceived needs of the auto industry which already had driver assistance systems on the road and favored a gradual evolution of their systems towards fully autonomous driving. It is understandable that the authors wanted a framework that simplifies the path towards fully autonomous driving not just from a technical but also from a legal and commercialization perspective where automation can occur in baby-steps, most of which would not involve fundamental changes and would not require legislators to take a hard look at the underlying technology.

This is how Tesla was able to put their vehicle with auto-pilot software on the market. It was presented as a small step from cruise control to full lateral and acceleration/deceleration control by the system. Nothing else should change, they argued: the human is still in full control and bears full responsibility (which means that the driver will always be the scapegoat if something goes wrong!); the vehicle does not have the ambition of performing all tasks by itself. The frameworks clearly provide support for this argument. But they overlook the key difference: the software now handles the driving task continuously, for longer stretches of time, without the need for human action. There is a fundamental difference between systems that drive continuously and ad-hoc, short-term driver assistance operations (parking, emergency braking, lane-departure warning, etc.) which take over driving functions only for short periods of time. Any framework for automated driving should have included this distinction!

Software that assumes the driving task continuously changes everything! Human drivers can and will relax. Their minds will no longer be on the traffic around them at all times. It is well known that human drivers tend to trust autonomous driving algorithms too quickly and underestimate their deficiencies. And it takes a significant amount of time to get back into the loop when the car needs to return control to the driver. Unfortunately, the authors of the frameworks failed to think through the details and problems that follow at Levels 2 and 3. They thought about strategies for handing control back from the car to the human, but apparently they did not perform a risk analysis considering how potential crisis situations that require rapid reaction could be mastered. Such an analysis would have shown immediately that
a) there are many possible critical situations where a hand-off from the vehicle to the driver can not be carried out quickly enough to avoid catastrophic consequences and
b) there are many situations where a driver in supervision mode is not able to detect a lack of capability or misbehavior by the driving automation software fast enough.

The Tesla accident is a good example to illustrate these problems. Although the accident occurred on May 7th, only some details have been released. The accident occurred around 3:40 PM on divided Highway 500 near Williston, Florida. A tractor-trailer turned left, crossing the path of the Tesla. Without braking at all, the Tesla hit the trailer approximately in the middle, went under it, emerged on the other side and continued for several hundred feet before coming to a stop at a telephone pole. The weather was good: no rain, a dry road, good visibility. The road runs straight for miles, and at 3:40 PM the sun stood in the west, behind the Tesla.

The speed limit on the road was 65 mph (104 km/h), which translates into a stopping distance of about 64 meters. Stopping time would have been about 4 seconds (which would also have been enough time for the truck to clear the intersection). The length of the tractor-trailer has not been made public, but it was probably between 65 and 73 feet (20 and 22 meters). Assuming a standard lane width of 12 feet (3.7 m), and estimating the distance between both sections of the divided highway from the Google Earth image at about 20 m, the trailer had almost enough space between the two carriageways to make the 90-degree turn and could then continue straight across the two lanes of the highway. If we assume that the left turn (the part at the lowest average speed) takes at least 6 seconds (estimated from a video of trailer trucks making a left turn) and the truck then crosses the intersection at an average speed of 10 mph (16 km/h), the truck needed an additional 6 seconds to clear the intersection. As the trailer was hit in the middle by the Tesla driving in the outer lane, the truck must have been about 30 feet (10 m) short of clearing the intersection. Thus the tractor-trailer would have cleared the intersection about 2 seconds later.
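The estimates above can be checked with a few lines of arithmetic. The sketch below (Python) reproduces the rough timeline using only figures assumed in the text: 65 mph, a 6-second turn, a 10 mph crossing speed, and an assumed braking deceleration of 7 m/s². All inputs are estimates, not measured data.

```python
# Rough reconstruction of the accident timeline from the article's estimates.
# Every figure here is an assumption taken from the text, not measured data.

MPH_TO_MS = 0.44704

tesla_speed = 65 * MPH_TO_MS        # speed limit in m/s (~29 m/s)
turn_time = 6.0                     # s, truck's left turn (estimated from video)
crossing_speed = 10 * MPH_TO_MS     # m/s, truck's speed crossing the lanes
crossing_dist = 20 + 2 * 3.7        # m, median gap plus two 12-ft lanes
crossing_time = crossing_dist / crossing_speed   # ~6 s, matching the article

# Stopping distance/time under an assumed full-braking deceleration of 7 m/s^2
assumed_decel = 7.0
stop_dist = tesla_speed**2 / (2 * assumed_decel)  # ~60 m, near the ~64 m cited
stop_time = tesla_speed / assumed_decel           # ~4 s

# Time from turn initiation until the truck would have cleared the
# intersection, vs. the ~10 s (6 + 6 - 2) the Tesla needed to reach it.
total_clear_time = turn_time + crossing_time      # ~12 s
time_to_impact = 10.0
tesla_dist = tesla_speed * time_to_impact         # ~290 m from the intersection
shortfall = total_clear_time - time_to_impact     # truck ~2 s short of clearing

print(f"Tesla distance at turn start: {tesla_dist:.0f} m")
print(f"Stopping distance: {stop_dist:.0f} m in {stop_time:.1f} s")
print(f"Truck would have cleared {shortfall:.1f} s after impact")
```

The numbers come out consistent with the article's estimates: roughly 290 m to the intersection when the turn began, a stopping distance around 60 m, and a truck about 2 seconds short of clearing the intersection.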

At the moment, much of the discussion about the accident centers on the driver’s attention. We will never know whether or when the driver saw the truck. There are several possible scenarios. If we take the time horizon of 10 seconds (= 6 + 6 − 2) before the accident, when the trailer truck initiated its turn, the Tesla was about 280 meters from the intersection. At this distance, the large trailer truck moving into the intersection would have been clearly visible. A driver engaged in the driving task (not on auto-pilot) could not have failed to see the truck and – given the lack of other nearby traffic or visual distractions – would have noticed with enough lead time that the truck was continuing into the intersection. A step on the brake would have defused the situation and avoided the accident.

The scenario looks very different with auto-pilot. The driver knew that the road ran straight for miles with optimal visibility, which translates into a low overall driving risk. The driver may have paid attention, but not as much as when driving without auto-pilot. When a car drives by itself for many miles, a driver won’t be as alert as when he performs the driving function himself. Attention wanes; the truck on the left may have received only a short glance. The truck’s intent to make a left turn would have been obvious, but the truck slowed down as it entered the turn about 10 seconds before impact, and the driver would certainly have expected that the truck would come to a stop and that the auto-pilot was also aware of the large truck. Thus even if the driver saw the truck initiate the turn, he would probably not have been concerned or inclined to pay special attention to it. This was just another one of probably thousands of intersections that Joshua Brown, who used the auto-pilot frequently and blogged about it, had passed. His confidence in the Tesla’s handling of intersections may have been high. Although he knew that the auto-pilot was not perfect, he probably did not expect that a large truck would be overlooked. In addition, he was probably aware of a YouTube video entitled “Tesla saves the day” which had circulated widely a few months earlier. It showed a Tesla auto-braking just in time for a car crossing its path from the left.

The critical time window for recognizing the gravity of the situation and acting to prevent the accident was less than 10 seconds; and only 6 seconds before impact was it unmistakably clear that the truck was moving into the intersection instead of coming to a stop. If the driver was not fully focused on the road all the time but was alert in the window between 6 and 3 seconds before impact, he could have prevented the accident. But it is unrealistic to expect that a non-active driver will become fully focused on the traffic at each and every intersection that a car on auto-pilot passes, and that he will always be alert for hard-to-anticipate, extremely rare but very critical short-term situations.

Even if the driver saw the truck and recognized that it was moving into the intersection 3 to 6 seconds before impact, other problems arise: he has to jump into action and take over from the car. This takes time – both for the decision to revoke control from the car and for physically assuming control of the vehicle. Part of the driver’s brain has to work through the expected behavior of the car: if the car has not yet decelerated, does this mean that it has not seen the large truck at all, or that braking is not necessary (the car may have concluded that the trailer truck will clear the intersection in time)? Could it really be that the car does not see this blatantly obvious trailer truck? Have I completely overestimated the capability of this car? The shorter the remaining reaction time when the driver realizes the impending crisis, the more dangerous and potentially paralyzing this additional mental load becomes.

Developers of driver assistance systems can not expect drivers to be fully alert all the time and ready to take over in a split second. Nor can they expect drivers to understand and immediately recognize deficiencies or inadequacies of the software. Who would have expected that Tesla’s auto-pilot does not recognize a tractor-trailer in the middle of an intersection?

But the key problem is not a software issue. It is the mindset which offloads responsibility from the driving software to the driver. Developers will be much more inclined to release imperfect software if they can expect the driver to fill any gap. That Tesla uses a non-redundant mono camera is another illustration of the problem. What if the camera suddenly malfunctions or dies on a winding road with the auto-pilot engaged and the driver does not pay enough attention to take over in a split second? How is it possible to release such a system knowing full well that drivers using it will not always be paying full attention? It is only possible because we have standards that let developers offload the responsibility to the driver.

The often-raised counter-argument that the Level 2 auto-pilot has already saved lives is not valid: it confuses two different kinds of driver assistance systems – those, such as emergency braking systems, which take over the driving function only for short periods when they are really needed, and those that assume continuous control of the driving function for longer stretches of time and thus lead human drivers to take their minds off the road at least part of the time. Short-term functions such as emergency braking are not controversial. They do not depend on the auto-pilot, and it is they, not the auto-pilot, that save lives.

There is only one variant in which software that assumes the driving task continuously, for longer stretches of time, can be developed and released to the market: the autonomous driving system must take full responsibility for the driving task, and it may not require human supervision when engaged. Thus Levels 4 and up are viable approaches. The Tesla accident does not only expose a software problem; it illustrates the dangers of Levels 2 and 3. These levels must be scrapped from the framework!

US Secretary of Transportation: driverless cars all over the world by 2025

Anthony Foxx, US Secretary of Transportation, visited the Frankfurt Auto Show together with his colleagues from the G7 and German Chancellor Merkel. In an interview with the German newspaper Frankfurter Allgemeine Zeitung, he stated that he is very optimistic about driverless cars and expects to see them in use everywhere in the world within 10 years. He wants to accelerate the process for introducing new technologies such as self-driving cars and avoid the current legislative delays of five or six years. Of course, safety must always be assured.

The Frankfurt Auto Show clearly demonstrates how much more seriously politicians and the auto industry are taking autonomous car technology and the changes it will bring.

Source: Frankfurter Allgemeine Zeitung, 2015-09-19

Global technical regulations for autonomous vehicles: Informal working group established

As regulators grapple with autonomous technology, conflicts between country-specific laws could impede its adoption. The United Nations has a forum (“WP.29”) which aims to avoid such problems by harmonizing vehicle regulations. Many aspects of technical regulations for wheeled vehicles are discussed in a broad range of (informal) working groups. Because of the rapid progress of autonomous technology, the informal working group on Intelligent Transport Systems has recently been renamed and refocused as the informal working group on ITS/Automated Driving.

The participants are now laying the groundwork for future regulations. They have discussed various approaches to framing levels of autonomy and seem to be leaning toward SAE’s six levels of automated driving. Unfortunately, this framework is not very useful: most of the interest lies in just two of the six levels; it can be misinterpreted as conveying a linear progression of technology from level to level; and it is based on a limited, somewhat mechanistic perspective that fails to see the full complexity of the software-based self-driving vehicle and of the context in which it operates, with which it interacts and about which it constantly learns.

Fortunately, the group decided against addressing highly automated driving first and taking up fully automated driving only in 2016 (see the annotated working group document). Both topics will now be considered somewhat in parallel, although the group still leans more toward highly automated driving. One of their future discussion items will be usage scenarios for highly automated driving. Perhaps they will also consider some scenarios for fully automated driving and then begin to understand the extent to which mobility – and with it the role of passenger vehicles – will change.
An excellent source of information about this process is GlobalAutoReqs.com, which maintains an up-to-date list of cross-referenced documents related to WP.29.

 

Five guiding principles for autonomous vehicle policy

As self-driving car technology matures, politicians and regulators find themselves called to action. But the technology is a moving target, and views about its path and impact vary widely. So how should policy makers approach the subject? Here are five guiding principles proposed by Marc Scribner, a transportation and telecommunications policy specialist and research fellow at the Competitive Enterprise Institute. Scribner discussed the principles only briefly at a recent presentation at the Cato Institute. In the following, I supplement each of his five bullet points with my interpretation:

1. Recognize and promote the huge potential benefits of self-driving cars

Policy makers need to familiarize themselves with the potential benefits of self-driving cars. First, they need to get the concepts right and clearly distinguish self-driving cars (which can drive without human supervision, even empty, and don’t need additional infrastructure) from other technologies such as driver assistance systems and connected cars. Connected cars and driver assistance systems are certainly also interesting topics but their benefits pale in comparison to the benefits of cars that drive themselves. Besides greatly reducing accidents, self-driving cars also bring individual motorized mobility to those who do not have a driver’s license – including people with disabilities and the elderly. They reduce energy consumption, simplify the introduction of alternative fuels and reduce the load on the road infrastructure.
Policy makers need to recognize that self-driving cars can solve or greatly reduce many longstanding problems. This is not a technology where a wait-and-see attitude is warranted. Politicians need to actively promote this technology. Of course, this does not mean that the technology’s risk should be ignored.

2. Reject the precautionary principle

Safety is a key concern and a key benefit of self-driving cars. There is good reason to expect mature self-driving cars to drive much more safely than humans. They are equipped with 360-degree sensors, including cameras, radar and lidar; they are always alert, never tired, don’t drink, and adopt a defensive, risk-minimizing driving strategy. But letting the first such cars drive by themselves on public streets is a difficult decision: what if anything goes wrong?
The application of the precautionary principle avoids this situation by requiring the developer to prove that the car is harmless. Unfortunately, proving that a self-driving car is safe is a hard problem and strict application of the principle could significantly delay the introduction of self-driving vehicles.
This weakness of the precautionary principle is well known: erring on the side of caution when certifying self-driving cars risks prolonging the current carnage on our roads. We do not have the luxury of delaying a well-functioning self-driving car for a few more years just to be extra sure that everything is perfect when 33,000 people die in traffic accidents per year in the US alone and more than 1 million per year worldwide.
Just as it is not acceptable to let first prototypes roam the streets unsupervised, it is not acceptable to delay again and again just to be on the safe side. A middle ground must be found. This is not an easy task for policy makers, but it is one on which lives depend.
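To make the trade-off concrete, here is a deliberately simplistic back-of-the-envelope calculation. Only the 33,000 US deaths per year comes from the text; the risk-reduction and fleet-share figures are hypothetical placeholders, not predictions.

```python
# Illustrative arithmetic only: the effectiveness and fleet-share values
# below are assumptions for the sake of the example, not measured data.
us_deaths_per_year = 33_000       # US traffic deaths per year (from the text)
assumed_risk_reduction = 0.5      # hypothetical: mature systems halve fatality risk
assumed_fleet_share = 0.1         # hypothetical: 10% of driving done autonomously

lives_per_year_of_delay = (us_deaths_per_year
                           * assumed_risk_reduction
                           * assumed_fleet_share)
print(f"~{lives_per_year_of_delay:.0f} lives per year of delay "
      f"under these assumptions")
```

Even with these modest assumed figures, every year of regulatory delay would correspond to well over a thousand lives in the US alone, which is the sense in which "lives depend" on finding the middle ground.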

3. Don’t presume to know how the technology and law will evolve

Will autonomous vehicle technology gradually evolve from driver assistance systems? Will it first appear on the highway or in low-speed local settings? What new business models will emerge, and what role will machines play? Will the US be the first to legalize fully autonomous vehicles, or does the Vienna Convention on Road Traffic really prevent many European countries from adopting self-driving vehicles? There are so many paths that this technology can take, so many changes in many different areas of business and society, so many proponents and possible opponents, that it is hard to be right about the path of the technology and – consequently – of the law. It is very dangerous to assume that the technology will evolve in one way, regulate for that situation, and subsequently find that the technology evolves very differently.

4. Let the innovators innovate

This section was originally entitled ‘minimize legislative and regulatory intervention’ and included the goal to give the innovators the space to innovate. But here I differ with Scribner: Unfortunately, transportation law is so much based on the concept of vehicles driven by humans that many laws do need to be changed. Current traffic laws contain so many elements that inhibit progress for this new and safer technology. Autonomous vehicles change the concept of what a car is and the laws need to be updated accordingly. Otherwise innovators will find it hard to make progress. This is a task that should be started immediately – before fully autonomous vehicles are ready for public roads.

5. Preserve technology neutrality

Laws and regulations should be technologically neutral. As much as possible, they should avoid favoring a specific technical approach.

United Kingdom prepares to play leading role in driverless car revolution

The country which started the industrial revolution and the first revolution in mobility is determined not to sit on the sidelines as the next mobility revolution unfolds. The UK government wants to accelerate the adoption of autonomous vehicle technology and ensure that the UK plays a prominent role by establishing a UK city or region as a test and demonstration site for self-driving cars.

To start this process, it convened about 100 people in London in mid-February to discuss the criteria for site selection. The city or region will be funded with £10 million. The very efficiently managed workshop rapidly generated insights about success criteria for such sites.

There seemed to be much consensus that fully autonomous vehicles hold the most promise; they will provide completely new opportunities in mobility services, applications and business models. There was some disagreement as to the state of autonomous technology. While some argued that the technology is basically there, others voiced concerns that significant challenges still remain. Disagreement was also visible with respect to standardization and interoperability. While some argued that the vehicles should be standardized and easily transferred to new locations, others argued that imposing such requirements would be too early and would accomplish little.

A representative from Google stressed the importance of speed in the implementation – a comment that reflected a sense of urgency which most participants seemed to share: There is only a short window of opportunity to gain a leadership position in this rapidly moving field.

Within Europe, the United Kingdom has some unique advantages for the early implementation of self-driving cars. It is not bound by the stipulation of the Vienna Convention on Road Traffic that every car must be controlled by a driver at all times: unlike most European countries (except Spain), it never ratified the convention. In addition, its car industry is not as dominant as in many other countries (the UK ranks 17th of the 40 nations listed by the International Organization of Motor Vehicle Manufacturers (OICA) by car-industry employees as a percentage of the whole workforce; in contrast, Sweden, the Czech Republic, Germany and Spain are among the top five). This also means that the UK has less to fear from the disruption of the auto industry which fully autonomous vehicles might cause. At the same time, the UK has an excellent industry and research base, top universities including Prof. Newman’s Oxford Mobile Robotics Group, and already has a head start with the more traditional electric driverless pods operating at Heathrow.

Given that another project is already under way to implement 100 self-driving pods in Milton Keynes between 2015 and 2017 (funded at much higher rates), the UK might indeed achieve the critical mass to become a key player in the autonomous vehicle revolution.

Supervising autonomous cars on autopilot: A hazardous idea

As autonomous vehicle technology matures, legislators in several US states, countries and the United Nations are debating changes to the legal framework. Unfortunately, one of the core ideas of these legal efforts is untenable and has the potential to cripple the technology’s progress. We show that the idea that drivers should supervise autonomous vehicles is based on false premises and will greatly limit and delay adoption. Given the enormous loss of life in traffic (more than one million persons per year worldwide) and the safety potential of the technology, any delay will incur large human costs.
Read the full paper (pdf).

Invalid assumptions about advanced driver assistance systems nearing full autonomy

  • The average human driver is capable of supervising such systems
  • Humans need to supervise such systems
  • A plane’s auto pilot is a useful analogy for such systems
  • Driver assistance systems will gradually evolve into fully autonomous systems

Supervising autonomous cars is neither necessary nor possible

The car industry is innovating rapidly with driver assistance systems. Having started with park-assist, lane-departure warning, etc., the latest systems now include emergency braking and even limited autonomous driving in stop-and-go traffic or on the highway (new Daimler S-Class).

As the systems become more capable, the situations will greatly increase where driving decisions are clearly attributable to a car’s software and not directly to the driver. This raises difficult questions of responsibility and liability in the case of accidents. From a legal perspective, the easiest solution is to keep the driver in the loop by positing a relationship between the driver and the car where the car executes the driver’s orders and the driver makes sure that the car only drives autonomously in situations which it is capable of handling. The driver thus becomes the supervisor who is responsible for the actions of the car’s software, to which he delegates the task of driving.

Unfortunately, this legal solution can not accommodate advanced driver assistance systems which perform the driving tasks for longer periods in urban, country-road and highway traffic. We will call these systems auto-drive systems to distinguish them from the current, simpler driver assistance systems which are typically used for narrow tasks and short times.

The legal model rests on the following two invalid assumptions:

1) An average human driver is capable of supervising an auto-drive system

All ergonomic research clearly shows that the human brain is not good at routine supervision tasks. If a car drives autonomously for many miles without incident, a normal human will no longer pay attention. Period! No legal rule can change this fact. The human brain was not built for supervision tasks. In addition, the supervision of a car traveling at high speed or in urban settings is very different from supervising a plane which is on auto-pilot (see below).

If the developers of the auto-drive system build and test their car on the assumption that a human actively monitors the car’s behavior at all times because situations may arise that the car can not handle alone, then accidents will happen because some of the drivers won’t be able to react fast enough when such situations occur.

Even if a human could remain alert during the whole drive, the problem remains how the user can distinguish which situations a car is able to handle and which it can not. How much knowledge will a driver need about the car’s capabilities? Once auto-drive systems evolve beyond the current very limited highway and stop-and-go scenarios and are capable of driving in rain and urban settings, it will become very difficult for the manufacturer to enumerate and concisely describe the situations the car can or can not handle. It will become impossible for the average driver to memorize and effectively distinguish these situations.

2) Humans need to supervise cars operating in auto-drive mode

We saw in the last section that humans can not be relied upon to correct mistakes of a car while driving. But humans might still be needed to ensure that the car does not attempt to drive autonomously in situations that it can not handle well.

However, the car is equipped with a wide array of sensors and continuously assesses its environment. If its autonomous capability has limitations, it must be able to detect such situations automatically. Therefore there is no need to burden the driver with the task of determining whether the car is fit for the current situation.

Instead, the car needs to inform the driver when it encounters such a situation and then request that control be transferred back to the driver.

Therefore any non-trivial driver assistance system must be able to inform the driver when it enters situations it can not handle well. There is no need to require that the casual driver be more knowledgeable than the system about its capabilities.

Auto-pilot: the wrong analogy

The most frequently used analogy for a driver assistance system is the auto-pilot in a plane. Mentally assigning the status of a pilot to the car’s driver, who then watches over the auto-drive system, may have appeal. But it overlooks the fundamental differences between the two contexts: a car driving autonomously differs very much from a plane on auto-pilot. The nature of the tasks and the required reasoning capabilities differ considerably:

a) Physics of motion. A plane moves in 3-dimensional space through a gas. Its exact movement is hard to formalize and predict and depends on many factors that can not be measured easily (local air currents, water droplets, ice on the wings). A trained pilot may have an intuitive understanding of the movement that is beyond the capabilities of the software. In contrast, a car moves in 2-dimensional space; its movement is well understood and easy to handle mathematically and to predict, even in difficult weather (provided speeds are adequate to the weather).

b) Event horizon. Situations that require split-second reactions are very rare while flying; they occur frequently while driving a car. Thus the hand-off and return of control between human and machine is much more manageable in flight than in a car. There are many situations which an auto-drive system must be able to handle in full autonomy because there is no time to hand off control to the human.

c) Training. The supervision task is the primary job function of a pilot, requires extensive, continual training and has many regulations to ensure alertness. This does not apply and can not realistically be applied to the average driver.

Therefore the relationship between pilot and auto-pilot can not be used as a model for the relationship between driver and driver assistance system.

Driver assistance systems can not gradually evolve into auto-drive systems

Much of the discussion on the progress of autonomous vehicle technology assumes that driver assistance systems will gradually evolve into auto-drive systems which are capable of driving on all types of roads in all kinds of driving situations. Initially, auto-drive will be available only for a few limited scenarios such as highway driving in good weather. Thereafter more and more capable auto-drive systems will appear until the systems are good enough to drive everywhere in all situations.

Unfortunately, this evolution is not likely. Cars which drive autonomously can not return control to a driver immediately when they encounter a difficult situation. They must be capable of handling any situation for a considerable time until the driver switches his attention to the driving task and assesses the situation. These cars can not limit themselves to driving in good weather or light rain only – they must be able to handle sudden heavy rain for as long as the driver needs to return to the driving task, which for safety reasons must be more than just a few seconds. At realistic speeds these cars may travel a considerable distance in this time. If the car can safely handle this delay, it must probably be able to travel long distances in heavy rain, too.
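The distances involved are easy to underestimate. A quick sketch makes the point; the handover times below are illustrative assumptions, not measured values.

```python
# Distance a car travels while the driver "gets back into the loop".
# Handover times are assumed for illustration; speeds are in km/h.
KMH_TO_MS = 1 / 3.6

for speed_kmh in (50, 100, 130):
    for handover_s in (5, 10, 20):
        dist = speed_kmh * KMH_TO_MS * handover_s  # meters traveled during handover
        print(f"{speed_kmh:>3} km/h, {handover_s:>2} s handover: {dist:5.0f} m")
```

Even a fast 10-second handover at 130 km/h means roughly 360 m driven with nobody fully in charge; a safer 20-second margin already exceeds 700 m, which is why the car must keep handling the situation on its own for the entire interval.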

The same issue applies to traffic situations. While highways may look like an ideal, well-structured and relatively easy environment for driving, many complex situations can arise there at short notice which a car on auto-pilot must recognize and deal with correctly. This includes many low-probability events which nevertheless arise from time to time, such as people walking or riding their bicycles on highways. Driving in urban settings is much more complex, and a gradual path of auto-drive evolution is therefore even more unlikely there. Thus there may be some low-hanging fruit for the developers of auto-drive applications (limited highway driving), but almost all the rest of the fruit hangs very far up the tree! Systems that are capable of driving in urban or country-road traffic can not start with limited capabilities. From the first day, they must be able to handle a very wide variety of situations that can occur in such settings.

Regulations that harm

We have already shown that the requirement of supervised driving is neither necessary nor can it be fulfilled for advanced driver assistance systems. But one could argue that the requirement does little harm. This is not the case. Wherever this rule is adopted, innovation will be curtailed. The safer and more convenient features of autonomous vehicles will only be available to the affluent, and it will take a long time until most of the cars on the road are equipped with such technology. This means many more lives lost in traffic accidents, much less access to individual mobility for large groups of our population without a driver’s license (such as the elderly and the disabled), and more waste of energy, resources and space for mobility.

Any country that adopts such rules will curtail innovation in car-sharing and in the new forms of urban, inter-modal and electric mobility that become possible once autonomous vehicles mature to the point where they can drive without passengers.

It is obvious today that legislation that requires drivers to supervise advanced driver assistance systems will not stand the test of time.


Changes 2013-09-26: Updated title and part of the text

Audi first automaker to receive Nevada test license for autonomous cars

After Continental reported their test license in Nevada in December, Audi USA now claims to have become the second recipient of a license for testing autonomous cars – before Continental and after Google. Audi’s driverless Audi TTS race car can now roll through Nevada. The car has been developed by Stanford University and the Volkswagen Electronics Research Lab in Silicon Valley. In their statement Audi introduces the somewhat misleading terms ‘piloted’ driving and ‘piloted’ parking for their autonomous driving. Apparently they still have a hard time imagining a future without driver or pilot…

Source: Audi 1,2

Update 09 Jan: German newspaper FAZ reports from CES that Audi was optimistic about the introduction of driverless cars and expects to see autonomous vehicles on the market before the end of this decade.


Driverless campus shuttle being tested at Swiss university

Students at the Ecole Polytechnique of Lausanne may soon ride across campus in up to six driverless shuttles developed by the French company Induct. The Navia shuttles, the first of which was delivered to the university in December for testing, operate autonomously at speeds of up to 20 km per hour. They are fully electric, are equipped with GPS, laser sensors and 3D cameras, and can transport up to 8 persons. The shuttles are ideal for last-mile transportation. As the laws in most European countries still require all cars to be operated by a driver, they can currently only be operated in private areas – such as airports, business and amusement parks, shopping malls, university campuses etc. By removing the first/last-mile hurdle, Induct’s shuttle technology has great potential for making public transport more appealing and effective. Compared to individual autonomous vehicles, the shuttles are also much easier to justify economically, because the high costs of current autonomous technology (especially 3D sensors) are less of an issue for multi-passenger vehicles which clock many more operating hours than private cars.

Source and copyright: http://www.induct-technology.com
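The amortization argument above can be sketched with a simple calculation. All the numbers (sensor-suite cost, daily operating hours, vehicle lifetime) are illustrative assumptions:

```python
# Sketch of why sensor costs matter less for shuttles: amortize an
# assumed hardware price over total operating hours. The $50,000 suite,
# the daily hours and the 5-year lifetime are illustrative assumptions.

def cost_per_operating_hour(hardware_cost: float,
                            hours_per_day: float,
                            lifetime_years: float) -> float:
    """Hardware cost spread over every hour the vehicle is in service."""
    return hardware_cost / (hours_per_day * 365 * lifetime_years)

private_car = cost_per_operating_hour(50_000, 1.2, 5)   # ~1 h/day of use
shuttle     = cost_per_operating_hour(50_000, 12.0, 5)  # near-continuous use
print(round(private_car, 2), round(shuttle, 2))  # 22.83 2.28
```

Under these assumptions the same sensor package costs a shuttle operator an order of magnitude less per hour of service than a private owner, which is the economic point the paragraph makes.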

Induct is not the only company focusing on autonomous shuttles. Google operates (or has operated?) a fleet of autonomous golf carts on their campus. Robosoft, another French company, also offers two types of such shuttles, which have been developed in the European CityMobil research project.

The technology certainly has great potential to become a starting point for more efficient and environmentally friendly autonomous people movers and buses. Hopefully the legal framework will soon be adapted to allow the operation of such shuttles in public. This applies especially to European countries, which have been heavily financing research in such autonomous transportation systems for almost a decade (and continue to do so, e.g. in the new CityMobil2 project).

The dangers of mixed-mode autonomous vehicles

Will autonomous cars de-skill their human drivers? In a thoughtful presentation, MIT researcher Bryan Reimer points out the dangers of letting cars drive themselves autonomously part of the time. As people rely more on automated driving, they drive less themselves and their experience shrinks, which may make them more likely to err at the steering wheel. He also dismisses the idea that humans would be effective at monitoring an autonomous car’s actions and taking over in difficult situations: besides having to be constantly alert, they would need a much deeper understanding of the autonomous car’s capabilities and limitations to be effective in such situations.

These are important insights for the evolution of autonomous vehicles. They have direct implications for the way driverless vehicles are conceptualized and for the legal frameworks. Current driverless car laws are based on the idea that a human is in control or should be able to take over immediately in critical situations. The reality will be different. The laws will need to address truly autonomous operation (where no occupant can be held liable for the car’s operation).

Reimer proposes to increase human-centered research and development to improve the interface between driver and autonomous vehicle. But it is hard to see how this could overcome the dilemma he has sketched. Improving the autonomous capabilities of these cars to the point where they perform verifiably better than almost all human drivers seems to be the only realistic alternative.

Automakers trying to slow down Google

Does Google’s driverless car technology threaten established car manufacturers? They clearly seem to think so: lobbyists of the Alliance of Automobile Manufacturers have succeeded in throwing a wrench into the process of legalizing autonomous cars in California. Because of their concerns about liability issues, the Senate Transportation Committee decided to route the bill to the Rules Committee, where it will possibly be assigned to another panel for further review on liability. This could mean a significant delay for California’s bill and puts into question whether California can become a key state in driverless car introduction. It looks like Nevada will keep this crown for some time.

Google has repeatedly lamented that car manufacturers show little interest in driverless car technology. This is not surprising because driverless cars will greatly reduce the total number of cars needed. Given that private vehicles currently sit idle more than 95 percent of the time, the total number of cars needed could conceivably shrink by a factor of 10!
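The factor-of-10 figure can be sanity-checked with a simple utilization calculation. The 5 percent in-use figure comes from the paragraph above; the 50 percent utilization assumed for a shared fleet is an illustrative assumption, not a forecast:

```python
# Sketch of the fleet-size argument: if private cars are idle more than
# 95% of the time, a shared fleet in near-constant use needs far fewer
# vehicles. The 50% shared-fleet utilization is an assumption.

private_utilization = 0.05   # in use ~5% of the time (95% idle)
shared_utilization = 0.50    # assumed for a well-managed shared fleet

# Holding total vehicle-hours of travel constant, the number of cars
# needed scales inversely with utilization:
reduction_factor = shared_utilization / private_utilization
print(round(reduction_factor))  # 10
```

In practice, peak demand and repositioning would eat into this figure, but the order of magnitude explains why driverless cars threaten vehicle sales volumes.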

This action by the Alliance of Automobile Manufacturers is only a harbinger of things to come. As driverless technology matures, the fight will get nastier. But the public can only benefit from driverless cars: countless lives saved, lower total transportation costs and greater mobility for the elderly and young. Automakers need to prepare for this future now. Closing their eyes and trying to prevent the inevitable is the wrong strategy.