Misconception 7: To convince us that they are safe, self-driving cars must drive hundreds of millions of miles

One of the most difficult questions for self-driving cars concerns their safety: How can we determine whether a particular self-driving car model is safe? The most popular answer to this question is based on a straightforward application of statistics and leads to conclusions such as that “…fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability…”. This statement comes from a recent RAND report by Nidhi Kalra and Susan Paddock on the topic. Unfortunately, such statements are untenable in this form because the statistical argument contains major oversights and mistakes, which we point out in the following.

7.1 Failure rate estimation

The argument is usually presented as a problem of failure rate estimation where observed failures (accidents involving self-driving cars) are compared against a known failure rate (accident rates of human drivers). Accidents are modeled as discrete, independent and random events that are determined by a (statistically constant) failure rate. The failure rate for fatal accidents can be calculated by dividing the number of accidents with fatalities by the number of vehicle miles traveled. If we consider the 32,166 crashes with fatalities in traffic in the US in 2015 and relate them to the 3.113 trillion miles which motor vehicles traveled, then the failure rate is 32,166 / 3.113 trillion = 1.03 fatal crashes per 100 million miles. The probability that a crash with a fatality occurs on a stretch of 1 mile is extremely low (0.0000010273%) and the opposite, the success rate – the probability that no accident with a fatality occurs on a stretch of 1 vehicle-mile-traveled (VMT) – is very high (99.999998972%).

By observing cars driving themselves, we can obtain estimates of their failure rate. The confidence that such estimates reflect the true failure rate increases with the number of vehicle miles traveled. Simple formulas for binomial probability distributions can be used to calculate the number of miles which need to be driven without failure to reach a certain confidence level: 291 million miles need to be driven by a self-driving car without a fatality to be able to claim with a 95% confidence level that self-driving cars are as reliable as human drivers. This is nearly three times the average distance between fatal crashes in human driving. If we relax the required confidence level to 50%, then at least 67 million miles need to be driven without a fatality before we can be confident that self-driving cars are safe.

Although this calculation is simple, most authors – including the authors of the RAND report – use the wrong measures. Instead of dividing the number of crashes involving fatalities (32,166) by VMT, they divide the number of fatalities (35,091) by VMT. This overstates the failure rate of human drivers because a single accident may lead to multiple fatalities, and the number of fatalities per fatal accident may depend on many factors other than the reliability of the driver.
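The underlying arithmetic is easy to verify. With zero observed failures over n miles, we can claim with confidence C that the true failure rate is at most p per mile whenever (1 − p)^n ≤ 1 − C. Below is a minimal Python sketch of this zero-failure binomial calculation; the function name and structure are ours, not taken from the RAND report:

```python
import math

def miles_without_failure(p_per_mile: float, confidence: float) -> float:
    """Miles that must be driven without a fatal crash to claim, at the
    given confidence level, that the true failure rate does not exceed
    p_per_mile (zero-failure case of the binomial distribution)."""
    # Solve (1 - p)^n <= 1 - C for n.
    return math.log(1.0 - confidence) / math.log(1.0 - p_per_mile)

# Human-driver benchmark: 32,166 fatal crashes over 3.113 trillion miles (US, 2015).
p = 32_166 / 3.113e12  # ~1.03 fatal crashes per 100 million miles

print(f"95% confidence: {miles_without_failure(p, 0.95) / 1e6:.0f} million miles")
print(f"50% confidence: {miles_without_failure(p, 0.50) / 1e6:.0f} million miles")
# Output: roughly 290 million and 67 million miles, consistent with the
# figures above (the small difference comes from rounding the failure rate).
```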


The race for fully self-driving cars has reached a pivotal point

Several events of the past few months provide a strong signal that autonomous vehicle technology has led the auto industry to a pivotal point: The first auto makers are adapting their business models for fully self-driving cars and are providing explicit time frames!

Earlier this year GM invested 500 million USD in Lyft, purchased self-driving technology startup Cruise Automation for more than 1 billion USD and announced in July that it will build its first self-driving cars for use within the Lyft fleet as self-driving taxis. In May BMW announced that it would have a self-driving car on the market within 5 years. Next came Uber, which acquired autonomous truck startup Otto for 680 million USD and is now beginning field trials of fully self-driving taxis in Pittsburgh. But the key change at Uber is the way its CEO Travis Kalanick frames the issue: he makes it clear that Uber’s survival depends on being first (or tied for first) in rolling out a self-driving taxi network.

The latest announcement comes from Ford, which plans to provide mobility services with fully autonomous self-driving Fords by 2021. This is a major effort: Ford is doubling its development staff in Silicon Valley, aims to have the largest fleet of self-driving car prototypes by the end of this year and will triple the size of this fleet again next year. It has also acquired three companies related to autonomous driving technology and purchased a stake in Velodyne, the leading manufacturer of LIDARs for autonomous driving.

When we started to monitor the development of self-driving car technology in 2009, we expected that this technology would turn into an avalanche that sweeps through the auto industry. There have been many signs over the past years that the avalanche is picking up speed. Until now, however, we have been reluctant to claim that it is in full swing: even though the auto industry was continually increasing its activity around self-driving car technology, all players had been very reluctant to openly call this a race and to publicly position fully self-driving cars as a key element of their strategy. There was a lot of posturing and many eye-catching public demonstrations of self-driving car prototypes, but very little tangible action aimed at turning fully self-driving car prototypes into a real product.

After these recent signals, the situation has changed. It is now clear that auto makers have begun competing in earnest to adapt their business models to the coming wave of fully self-driving cars. No longer is Google the only company stepping on the gas; auto industry executives (and Uber) are now openly competing to bring the first self-driving cars to market. It will come as no surprise to the readers of this blog that the initial business models are not concerned with selling cars but with providing mobility services.

These signals are important in themselves. They heat up the competition and force the rest of the auto industry to decide how to adapt their business models to fully self-driving cars and to explain this strategy to their investors, journalists and analysts. They increase the value of companies in the space and intensify the competition for human capital: Google has probably lost between 500 million and 1 billion USD in human capital from this year’s exodus of key members of its self-driving car group – the 680 million USD Uber paid for Otto, the startup founded in early 2016 by four ex-Googlers (including Anthony Levandowski), plus Chris Urmson. These signals also increase the effort of all parties involved (auto industry, suppliers, regulators, journalists, related industries such as transport & logistics, insurance, health care etc.) to understand the implications of fully self-driving cars, which gradually drives away the many misconceptions and shows risks and opportunities more clearly. We are in the middle of a global, distributed innovation process around self-driving cars and driverless mobility where all parties are learning, refining their thinking, changing their vision of the future and adapting their actions accordingly. The avalanche is in full swing now and it will be a tough ride for those who fail to adapt while there is still time…

Fatal Tesla accident exposes fundamental flaws in the levels of driving automation framework

Ill-conceived standards can kill. The Tesla accident in which Joshua D. Brown was killed in early May could not have happened if SAE (Society of Automotive Engineers), NHTSA and BASt had not provided a rationalization for placing cars with incomplete and inadequate driving software on the road.

Since their publication, the frameworks for driving automation (by SAE 2014, NHTSA 2013, BASt 2010) have been criticized for ignoring established knowledge in human factors. All experts in the field agree that human drivers cannot be expected to continuously supervise driving automation software and correct its shortcomings and errors at split-second notice when problematic traffic situations occur. SAE Levels 2 and 3 are therefore inherently unsafe, and these levels should never have appeared as a viable variant of driving automation software in any framework at all!

Frameworks are not arbitrary. Unfortunately, the driving automation frameworks were heavily influenced by the perceived needs of the auto industry, which already had driver assistance systems on the road and favored a gradual evolution of these systems towards fully autonomous driving. It is understandable that the authors wanted a framework that simplifies the path towards fully autonomous driving not just from a technical but also from a legal and commercialization perspective, where automation can occur in baby steps, most of which do not involve fundamental changes and do not require legislators to take a hard look at the underlying technology.

This is how Tesla was able to put its vehicle with auto-pilot software on the market. It was presented as a small step from cruise control to full lateral and longitudinal (acceleration/deceleration) control by the system. Nothing else should change, they argued: the human is still in full control and bears full responsibility (which means that the driver will always be the scapegoat if something goes wrong!); the vehicle does not have the ambition of performing all tasks by itself. The frameworks clearly provide support for this argument. But they overlook the key difference: the software now handles the driving task continuously, for longer stretches of time, without the need for human action. There is a fundamental difference between systems that drive continuously and ad-hoc, short-term driver assistance systems (e.g. parking, emergency braking, lane warning) which only take over driving functions for short periods of time. Any framework for automated driving should have included this distinction!

Software that assumes the driving task continuously changes everything! Human drivers can and will relax. Their minds will no longer be on the traffic around them at all times. It is well known that human drivers tend to trust autonomous driving algorithms too quickly and underestimate their deficiencies. And it takes a significant amount of time to get back into the loop when the car needs to return control to the driver. Unfortunately, the authors of the frameworks failed to think through the details and problems that follow at Levels 2 and 3. They thought about strategies for handing control back from the car to the human; but apparently they did not perform a risk analysis which considered how potential crisis situations requiring rapid reaction could be mastered. Such an analysis would have shown immediately that
a) there are many possible critical situations where a hand-off from the vehicle to the driver cannot be carried out quickly enough to avoid catastrophic consequences, and
b) there are many situations where a driver in supervision mode is not able to detect a lack of capability or misbehavior of the driving automation software fast enough.

The Tesla accident is a good example to illustrate these problems. Although the accident occurred on May 7th, only some details have been released. The accident occurred around 3:40 PM on a divided highway (Highway 500) near Williston, Florida. A tractor-trailer turned left, crossing the path of the Tesla. Without braking at all, the Tesla hit the trailer approximately in the middle, went under it, emerged on the other side and continued for several hundred feet before coming to a stop against a telephone pole. The weather was good: no rain, a dry road, good visibility. The road runs straight for miles. At 3:40 PM the sun stood in the west, behind the Tesla. The speed limit on the road was 65 mph (104 km/h), which translates into a stopping distance of about 64 meters. Stopping time would have been about 4 seconds (which would also have been enough time for the truck to clear the intersection).

The size of the tractor-trailer has not been made public, but it was probably between 65 and 73 feet (20 and 22 meters). Assuming a standard lane width of 12 feet (3.7 m), and estimating the distance between both sections of the divided highway from the Google Earth image to be about 20 meters, the trailer had almost enough space between both carriageways to make the 90-degree turn and could then continue straight across the two lanes of the highway. If we assume that the left turn (the part at the lowest average speed) takes at least 6 seconds (estimated from a video showing trailer trucks making a left turn) and the truck then crosses the intersection at an average speed of 10 mph (16 km/h), then the truck needs an additional 6 seconds to clear the intersection. As the trailer was hit in the middle by the Tesla driving in the outer lane, the truck must have been about 30 feet (10 m) short of clearing the intersection. Thus the tractor-trailer would have cleared the intersection about 2 seconds later.
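To make the assumptions behind this reconstruction explicit, the following Python sketch recomputes the timeline. Every number in it is one of the estimates given above, not an official finding:

```python
# Reconstruction of the accident timeline from the estimates in the text.
# All inputs are estimates, not official measurements.
MPH_TO_MPS = 0.44704

turn_time_s = 6.0                       # estimated duration of the truck's left turn
crossing_time_s = 6.0                   # estimated time to cross the two lanes
crossing_speed_mps = 10 * MPH_TO_MPS    # ~16 km/h average crossing speed
shortfall_m = 10.0                      # truck was ~10 m (30 ft) short of clearing

tesla_speed_mps = 65 * MPH_TO_MPS       # speed limit, ~104 km/h

# Time from the start of the turn until impact: turn + crossing,
# minus the ~2 s the truck still needed to clear the intersection.
remaining_s = shortfall_m / crossing_speed_mps          # ~2.2 s
time_to_impact_s = turn_time_s + crossing_time_s - remaining_s

print(f"Start of turn to impact: ~{time_to_impact_s:.0f} s")    # ~10 s
print(f"Tesla's distance at start of turn: "
      f"~{tesla_speed_mps * time_to_impact_s:.0f} m")           # ~280 m, as estimated above
```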

At the moment, much of the discussion about the accident centers on the driver’s attention. We will never know whether or when the driver saw the truck. There are several possible scenarios. If we take the time horizon of 10 seconds (= 6 + 6 − 2) before the accident, when the trailer truck initiated the turn, then the Tesla was about 280 meters from the intersection. At this distance, the large trailer truck moving into the intersection would have been clearly visible. A driver engaged in the driving task (not on auto-pilot) could not have failed to see the truck and – given the lack of other nearby traffic or visual distractions – would have noticed with enough lead time that the truck was continuing into the intersection. A step on the brake would have defused the situation and avoided the accident.

The scenario looks very different with auto-pilot. The driver knew that the road went straight for miles, with optimal visibility, which translates into a low overall driving risk. The driver may have paid attention, but not as much as when driving without auto-pilot. When a car drives itself for many miles, a driver won’t be as alert as when he performs the driving function himself. Attention wanes; the truck on the left side may have received only a short glance from the driver. The truck’s intent to make a left turn would have been obvious; but the truck slowed down when it entered the turn about 10 seconds before impact, and the driver would certainly have expected that the truck would come to a stop and that the auto-pilot was also aware of the large truck. Thus even if the driver saw the truck initiate the turn, he would probably not have been concerned or inclined to pay special attention to it. This was just another one of probably thousands of intersections that Joshua Brown, who used the auto-pilot frequently and blogged about it, had passed. His confidence in the Tesla’s handling of intersections may have been high. Although he knew that the auto-pilot is not perfect, he probably did not expect that a large truck would be overlooked. In addition, he was probably aware of a YouTube video entitled “Tesla saves the day”, which had circulated widely a few months earlier and showed a Tesla auto-braking just in time for a car crossing its path from the left.

The critical time window for recognizing the gravity of the situation and acting to prevent the accident was less than 10 seconds; and only 6 seconds before impact was it unmistakably clear that the truck was moving into the intersection instead of coming to a stop. If the driver was not fully focused on the road the whole time but was alert during the 3 seconds between 6 and 3 seconds prior to impact, he could have prevented the accident. But it is unrealistic to expect that a passive driver will become fully focused on the traffic at each and every intersection that a car on auto-pilot passes, and that he will always be alert for hard-to-anticipate, extremely rare but very critical short-term situations.

Even if the driver saw the truck and recognized that it was moving into the intersection 3 to 6 seconds before impact, other problems arise: he has to jump into action and take over from the car. This needs time – both for the decision to revoke control from the car and for physically assuming control of the vehicle. Part of the driver’s brain has to work through the expected behavior of the car: If the car has not yet decelerated, does this mean that it has not seen the large truck at all, or does it mean that it is not necessary to brake (the car may have concluded that the trailer truck will clear the intersection in time)? Could it really be that the car does not see this blatantly obvious trailer truck…? Have I completely overestimated the capability of this car? The shorter the remaining reaction time when the driver realizes the impending crisis, the more dangerous and potentially paralyzing this additional mental load may become.

Developers of driver assistance systems cannot expect that drivers are fully alert all the time and ready to take over in a split second. Moreover, they cannot expect that drivers understand and can immediately recognize deficiencies or inadequacies of the software. Who would have expected that Tesla’s auto-pilot does not recognize a tractor-trailer in the middle of an intersection?

But the key problem is not a software issue. It is the mindset which offloads responsibility from the driving software onto the driver. Developers will be much more inclined to release imperfect software if they can expect the driver to fill any gap. That Tesla uses a non-redundant mono camera is another illustration of the problem. What if the camera suddenly malfunctions or dies on a winding road with the auto-pilot engaged and the driver does not pay enough attention to take over in a split second? How is it possible to release such a system knowing full well that drivers using it will not always pay full attention? This is only possible because we have standards that let developers offload the responsibility to the driver.

The often-raised counter-argument that the Level 2 auto-pilot has already saved lives is not valid: it confuses two different kinds of driver assistance systems – those, such as emergency braking systems, which only take over the driving function for short periods when they are really needed, and those that assume continuous control of the driving function for longer stretches of time and thus lead human drivers to take their minds off the road at least part of the time. Short-term functions such as emergency braking are not controversial. They do not depend on the auto-pilot, and it is these functions, not the auto-pilot, that save lives.

There is only one variant in which software that assumes the driving task continuously, for longer stretches of time, can be developed and released to the market: the autonomous driving system must take full responsibility for the driving task and may not require human supervision when engaged. Thus Levels 4 and up are viable approaches. The Tesla accident does not only expose a software problem; it illustrates the dangers of Levels 2 and 3. These levels must be scrapped from the frameworks!

The left turn problem for self-driving cars has surprising implications

Self-driving car technology is advancing rapidly, but critics frequently point out that some hard problems remain. John Leonard, who headed MIT’s self-driving car project in the 2007 DARPA Urban Challenge, eloquently describes various challenging situations, including hand-waving police officers and left turns in heavy traffic.

The hand-waving police officer problem can be solved easily with a simple workaround: the car detects the hand-waving situation, then dispatches a camera feed to a remote control center and asks a remote human operator for guidance (similar to Google’s patent 8996224).
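A few lines of Python can illustrate the control flow of this workaround. The class and function names below are purely hypothetical; Google’s patent 8996224 does not specify an API:

```python
# Hypothetical sketch of the detect-and-escalate workaround described above.
# None of these names correspond to a real API.

class RemoteControlCenter:
    def request_guidance(self, camera_feed: bytes) -> str:
        # In a real system, a human operator would watch the feed and answer;
        # here we return a placeholder instruction.
        return "WAIT_THEN_PROCEED_AS_OFFICER_DIRECTS"

def handle_situation(hand_signals_detected: bool, camera_feed: bytes,
                     center: RemoteControlCenter) -> str:
    """Escalate to a remote human operator instead of guessing."""
    if hand_signals_detected:
        # Come to a safe stop and ask the control center for guidance.
        return center.request_guidance(camera_feed)
    return "CONTINUE_AUTONOMOUS_DRIVING"

print(handle_situation(True, b"<camera frame>", RemoteControlCenter()))
```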

The left turn problem is more interesting. Such situations occur more frequently, and they do present significant challenges. Self-driving car prototypes have been known to wait for long intervals at intersections before finally making the left turn – sorely testing the patience of human drivers stuck behind them. The video by John Leonard clearly shows how hard it can be to make a left turn when traffic is heavy in all directions and slots between cars coming from the left and the right are small and rare.

How do human drivers handle such situations? First they wait and observe the traffic patterns. If opportunities for left turns are rare, they adjust their driving strategy. They may accelerate faster and try to inch into a smaller slot than usual. Sometimes they move slightly into the lane of cars coming from the left to signal that they are intent on making the turn and expect other cars to make room; or they try to find an intermediate spot between the various lanes and break the left turn down into one move towards this spot, a stop there, and then a second move from the intermediate position into the target lane. Leonard is right that programming such maneuvers into self-driving cars presents a major challenge.

But the problem is more fundamental. When we develop self-driving cars, we gain insights about the domain of driving and extend our knowledge not only about algorithms but also about human driving. To make a left turn, self-driving cars have to analyze the traffic situation at the intersection. They are much better than humans at simultaneously identifying the traffic participants in all directions and detecting their speeds, and they are quite good at anticipating their trajectories. Current driverless car prototypes also have no problem deciding on an appropriate path for the left turn. When a self-driving car hesitates at an intersection, the reason is not a problem with the algorithm but rather that the car finds the safety margins for executing the turn too small in the current situation: the risk is too high. Unfortunately, this problem cannot be solved through better algorithms but only by increasing the level of acceptable risk! The risk of a left turn at an intersection is determined by the layout of the intersection, physics and the range of potential behavior of the other traffic participants, none of which can be changed by the self-driving car.
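A simplified gap-acceptance check makes this concrete: the turn is executed only when the observed gap in cross traffic exceeds the time the car needs to clear the intersection plus a safety margin, and shorter waits can only be bought by shrinking that margin, i.e. by accepting more risk. The numbers below are illustrative assumptions, not values from any real system:

```python
# Illustrative gap-acceptance logic for an unprotected left turn.
# All thresholds are made-up example values.

def accept_gap(gap_s: float, clearing_time_s: float, safety_margin_s: float) -> bool:
    """Turn only if the observed gap covers clearing time plus margin."""
    return gap_s >= clearing_time_s + safety_margin_s

clearing_time = 4.0          # assumed seconds to clear the oncoming lanes
conservative_margin = 3.0    # large margin -> long waits, minimal risk
relaxed_margin = 1.5         # smaller margin -> shorter waits, more risk

for gap in (5.0, 6.5, 8.0):  # observed gaps between oncoming cars, in seconds
    print(gap,
          accept_gap(gap, clearing_time, conservative_margin),
          accept_gap(gap, clearing_time, relaxed_margin))
# With the conservative margin, only the 8 s gap is accepted; relaxing the
# margin accepts the 6.5 s gap as well. The waiting time falls only because
# the acceptable risk was raised, not because the algorithm improved.
```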

Left turns are indeed known to be risky. We may not think about it when we make a left turn, but accident statistics paint a very clear picture. An NHTSA study that analyzed crossing-path crashes found that police-reported crashes involving left turns (913,000) are almost 10 times as frequent as police-reported crashes involving right turns (99,000). If we consider that right and left turns are not equally distributed in normal driving (right turns occur more frequently, but exact data are not available), then the risk of a left turn may be between 10 and 20 times larger than the risk of a right turn. In 2013, crashes between a motorcycle and another vehicle making a left turn cost 922 lives; this amounted to nearly half (42%) of all fatalities in crashes involving a motorcycle and another vehicle. Arbella Insurance reported that in 2013, 31% of its severe accident claims involved left turns. Thus human drivers have little reason to be proud of their left-turn capabilities.
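The 10-to-20-fold range follows from the crash counts once the unequal frequency of left and right turns is taken into account. Here is a quick back-of-the-envelope calculation; the assumed shares of right turns are our guesses, since exact data are not available:

```python
# Back-of-the-envelope estimate of left- vs right-turn risk per turn made.
# Crash counts are from the NHTSA study cited above; the share of right
# turns in normal driving is an assumption.

left_turn_crashes = 913_000
right_turn_crashes = 99_000

print(f"raw crash-count ratio: ~{left_turn_crashes / right_turn_crashes:.1f}x")

for right_turn_share in (0.5, 0.6, 0.7):        # assumed share of all turns
    left_turn_share = 1.0 - right_turn_share
    per_turn_ratio = (left_turn_crashes / left_turn_share) / \
                     (right_turn_crashes / right_turn_share)
    print(f"right-turn share {right_turn_share:.0%}: risk ratio ~{per_turn_ratio:.0f}x")
# -> roughly 9x, 14x and 22x: consistent with the 10-20x range above.
```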

As a consequence, UPS largely eliminated left turns from its delivery routes many years ago. Recently the route-planning app Waze rolled out a new feature that allows users to plan routes without left turns. These two examples show that self-driving cars do not even need the capability of making left turns in heavy traffic. It is possible to get along without such turns.

Thus the left turn problem for self-driving cars leads to the following three insights:

1) The left turn problem is not so much a problem of self-driving cars; it is really a problem of human drivers, who take too many risks at left turns, as we can see from the large number of left-turn accidents and from the risk analysis which self-driving cars carefully perform when making a left turn. Autonomous cars should never make left turns as readily and rapidly as human drivers do. As human drivers we need to be more self-critical about our own capabilities and more ready to question our assumptions about driving, instead of using our own driving behavior as the implicit standard for self-driving cars.

2) We need to carefully consider the acceptable risk profiles for self-driving vehicles. Risk profiles are not black and white; there are more alternatives than the high levels of risk that we take every day as human drivers without much thinking and the minimal-risk strategies adopted by all current self-driving car prototypes. It would be unacceptable to let self-driving cars barge into dense traffic in the way we sometimes consider viable and mostly get away with. But it would be possible to reduce the time that a driverless car has to wait when turning or merging by allowing the car to increase the acceptable risk by a small amount if clearly defined conditions are met. In this area, much work and thinking is required. Expecting a self-driving car to minimize all conceivable risks and then operate as quickly as human drivers is a contradiction in itself. Instead of minimizing all risks, we need to seriously discuss what kinds of small risks should be allowed in well-defined situations.

3) We should not underestimate the creativity of the market in dealing with the remaining problems of self-driving cars. Many of the frequently-cited problems of self-driving cars have practical workarounds that don’t require much intelligence (three right turns instead of a left turn, remote assistance for the hand-gesture problem, limiting the first self-driving taxis to snow-free regions for the snow problem, etc.).

German railways to introduce autonomous long distance trains by 2023

The CEO of Germany’s railways, Ruediger Grube, does not want to fall behind the auto industry in autonomous mobility and has announced that Deutsche Bahn (German Railways) will operate trains on parts of the railway network with full autonomy “by 2021, 2022, or 2023”. Tests are already underway on a part of the German railway network in Eastern Germany.

The technology for autonomous long-distance trains differs greatly from the technology for the autonomous metro trains and subways which already operate in many cities of the world. In the latter case, most of the intelligence for autonomous driving is embedded in the railroad infrastructure and a centralized controller that is in constant communication with all trains; the trains themselves, in contrast, have little intelligence and do not operate autonomously. This approach is not viable for long-distance networks because upgrading thousands of kilometers of track with controllers and sensors would be much too costly. Instead, most of the intelligence has to be embedded within the locomotive. Fully autonomous long-distance trains therefore need to be equipped with sensors and algorithms that are very similar to those used in self-driving cars.
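The contrast between the two architectures can be sketched in a few lines of Python. This is only an illustration of where the intelligence sits; all class names are hypothetical:

```python
# Illustrative contrast between the two automation architectures described
# above; all names are hypothetical.

class MetroTrain:
    """Metro/subway model: little onboard intelligence; the train simply
    executes commands from a centralized infrastructure controller."""
    def step(self, command_from_central_controller: str) -> None:
        print(f"Executing central command: {command_from_central_controller}")

class LongDistanceTrain:
    """Long-distance model: the locomotive carries its own sensors and
    perception/decision software, much like a self-driving car."""
    def step(self, lidar_ranges_m: list[float]) -> None:
        obstacle_ahead = any(r < 200.0 for r in lidar_ranges_m)
        print("Braking" if obstacle_ahead else "Maintaining speed")

MetroTrain().step("HOLD_AT_PLATFORM")
LongDistanceTrain().step(lidar_ranges_m=[450.0, 320.0, 180.0])
```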

The advantage of self-driving trains does not lie so much in cost reduction as in the ability to increase network capacity, because trains can be operated at higher frequencies and shorter distances. This also increases the flexibility of rail-based transportation and makes new services possible. These capabilities are essential if railroads want to survive the greatly intensifying competition from fully autonomous self-driving cars, trucks and buses.

German unions immediately criticized these plans. But they fail to understand that fully autonomous road-based transportation will pose an enormous challenge to the railroads. Deutsche Bahn is on the right track and should do everything to accelerate the introduction of autonomous long-distance trains.

Cities around the world jump on the self-driving car bandwagon

Autonomous vehicles will have a major impact on urban transportation. Mayors, transportation companies and urban planners are increasingly taking notice. The number of cities which recognize the benefits of self-driving cars and buses is increasing rapidly. Below is a list of some cities around the world which have launched or are working to launch activities focused on self-driving cars and buses:

San Francisco, Austin, Columbus, Denver, Kansas City, Pittsburgh, Portland (Oregon): These seven cities strive to be pioneers in integrating self-driving car technology into their transportation networks. Each of these cities has already received a 100,000 USD grant from the US Department of Transportation (Smart City Challenge) to refine their earlier proposals on how to transform their urban transportation systems. In June, Secretary of Transportation Anthony Foxx will award a 50 million USD grant to one of these seven cities to become the first city to integrate self-driving cars and related technology into its urban transportation system. San Francisco, for example, has proposed phased plans to deploy autonomous buses and neighborhood shuttles. The city has also gathered pledges of an additional 99 million USD from 40 companies in case it receives the 50 million USD grant.

Milton Keynes, UK: Trials of self-driving pods have already begun in this British city. The electric pods will transport people at low speed between the train station and the city center. Additional UK cities experimenting with self-driving car technologies are London (self-driving shuttles, Volvo Drive Me London), Coventry and Bristol.

Singapore: This may be the most active and visionary city with respect to driverless transportation. Several years ago it launched the Singapore Autonomous Vehicle Initiative, partnered with MIT on future urban mobility and initiated several projects aimed at improving urban transportation through self-driving car technology. The city has already set up a testing zone for self-driving cars and is conducting several trials in 2016.

Wageningen / Dutch province of Gelderland (Netherlands): A project with driverless shuttles is already underway. The self-driving WEpods aim to revolutionize public transport and provide a new, cost-effective way to bring public transportation to under-served areas.

Wuhu, China: According to Baidu’s head of self-driving cars, self-driving cars and buses will be introduced into the city of Wuhu over the next five years.

Beverly Hills, USA: The city council of Beverly Hills has just passed a resolution aimed at the long-term adoption of self-driving cars. The resolution initiates first activities towards that goal but does not yet commit major resources.

Shared autonomous vehicles could increase urban space by 15 percent

A recent UK study has looked at the transformative implications of self-driving vehicles for cities. The authors found that shared autonomous vehicles could increase available urban space by 15 to 20 percent, largely through the elimination of parking spaces. Today central London has about 6.8 million parking spaces and a parking coverage of around 16%! Many large cities have even larger parking coverage ratios of up to 30%. Freeing up this space would make our cities greener, increase quality of life and also create the potential for additional housing.

Autonomous vehicles will also make rural communities more attractive, because shared travel to nearby cities becomes widely available and affordable and does not lead to a loss of productive time.

The authors also consider autonomous-vehicle-only development areas and highways restricted to autonomous vehicles. This could reduce costs, as lane markings and signage would no longer be needed, lanes could be narrower and throughput per lane would be higher.

Overall, the authors – a cooperation between professional services firm WSP | Parsons Brinckerhoff and architect-planners Farrells – conclude that autonomous vehicles will be transformational: future mobility may be headed towards a shared pay-as-you-go transport system. The study provides many key points which infrastructure planners and legislators need to consider!

Source: “Making better places: Autonomous vehicles and future opportunities”, 2016, by WSP | Parsons Brinckerhoff and Farrells

Annual report warns that driverless cars could disrupt Allstate’s insurance business

In its annual report for 2015, which was just filed with the SEC, US insurance company Allstate warns that autonomous cars could disrupt its business model. This is the first time that such a risk has been mentioned in the risk section of its annual report.

The following statement appears on page 20 of Allstate Corporation’s annual report for fiscal year 2015, as filed with the SEC on form 10-K on 2016-02-19 (link to download page):

Other potential technological changes, such as driverless cars or technologies that facilitate ride or home sharing could disrupt the demand for our products from current customers, create coverage issues or impact the frequency or severity of losses, and we may not be able to respond effectively.

The company clearly sees the combined risk of the introduction of autonomous vehicles – which will significantly reduce accidents – and the increased adoption of mobility services (which will become much more convenient and cost-effective through autonomous vehicle technology). The company also realizes that it will be very difficult to compensate for the resulting losses to its business model.

Sources: Allstate, ibamag.com, Kargas

Google prepares for manufacturing of driverless car

Google continues to push for the introduction of its self-driving cars on public roads. After positive statements by NHTSA and overtures from the United Kingdom and the Isle of Man to test its cars there, job postings show that Google aims to significantly grow its self-driving car team. The 36 job descriptions below show that Google is expanding activities on all aspects of its self-driving car, including manufacturing, global sourcing, automotive noise and vibration, electrical engineering etc. It remains unlikely that Google intends to manufacture the cars itself; rather, the job postings complete the picture that Google wants to build a manufacturing-ready reference design of a fully self-driving car, which it can either use to have its cars manufactured by a supplier or which can inform licensing and cooperation discussions with OEMs from the auto industry.

The job postings below were obtained from the Google job search engine on 2016-02-13 with a reusable query. All 36 jobs are for the Self-Driving Car team at Google-X:

  1. Mechanical Global Supply Chain Manager
  2. Mechanical Manufacturing Development Engineer
  3. Manufacturing Process Engineer
  4. Manufacturing Supplier Quality Engineer
  5. PCBA and Final Assembly Global Supply Manager
  6. Automotive NVH (Noise, Vibration, Harshness), Lead
  7. Manufacturing Test Engineer
  8. Reliability Engineer, Vehicle Test Lead
  9. Reliability Engineer
  10. Product Manager, Vehicle 
  11. Global Commodity Manager
  12. Industrial Designer
  13. Marketing Manager
  14. Technical Program Manager, Vehicle Safety
  15. Operations Program Manager
  16. Policy Analyst
  17. Head of Real Estate and Workplace Services
  18. Product Manager, Robotics
  19. User Experience Researcher
  20. Mechatronics Engineer
  21. Electrical Engineer
  22. Mechanical Engineer, Lead
  23. Systems Engineer, Motion Control
  24. Systems Engineer, Compute and Display
  25. Reliability Engineer, Lead
  26. Vehicle Systems Engineer
  27. Perception Sensing Systems Engineer
  28. Embedded Software Engineer
  29. Electrical Validation Engineer
  30. Systems Engineer
  31. Radio-Frequency Test Engineer
  32. Researcher/ Robotics Software Engineer
  33. Radio Frequency/High Speed Digital Hardware Design Engineer
  34. Camera Hardware Engineer
  35. Mechanical Engineer, Laser
  36. HMI Displays Hardware Engineering Lead


Baidu expects autonomous buses to become first wave of self-driving vehicles

Chinese search engine Baidu entered the race for self-driving vehicles in 2014. In a partnership with BMW, the company presented an early prototype of an autonomous car at the end of 2015. Baidu’s approach mimics Google’s in many ways: like the first Google prototypes of 2010, the car uses the (aging) Velodyne 64 Lidar as its main sensor; Baidu’s approach also relies on detailed mapping, which fits well with Baidu’s overall mapping strategy. Baidu also aims to diversify its business model by leveraging its know-how in artificial intelligence, and has transferred its auto-related activities into a separate division, a move that Google made last year by restructuring into Alphabet. There are some differences: unlike Google, Baidu does not seem to put much emphasis on the sensors; it does not seem to experiment with its own sensors, and the configuration of sensors indicates that certain situations in which a car may find itself have not been considered yet.

Baidu’s vision of how self-driving vehicles will be adopted also differs somewhat from Google’s. Whereas Google has focused on individual cars and is testing electric two-seaters which could easily become robotaxis, Baidu expects the first wave of self-driving vehicles to be autonomous buses or shuttles. In a recent online interview, Andrew Ng, Baidu’s Chief Scientist, argued that buses which service a fixed route or a small defined region will be the best starting point. He expects a large number of such vehicles to be in operation within three years (early 2019) and mass production to be in full swing within five years (2021).

Andrew Ng correctly pointed out that autonomous buses operating on fixed routes or in small regions would have the advantage that care could be taken to ensure that the routes are well maintained and free of construction (or that construction sites are clearly indicated in the map).

Unfortunately, Andrew Ng’s argument that driving on predefined routes would enable the vehicles to avoid “corner cases–all the strange things that happen once per 10,000 or 100,000 miles of driving” (source) is flawed. He argues that machine learning cannot prepare for these corner cases and that driving in a restricted, well-defined environment is therefore the solution. But corner cases can happen anywhere; it is impossible to guarantee that strange situations cannot occur on well-mapped and well-known routes. Pedestrians can suddenly appear in areas that are closed to pedestrians, obstacles may appear on the road, an oil spill can occur, the road can suddenly be flooded, etc. Building software that can reliably handle even the most challenging situations is a hard task and needs to combine machine learning, an enormous testing program (usually combined with knowledge acquisition and machine learning), careful and very extensive risk analysis and risk modeling, and purpose-built test scenarios which challenge the capabilities of the cars both in simulators and in staged test cases in the real world.

We have pointed out for the past five years that the switch towards shared mobility services based on fully autonomous vehicles will be the great transformation brought about by self-driving car technology. This is the reason why auto makers have been so reluctant to push fully autonomous driving, and why it provides avenues for new entrants such as Google, Baidu, EasyMile, Bestmile, Zoox, potentially Apple, and others to capture a significant share of the world’s spending on personal mobility. There are many reasons why the first fully autonomous vehicles to appear on our roads will be robotaxis or self-driving buses, not least that many current projects focus on such autonomous mobility services. Examples are: WEpods (Netherlands), CityMobil2 (Greece and EU), One-North (Singapore), Sentosa (Singapore), EasyMile (USA, California), Google self-driving pods (USA, California and Texas), Milton Keynes driverless pods (United Kingdom), Ultrapods (United Kingdom), Bestmile (Switzerland), De Lijn (Belgium), RobotTaxi (Japan), Baidu (China), Yutong Bus (China).

In summary, Baidu’s focus on self-driving buses adds weight to the expectation that shared mobility services based on driverless pods and buses will drive the initial adoption of autonomous vehicles. Both self-driving cars and buses have to solve the problem of autonomous driving, and the same technology can be applied in both scenarios. This is why the technology which Google currently refines with its 53 self-driving cars can easily be transferred into self-driving buses and shuttles, and why Baidu’s current prototype is not yet a bus but rather a converted BMW. The pioneers who solve the problem of fully autonomous driving will find enormous business potential for self-driving taxis, shuttles, consumer cars, trucks and machines. The race is on!