
expert reaction to reports of fatal crash involving an autonomous car in the US

A 49-year-old woman in Arizona has been killed in a collision with an autonomous car in the US.

 

Prof Neville Stanton, Chair in Human Factors Engineering, University of Southampton, said:

“I don’t know the details about the Uber accident or the particular technologies in the car.

“In my research, we have focused on the role of the driver within the vehicle.  Typically the driver is expected to monitor the automated technology and the road environment simultaneously, and decide if they need to intervene or not.  This is far more work than driving manually, akin to watching over a learner driver.  Humans are not good at extended vigilance tasks of this nature, and typically their minds wander off.  We need to work on better ways of keeping the driver engaged in the driving task, or wait until we can produce SAE level 4 or 5 vehicle automation (which frees the driver from this extended vigilance task).  We don’t yet know what the factors were in this case, but we know from previous work that the middle ground of drivers supervising automation is not working out.”
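For reference, the SAE levels Prof Stanton mentions form a six-point scale (SAE J3016). The sketch below is a minimal Python illustration of the distinction he draws, with the level descriptions paraphrased for illustration rather than quoted from the standard.

```python
# SAE J3016 driving automation levels, paraphrased for illustration.
# At levels 0-2 the human drives and must supervise at all times;
# at level 3 the human must be ready to take over on request;
# at levels 4-5 no human supervision is needed within the design domain.
SAE_LEVELS = {
    0: "No automation: human performs all driving tasks",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed support; driver must monitor",
    3: "Conditional automation: system drives; driver takes over on request",
    4: "High automation: no driver needed within a limited operational domain",
    5: "Full automation: no driver needed anywhere a human could drive",
}

def driver_must_stay_vigilant(level: int) -> bool:
    """Stanton's point: below level 4 the human retains a vigilance role,
    which humans are poor at sustaining; levels 4-5 remove that burden."""
    return level < 4

if __name__ == "__main__":
    for level, description in SAE_LEVELS.items():
        role = "vigilance required" if driver_must_stay_vigilant(level) else "hands-off"
        print(f"Level {level} ({role}): {description}")
```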

 

Prof Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics, University of Sheffield, said:

“Autonomous vehicles present us with a great future opportunity to make our roads safer.  But the technology is just not ready yet and needs to mature before it goes on the road.  Google are a good example, having tested continually for over a decade.  Uber, like Tesla, are rushing headlong into this too quickly.  Too many mistakes and the public may turn its back on the technology.  A better approach is to use many of the features of self-drive cars to make our current vehicles less accident-prone.  An incremental approach is not so exciting, but it will be much safer in the long run.”

 

Prof Martyn Thomas, Professor of IT, Gresham College, London, and Director and Principal Consultant, Martyn Thomas Associates Limited, said:

“The technology of self driving cars is not yet fit for purpose.  There is too much hubris and not enough professional safety engineering and humility.  I hope that this tragedy causes the industry and policymakers to pause and then set detailed criteria before resuming testing.”

 

Dr Gabriel Brostow, Associate Professor (Reader) in Computer Science, UCL, said:

“The accident where a self-driving Uber car killed Ms Elaine Herzberg is a terrible tragedy.  I have some expertise on the computer-vision side of related technologies.  Given today’s state of the art, I’m confident that a good human driver would have reacted slightly better, perhaps anticipating the situation and either slowing or swerving to avoid the collision.  However, like an airplane’s black box, we’ll need to examine all the on-board sensor data, especially the videos.  Only then will we know if a typical driver would have been involved in the same accident.

“That stretch of road (Mill Ave. heading north to Curry Rd.) is four lanes wide, plus a bike lane and eventually a fifth turning lane.  The speed limit there is 45 mph (72 km/h).  Also, if the streetlights there are working, it has plenty of night-time coverage.  Assuming the vehicle was obeying the speed limit, I only see one or two places where a person could run/jump out from behind the roadside foliage to surprise a camera-only system (like a human), when this vehicle almost certainly had cameras and additional sensors too.

“Companies like Uber, Bosch, and the UK’s FiveAI are, for the most part, far beyond the simple challenges of making fast fault-tolerant software.  This was almost certainly a scenario that would have challenged a human driver, or was too far outside the normal scenarios used for training the existing machine learning models.  All the information, along with video, will be available to the investigators.  The UK laws on this matter are still emerging, but current guidance for technology developers does hint that what a human would do is part of the consideration, as is the extent of the in-lab and (supervised) on-road testing used for the deployed systems.”

 

Dr Aimee van Wynsberghe, Assistant Professor in Ethics and Technology, Technical University of Delft, said:

“This is an example of a social experiment in which some participants aren’t given the option to provide consent or opt-out and now one of them is dead.

“When industry tells us ‘autonomous cars are safer’ we should remember that this is still a hypothesis, premised on an improbable world in which human behaviour is predictable and all cars on the road are autonomous.

“Current policy surrounding autonomous cars neglects recommendations on the interface between car and pedestrians: how should a car communicate to a bystander that it is autonomous?”

 

Kate Carpenter, member of the Chartered Institution of Highways and Transportation’s Road Safety Panel, said:

“We don’t know the circumstances of this sad incident yet so we can’t tell why it might have happened.  People often predict collisions can’t happen with the radar/lidar tech of connected and autonomous vehicles, missing that the scenario of a pedestrian stepping in front of a car just as it passes can still happen.  No amount of technology will enable a car to stop in less than its shortest possible physical stopping distance (i.e. under maximum braking) from the first point at which it became evident an incident was possible (i.e. if a pedestrian steps off a kerb and a car immediately brakes at full force, it could still hit them).

“The technology is already better than humans in many respects; it’s never drunk, drugged, tired, arguing, distracted or inattentive.  However, it can make some mistakes that humans don’t because, for example, we have theory of mind: we can observe a human and understand that they have opinions, intentions and perspectives (physically what they see, and conceptually they may view a car as something that will just stop for them at low speed).  AI is already starting to do this, but it’ll take a while.

“The victim of the first fatal car crash in the UK (Bridget Driscoll, a pedestrian hit by a car in 1896) died in an impact at no more than 4 mph, so lower speed is still no guarantee of zero pedestrian fatalities.”
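Kate Carpenter’s stopping-distance point can be illustrated with basic kinematics. The sketch below is a minimal Python example, assuming a dry-road friction coefficient of 0.7 and a 0.5-second detection-plus-actuation delay; both values are illustrative assumptions, not figures from this incident.

```python
# Minimal stopping-distance sketch: even under maximum braking, a car
# at the 45 mph limit needs tens of metres to stop. Assumed values
# (illustrative only): dry-road friction 0.7, 0.5 s sensing + braking delay.
G = 9.81         # gravitational acceleration, m/s^2
MU = 0.7         # assumed tyre-road friction coefficient (dry asphalt)
DELAY_S = 0.5    # assumed detection + brake actuation delay, seconds

def stopping_distance_m(speed_mph: float) -> float:
    """Distance covered during the delay plus the braking distance
    v^2 / (2 * mu * g) from classical kinematics."""
    v = speed_mph * 0.44704             # convert mph to m/s
    reaction = v * DELAY_S              # distance covered before braking starts
    braking = v ** 2 / (2 * MU * G)     # distance under maximum braking
    return reaction + braking

if __name__ == "__main__":
    for mph in (20, 30, 45):
        print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
```

On these assumptions a car at 45 mph needs roughly 40 m to come to rest, so a pedestrian appearing much closer than that cannot be avoided by braking alone, whatever the sensing technology.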

 

Prof John McDermid, Department of Computer Science, University of York, said:

“It is very unfortunate that a fatality has arisen, but it serves to draw attention to the need for widely accepted approaches to assessing the safety of autonomous systems in a supportive way that enables the benefits to be realised, rather than blocking advancement of the technology.

Do we know what happened in this case, and what more do we need to know before we can say what went wrong?

“More needs to be known about the exact nature of the accident. If the pedestrian ran out from behind a parked vehicle very close in front of the Uber car, then it may not have been possible (stopping distances too long) for the accident to be avoided (whether the vehicle was in autonomous mode or manually driven). On the other hand, if the vehicle simply did not detect the person (moving at night), then there is a problem with the vehicle capability, e.g. sensors or data analysis. Thus we need to understand more about the specifics before we can identify lessons to be learnt.

What is supposed to happen when a driverless car encounters a pedestrian?

“The car should take avoiding action. In principle the car should always be monitoring pedestrian behaviour (even trying to predict it) so it can avoid the accident. This should happen even when the pedestrians are not using crosswalks (pedestrian crossings in UK English).

What stage are driverless cars at now in terms of development (in the US and in the UK)?

“Still evolving! One of the problems is that we do not have good frameworks for assessing safety of such systems, especially where they are learning. This serves as a reminder that the safety processes need to catch up with the technology, and also be practical in dealing with the capability of current systems.

What regulation is in place for the development of this technology?

“In the US, individual States have introduced laws, but they vary from State to State. In the UK there has been agreement for testing in a number of areas of the UK, and more is planned. The Department for Transport is considering regulatory changes. However, in general, the technology is ahead of the legislation.

How can the safety of driverless cars be tested and monitored, and how has it been so far?

“In many cases, companies are required to report incidents and accidents to authorities, e.g. Google have done this in California (for Google cars), but this is not uniform and some companies have stopped producing reports, e.g. reportedly Waymo. More needs to be done to ensure that accidents and incidents are reported and analysed so we can learn from experience and improve safety, as has been done in aviation.

What kind of technology is in place in driverless cars as compared with e.g. the black box recorder in a plane; will we ever know what happened and why?

“In general, no ‘black boxes’, but some manufacturers have included such systems in their development vehicles (and this may happen in production vehicles). Work is needed to define what information should be collected to aid accident and incident investigation, especially where systems learn in operation. This is an area where regulators need to co-operate to produce consistent standards to apply to all classes of vehicle.”

 

Prof Duc Pham OBE FREng, Chance Professor of Engineering, University of Birmingham, said:

“There are a number of questions that must be answered in this case: what was the speed of the vehicle? Was it travelling along a straight line or was it turning? Did the sensors operate properly? Did the brakes function as designed? What were the weather and road conditions at the time? Was the control system working correctly?  The car should have a black box recorder storing all this information.  I am sure collision avoidance was one of the foremost design considerations for the vehicle and plenty of safety features would have been built into it.  Clearly, however, autonomous vehicles are still a work in progress and more research and development is needed to ensure they are safe for all road users in the future.”
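Prof Pham’s questions map directly onto the fields a vehicle ‘black box’ would need to log. The sketch below is a hypothetical record structure; the field names and layout are invented for illustration and do not describe any manufacturer’s actual recorder.

```python
# Hypothetical event-data-recorder entry covering the questions Prof Pham
# lists: speed, trajectory, sensor health, brake state, conditions and
# control-system status. Field names are illustrative, not a real schema.
from dataclasses import dataclass

@dataclass
class CrashLogEntry:
    timestamp_utc: float        # seconds since epoch
    speed_mps: float            # vehicle speed at this instant
    steering_angle_deg: float   # straight line vs turning
    sensors_ok: bool            # did the sensors operate properly?
    brake_command: float        # 0.0 (none) to 1.0 (maximum braking)
    brake_applied: bool         # did the brakes function as commanded?
    weather: str                # e.g. "clear", "rain"
    road_surface: str           # e.g. "dry", "wet"
    control_mode: str           # "autonomous" or "manual"
    control_fault: bool         # was the control system working correctly?

# Investigators would replay a time-ordered sequence of such entries
# to reconstruct the seconds leading up to a collision.
```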

 

Dr Matthew Channon, an expert on the legal issues connected to driverless vehicles from the University of Exeter Law School, said:

“This is very sad, and not the first time a person has been killed by a driverless car. Lessons need to be learned so similar tragedies are avoided in future.

“Driverless cars are designed to detect pedestrians, cyclists and other vehicles through sensors and then stop within a reasonable distance. Those vehicles being tested are designed to act cautiously if anything is in their vicinity.

“In the UK and US there are different types of driverless cars being tested on roads – some have automated functions but are mainly controlled by a driver and others are able to handle a variety of different tasks on their own but are still monitored by a driver sitting in the car. Data is being gathered during testing to gauge the safety of driverless vehicles, as well as information from cameras on board.”
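Dr Channon’s description of detect-then-stop behaviour amounts to a simple decision rule. The sketch below is a minimal, self-contained illustration; the friction, delay and safety-margin values are assumptions for illustration, not any vendor’s actual control law.

```python
# Minimal detect-and-brake sketch of the behaviour Dr Channon describes:
# command maximum braking when a detected road user is closer than the
# estimated stopping distance plus a margin. All values are illustrative.
SAFETY_MARGIN_M = 5.0   # assumed extra buffer beyond the stopping distance

def stopping_distance_m(speed_mph: float, mu: float = 0.7, delay_s: float = 0.5) -> float:
    """Delay distance plus braking distance v^2 / (2 * mu * g)."""
    v = speed_mph * 0.44704             # mph to m/s
    return v * delay_s + v ** 2 / (2 * mu * 9.81)

def should_emergency_brake(speed_mph: float, obstacle_range_m: float) -> bool:
    """Brake if the obstacle sits inside the stopping envelope plus margin."""
    return obstacle_range_m <= stopping_distance_m(speed_mph) + SAFETY_MARGIN_M

# A pedestrian detected 30 m ahead of a car doing 45 mph is already well
# inside the ~40 m stopping envelope, so the system must also slow
# pre-emptively when anything is in its vicinity, as Channon notes.
print(should_emergency_brake(45, 30))   # True: brake immediately
```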

 

Declared interests

Prof Neville Stanton: “I am funded by Jaguar Land Rover and the UK-EPSRC grant EP/N011899/1 as part of the jointly funded Towards Autonomy: Smart and Connected Control (TASCC) project.  I am a Professor of Human Factors Engineering at the University of Southampton.”

Prof Noel Sharkey: “I have no vested interests.”

Dr Gabriel Brostow: “I’m an Associate Professor at UCL.  I lead a research group in Computer Vision and Machine Learning.  I teach Machine Vision to ~130 MSc students each year, who then go work at Amazon, Facebook, Apple, etc.  I’m also co-founder of a new startup (Matrix Mill Ltd), which is not so relevant here, but we do develop Machine Learning technology for training Deep Neural Networks.  The views given here are my views, not those of my employers.”

Dr Aimee van Wynsberghe: “I am an Assistant Professor at the Technical University of Delft in robot ethics with a National grant to study ethical issues and service robots.  In addition, I am co-director of the Foundation for Responsible Robotics, a not-for-profit in the Netherlands.”

Kate Carpenter: “No interests in this to declare; Jacobs designs and manages transport infrastructure for strategic and local authorities but has no interest in this incident, Uber, or the industry specs etc. for CAVs.”

Prof John McDermid: “I am the Director of the Assuring Autonomy International Programme, funded by the Lloyd’s Register Foundation to develop approaches to assuring (gaining confidence in) and regulating robotics and autonomous systems. The points raised above are exactly the sorts of issues that this programme will address. I have no direct financial involvement with Uber or other car manufacturers, but my Department does work with Jaguar Land Rover (JLR), including collaborative research on safety of autonomous systems.”

Prof Duc Pham: “I have no competing interests.”

None others received.
