
Expert reaction to news that a factory worker in Germany died after being crushed by robotic machinery

A man has died after being crushed by a robot at a Volkswagen plant in Germany, with initial reports suggesting that human error was to blame.


Dr Ron Chrisley, Director, Centre for Cognitive Science, Department of Informatics, University of Sussex, said:

“Even if safety standards continue to rise, meaning that the chance of an accident happening in any given human/robotic interaction will go down, we can expect more and more incidents like this to occur in future, simply because there will be more and more cases of human/robotic interaction.

“Although there is a sense in which it is legitimate to refer to this as a case of “Robot kills worker”, as some reports have done, it would be misleading, verging on irresponsible, to do so. It would be much better to express it as a case of “worker killed in robot accident” or similar. Why? Because robots, despite what one might be encouraged to believe from fiction or recent alarmist worries, themselves have no real intentions, emotions, purposes, etc. They can only kill in the sense that a hurricane can kill; they cannot kill in the same sense that some animals can, let alone in the human sense of murder. It is precisely because of misunderstandings on this point that public commentators have an obligation to use language which minimises the chance of false anthropomorphising.

“As robots become more prevalent in society, more and more it will seem like they actually have their own autonomy, allowing them to form their own purposes, goals and intentions, for which they can and should be held responsible. Although there may eventually come a day when that appearance is matched by reality, there will be a long period of time, which has already begun, in which this appearance is false. Robots are not autonomous in this sense, and are not responsible for what they do. But we are already tempted to think of human-robotic interactions in terms of “what humans are responsible for” vs “what robots are responsible for”, despite the latter class being empty. This raises the danger of scapegoating the robot, and failing to hold the human designers, deployers and users involved fully responsible. For example, one report quoted VW spokesman Heiko Hillwig as saying that “initial conclusions indicate that human error was to blame, rather than a problem with the robot”. This choice of words, whether Hillwig’s or the reporter’s, unfortunately could invite the impression that in some situations, the robot might have been to blame, just not in this case. There is a perfectly legitimate sense in which this incident might have involved a “problem with the robot”, but it should not be contrasted with human error; rather, it should be seen as a special case of it. If there is a “problem with the robot” (be it faulty materials, a misperforming circuit board, bad programming, poor design of installation/operation protocols) those faults, and/or not anticipating them, are, in some sense, a case of human error. Yes, there are industrial accidents where no human or group of humans is to blame; but we mustn’t be tempted by the appearance of agency in current and near-future robots to see “problems with the robot” as therefore not a case of human error or responsibility.”
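Dr Chrisley's first point is essentially arithmetical: expected incidents scale with both the per-interaction accident rate and the number of interactions, so incidents can rise even as robots get safer. A minimal Python illustration, with every figure invented purely for the example:

```python
# Expected incidents = per-interaction accident probability * number of interactions.
# All numbers here are hypothetical, chosen only to illustrate Dr Chrisley's point.
p_today, n_today = 1e-7, 1_000_000_000       # accident rate and yearly interactions now
p_future, n_future = 5e-8, 20_000_000_000    # rate halves, but interactions grow 20-fold

print(p_today * n_today)    # 100.0 expected incidents per year today
print(p_future * n_future)  # 1000.0 expected incidents, despite safer robots
```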


Dr Blay Whitby, Lecturer in Computer Science and AI, University of Sussex, said:

“It’s important to understand that with present technology we cannot ‘blame’ the robot. They are not yet at a level where their decision-making allows us to treat them as blameworthy. This unfortunate accident is technically and morally comparable to a machine operator being crushed because he didn’t use the safety guard. In this case it’s more complex and therefore more forgivable because ‘the safety guard’ was provided by computer software and he was in the process of setting it up.

“It is important for journalists to take an interest in this sort of event because, in an increasingly automated world where we delegate more and more decision-making to machines of various sorts, there should be much more public awareness of the technology and public scrutiny of the ethical issues involved.”


Prof Noel Sharkey, Emeritus Professor of AI and Robotics, University of Sheffield, said:

“Robots do not act of their own volition and would not attack a human unless programmed to do so. Industrial-strength robots can be very powerful and usually have safety protocols. But of course we have human errors in operation or programming, as well as breakdowns, and accidents happen. We could see many more of these as the current robotics revolution progresses.”


Prof Alan Winfield, Professor of Electronic Engineering, Bristol Robotics Laboratory, University of the West of England (UWE), said:

“Without knowing the details I can only speculate, but my guess is that the robot was an industrial robot “multi-axis manipulator” of the kind that have been used for decades in car assembly plants. Such robots are very dangerous, and are normally operated in safety cages or manufacturing cells designed to keep humans out. Safety cages are normally fitted with devices that automatically cut the power to the robot if, for instance, the door of the safety cage is opened. My guess is that the safety devices – which are not part of the robot – either failed, or were deactivated, with tragic consequences.”
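The safety-cage arrangement Prof Winfield describes is, at heart, an interlock: power is permitted only while every guard condition holds, and opening the cage door removes that permission. A minimal sketch of the logic in Python, with hypothetical sensor names (real interlocks are implemented in certified safety relays and PLCs, not application code):

```python
from dataclasses import dataclass

@dataclass
class CageSensors:
    """Hypothetical guard signals for a caged industrial manipulator."""
    door_closed: bool
    light_curtain_clear: bool
    estop_released: bool

def interlock_ok(s: CageSensors) -> bool:
    """Permit power only while every safety condition holds (fail-safe AND)."""
    return s.door_closed and s.light_curtain_clear and s.estop_released

# Opening the cage door (or tripping any guard) removes permission to energise.
print(interlock_ok(CageSensors(True, True, True)))   # True  -> power may flow
print(interlock_ok(CageSensors(False, True, True)))  # False -> power is cut
```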


Prof Duc Pham, Chance Professor of Engineering and Head of Mechanical Engineering, University of Birmingham, said:

“Fortunately, serious accidents involving robots are rare. Nevertheless, work must continue to develop and apply robots that can work safely alongside people. A big step in this direction has been made in recent years, with the creation of collaborative robots (or “cobots”) having built-in systems to limit the amount of force they can exert so that they can share their workplace with people. I envisage that such cobots will be adopted widely in the future.”
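The force limiting Prof Pham mentions is, in outline, a monitoring rule: halt the moment measured contact force exceeds a set limit, the approach standardised as "power and force limiting" in ISO/TS 15066. A minimal sketch, with a hypothetical threshold and simulated sensor readings:

```python
# Hypothetical power-and-force-limiting check for a cobot. The threshold and
# readings are invented for illustration, not taken from ISO/TS 15066 tables.
FORCE_LIMIT_N = 140.0  # example contact-force limit, in newtons

def safe_to_continue(measured_force_n: float) -> bool:
    """Allow motion only while measured contact force stays under the limit."""
    return measured_force_n <= FORCE_LIMIT_N

for force in (5.0, 60.0, 155.0):  # simulated force-sensor readings
    print(force, "->", "continue" if safe_to_continue(force) else "protective stop")
```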


Declared interests

Dr Ron Chrisley: “I have no conflicts of interest with respect to this news. I have published research on robot ethics; I am one of the directors of EUCog (the European Society for Cognitive Systems); I have organised conferences on Cognitive Systems in society; and I have published research and edited journal issues on machine consciousness, as well as research with robots.”

Dr Blay Whitby: “I’m an expert on robot ethics. I have published a number of relevant papers and served on a number of ethics committees in this area but I don’t have any interests, paid or unpaid, that affect my ability to comment on this accident.”

Prof Noel Sharkey: “I am an independent with no interests to declare.”

No other interests received.
