The ‘robot nurse’ Twendy-One, made by Waseda University in Tokyo, demonstrates how safe it is by helping a student out of bed and into a wheelchair. | Image: Keystone/AP Photo/Koji Sasahara.

Cooperative machines work side by side with humans on the factory floor, robots care for patients in retirement homes, and they weed crops for farmers. Artificial intelligence is increasingly coming into direct contact with people. And accidents occur on a regular basis. In 2015, for example, a robot crushed a worker to death in a VW factory in Germany. And several people have already died in crashes involving self-driving cars. “That’s just the beginning”, says Nora Markwalder, an assistant professor at the Law School of the University of St. Gallen, who researches new technologies and criminal law. Autonomous systems are becoming more independent and entering ever more spheres of our lives. This raises legal questions. Who is to blame if a robot nurse lets an elderly person fall, if a police robot beats up a passer-by so badly that he ends up in hospital, or if a chatbot instigates a murder?

“Most people would find it absurd to punish a robot”. Nora Markwalder
Off with its processor!

“Up to now, it’s not been clarified who bears responsibility if a robot commits a criminal act”, says Markwalder. Together with the legal scholar Monika Simmler, she has been investigating the criminal responsibility of robots and artificial intelligence (their recent article on the topic is available here, in German only). “Most people would find it absurd to punish a robot”, says Markwalder. And at present, this would not even be legally possible. Clever machines are regarded as things, and so cannot bear criminal responsibility under Swiss law. What’s more, artificial intelligence isn’t yet advanced enough to take decisions entirely of its own accord, so it cannot be held responsible for its actions.

“But if that becomes the case in future, it would be entirely reasonable to hold a robot criminally responsible”, says Markwalder. Not necessarily with the goal of getting the machine to mend its ways. In her opinion, punishment should rather aim to stabilise currently valid norms. That means demonstrating to society that no one is allowed to kill and go unpunished, not even a robot.

Markwalder and Simmler have also given thought to the type of punishment that could be imposed. It would have to be something that hurts the machine. “Of course, these are not the same things as with people”, says Markwalder. Instead of sending the robot to prison, for example, you could restrict its computing capacity. Or send it to the scrap heap as a kind of death sentence. “But that’s all still in the realm of science fiction”, admits Markwalder. Nevertheless, she thinks it’s important that we engage in good time with difficulties that could arise sooner or later.

Whodunnit?

Sabine Gless, a professor of criminal law in Basel, believes that there are more pressing issues to deal with today. As part of the National Research Programme “Big Data” (NRP 75), she is researching data privacy and self-driving cars. She deals with current problems posed by autonomous vehicles and industrial robots, including data protection and liability issues. It’s clear today that if an autonomous system has an accident, it’s not the machine but always people who are held accountable: the programmer who wrote faulty code, the manufacturer who committed a production error, or perhaps the user who failed to operate the machine correctly. A court of law has to decide in each individual case whether or not someone was at fault. But all this becomes more difficult as the systems grow more complex. “Even when you look carefully, in some cases it’s already impossible to find where the fault lies”, says Gless. You might not be able to identify any guilty party at all, which means no one can be punished.

Monika Simmler is of the same opinion. In order to find out how our society would deal with these issues, she’s currently carrying out a study at the University of St. Gallen. Test subjects without any specialist legal knowledge have to decide who is guilty in several example cases: the person, the autonomous system, or both. One example is that of a train that derails after the driver has switched on the autopilot. The machine’s degree of autonomy is divided into five levels, from minor support to complete control, at which point a human being can no longer intervene. Simmler expects that the more the system assumes control, the less her test subjects will want to punish the human being.

The debate about e-people

Susanne Beck is a professor of criminal law at the University of Hanover. She has no problem with the idea that accidents with autonomous machines might result in no one being punished. The situation is similar with other technologies, such as road traffic or nuclear power stations: these are regarded as so useful that we accept the potential dangers they bring with them. Instead of contemplating how to punish robots, we should first engage in a societal discourse about whether and where we want to employ artificial intelligence. “If we decide to use it, then we have to live with the possibility that something can go wrong”.

But this doesn’t mean that an injured party has to go away empty-handed. For humans to receive compensation after an accident with a robot, civil liability has to be clearly regulated, says Beck. This is why she and other researchers have been discussing whether robots should get their own legal status: that of so-called electronic people, or ‘e-people’. The European Parliament has already debated this. An e-person would be comparable to a legal person, such as a company. It’s not yet clear just what this construct would look like. One possibility is that the manufacturers, owners and operators of a robot would all be compelled to deposit a sum of money that would be forfeited if it caused damage. But robotics researchers have written an open letter to the EU to express their opposition. They believe that creating e-people would be premature given the current state of artificial intelligence, whose capabilities they say are overestimated. But they probably also fear that such a legal provision could put a brake on innovation.

Claudia Hoffmann is a freelance journalist and works for WSL in Davos.