

Yes, says Johan Rochel.

It makes me jump every time I hear that ethics must be integrated into technological tools. Ethics are already there, thanks to the teams that build them: during the design, manufacturing and communication phases, those teams carry out a multitude of assessments, weighing up interests and strategic choices. Each of these is a ‘crossroads’ where we make ethical decisions, that is, decisions made on the basis of goals and values.

The challenge, therefore, isn’t to bring ethics in from the outside, but to spell out the impressive number of ethical choices already made. Among them, the desired behaviour of a robot is the centrepiece. A robot’s ability to respect certain rules is a prerequisite for its use in sectors where it comes into contact with humans. Bringing this about is a three-part challenge. First comes the choice of the general ethical approach, e.g. between an ethics of consequences and an ethics of duties. From there, the rules and methods for decision-making must be defined in a way that can achieve consensus. The final part of the challenge is ensuring the robot is technically capable of abiding by those rules.
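Purely as an illustration of that first choice, here is a minimal sketch in Python contrasting a duty-based check with a consequence-based score. Every rule, action and number in it is a hypothetical placeholder, not something drawn from actual robotics practice.

```python
# Illustrative sketch: the same candidate action assessed under an ethics of
# duties (a hard rule filter) and an ethics of consequences (outcome scoring).
# All rules, actions and values are hypothetical placeholders.

FORBIDDEN_ACTIONS = {"target_civilian", "ignore_surrender"}  # hypothetical hard constraints

def permitted_by_duties(action: str) -> bool:
    """Ethics of duties: an action is ruled out if it violates any hard constraint."""
    return action not in FORBIDDEN_ACTIONS

def expected_value(action: str, outcomes: dict) -> float:
    """Ethics of consequences: an action is judged by the value of its expected outcomes."""
    return outcomes.get(action, float("-inf"))

candidate = "hold_position"
outcomes = {"hold_position": 0.7, "advance": 0.2}  # hypothetical expected utilities

if permitted_by_duties(candidate):
    print(f"{candidate} passes the duty check; expected value {expected_value(candidate, outcomes)}")
else:
    print(f"{candidate} is ruled out, whatever its expected value")
```

Under a duty-based approach the hard check comes first and is non-negotiable; under a purely consequence-based approach the score alone would decide. Which of the two, or which mixture, a team adopts is precisely the kind of ethical choice described above.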

“Ethics are already there, thanks to them”.

Take the example of robots used in conflict zones: at an absolute minimum, they must respect the principles of the law of war. But consensus on paper is no guarantee of technical feasibility. We must be sure a robot identifies the relevant consequences, evaluates them and then acts accordingly. These issues are ethical challenges – value choices, for example – with a technical component, and we constantly want to extend what is technically possible. They must not be approached in terms of ‘true’ or ‘false’ categories, but with the ambition of making a robot’s behaviour explainable and predictable, all on the basis of justifiable and documented rule choices. This calls for strengthened collaboration between experts in robotics and in ethics.
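As a hedged sketch of what ‘explainable and predictable on the basis of documented rule choices’ could mean in code, the following Python fragment returns not just a decision but also the documented rule that settled it. The rule identifiers and predicates are invented stand-ins, loosely inspired by the law-of-war principles mentioned above.

```python
# Illustrative sketch only: every decision is traceable to a documented rule.
# Rule identifiers, predicates and the example situation are invented.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    allowed: bool
    rule: str        # which documented rule determined the outcome
    rationale: str   # human-readable explanation, kept for audit

RULES = [
    # (rule id, condition the situation must satisfy, explanation if it fails)
    ("R1-distinction", lambda s: not s["civilians_present"],
     "no engagement when civilians are present"),
    ("R2-proportionality", lambda s: s["expected_harm"] <= s["military_value"],
     "expected harm must not exceed military value"),
]

def decide(action: str, situation: dict) -> Decision:
    """Check the documented rules in order and report which one settled the case."""
    for rule_id, condition, explanation in RULES:
        if not condition(situation):
            return Decision(action, False, rule_id, explanation)
    return Decision(action, True, "all-rules-passed", "no documented rule was violated")

print(decide("engage", {"civilians_present": True, "expected_harm": 3, "military_value": 5}))
```

The point is only that the justification travels with the decision, which is what makes behaviour auditable; whether such rules can be stated and checked reliably in the real world is exactly the joint technical and ethical challenge described in the paragraph above.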

Johan Rochel is a postdoc at EPFL, author of the book Les robots parmi nous – pour une éthique des machines (The robots amongst us – for an ethics of machines) and co-director of the ethics and innovation laboratory ethix.

No, says Manuela Lenzen.

“A robot may not injure a human being or, through inaction, allow a human being to come to harm”. In his stories, the science-fiction author Isaac Asimov explored whether laws like this would suffice to get robots to behave as they should. But time and again he came to the conclusion that morals are too complex to be summed up in rules.  

But couldn’t algorithms use big data to learn morality, and robots with them, just as they learn to classify images and answer questions? To be sure, some algorithms have by now learned that it’s OK to kill time, for example, but not to kill humans. Yet like all systems that learn from huge amounts of data, they don’t really understand what it’s all about. After all, our words and sentences only make real sense when they are taken with a hefty dose of common sense.

“Artificial morality is likely to multiply confusion in our world. In a worst-case scenario, it might seduce us into using robots in areas where they could wreak havoc”.

It’s a bit like the legendary King Midas, who wanted everything he touched to turn to gold. Of course, he didn’t mean for his food to turn to gold as well! Taking someone at their word was presumably just a way for the ancient gods to have a bit of fun. With algorithms, however, it is a fundamental problem: they won’t necessarily solve the tasks we set them in the way we intended. Rigorous human supervision and intensive re-training can keep them on track, but even this is only a weak substitute for a real moral compass.

The morals that algorithms could learn would resemble the output of large language models: it might sound good, but it is often banal and sometimes completely off the mark. Robots of this kind will only serve to deceive us even more about what they really are: mere technological tools that do not understand the world as we do. More than anything else, artificial morality is likely to multiply confusion in our world. In a worst-case scenario, it might seduce us into using these systems in areas where they are out of their depth and could sooner or later wreak havoc.

Manuela Lenzen is a science journalist and works at the Center for Interdisciplinary Research at the University of Bielefeld in Germany. She is the author of the book Künstliche Intelligenz. Fakten, Chancen, Risiken (Artificial intelligence. Facts, opportunities, risks).