Morals and the machine

That’s the title of an article in The Economist about machines and morality. The sub-heading reads:

As robots grow more autonomous, society needs to develop rules to manage them

Seems pretty reasonable, except for the use of the term ‘rules’, which I’ll get to in a minute. After framing the problem, the author proposes the following agenda:

First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident…

Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people….

Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.

Starting with the last point: like any legislation, this is something that concerns all of society, and the author rightly notes that it’s a very cross-disciplinary matter, so experts from different fields will be needed.

The first point is also important, but it makes an assumption that dodges a central problem. The assumption is that responsibility always lies outside of the “robot”, and that it’s a matter of deciding which of the “creators” is to blame and how. But will there be a moment when moral agency is transferred away from the creators to the robot? In the short term the answer is clearly no, outside of science fiction. But if we’re speaking of increased autonomy…

Lastly, I find the most interesting and difficult point to be the second one. It’s one of those cases where a very deep problem can be stated in a deceptively small number of words.

…the judgments they make need to be ones that seem right to most people…

In fact, the trouble comes from one word, ‘right’, which carries a huge amount of hidden complexity. We intuitively know what is meant here, but just try formalizing it into something explicit that can be programmed. Good luck. That’s why I remarked on the use of the word ‘rules’ above: I’m very skeptical that ‘right’ can be translated into programmable rules.

Rather, it seems more likely that any sophisticated morality a robot follows will include an element of learning, similar[1] to how children learn what’s right and what’s wrong. A problem with this approach, noted in the Wikipedia article on machine ethics, is that learned representations (i.e. how the knowledge is encoded in the robot) are sometimes hard or impossible to understand, inspect, debug and correct.
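
To make the contrast concrete, here is a toy sketch (entirely my own, not from the article): an explicit hand-written rule sits next to a tiny model that learns a similar policy from made-up examples of human judgments. The scenario, feature names, numbers and training setup are all invented for illustration, and even in this trivial case the learned policy ends up as a vector of weights rather than a readable rule.

import random

random.seed(0)

# Rule-based approach: the "morality" is an explicit, readable rule.
def rule_based_ok(harm_risk, benefit):
    # Act only if the expected benefit outweighs the risk of harm.
    return benefit > harm_risk

# Learning-based approach: a one-layer perceptron trained on examples.
# Each example (harm_risk, benefit) gets a label standing in for a human
# judgment: 1 = "seemed right to most people", 0 = it didn't.
examples = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if benefit > harm else 0 for harm, benefit in examples]

w = [0.0, 0.0]   # learned weights
bias = 0.0
lr = 0.1
for _ in range(50):   # a few passes of perceptron updates
    for (harm, benefit), label in zip(examples, labels):
        pred = 1 if w[0] * harm + w[1] * benefit + bias > 0 else 0
        error = label - pred
        w[0] += lr * error * harm
        w[1] += lr * error * benefit
        bias += lr * error

def learned_ok(harm_risk, benefit):
    return w[0] * harm_risk + w[1] * benefit + bias > 0

print("rule:    ", rule_based_ok(0.3, 0.7))
print("learned: ", learned_ok(0.3, 0.7))
print("learned 'knowledge':", w, bias)   # just numbers, not an inspectable rule

The point isn’t that this is how it would actually be done; it’s that even when the learning works, what the robot “knows” lives in numbers you have to probe experimentally, which is exactly the inspect-and-debug problem mentioned above, only vastly worse at realistic scale.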

But it’s not impossible, and it would be pretty ironic if formalizing human morality and the meaning of ‘right’ finally came about by looking into the “brain” of something that isn’t human!


[1] There is a danger in anthropomorphizing artificial intelligences and making hidden, unwarranted assumptions that may not apply to machines. But my point here is about learning versus programming, not the specifics.