Friday 30 October 2015

SMARTS Update: On Moral Machines



My mother, who now calls herself the old lady, did many good things in her life or she couldn't have made it to 98. She may be hauling an oxygen bottle around in her wheelchair, but she still dresses well and wins at bridge, which, as far as she's concerned, is the best revenge. In the course of her life she has acquired a hoard of aphorisms. Those who are embroiled in building intelligent war-fighter robots would call them social heuristics, which is a highfalutin term for moral rules of thumb. Most of my mother's aphorisms deliver a hidden moral message wrapped up in something much more practical. For example, when one of us can't find something she says, "Take your eyes in your hands and look." It means you can find it if you try, but also that it's not fair to ask someone else to do what you can do for yourself.


Robots that work alongside humans, especially robots made for war, are going to have to understand human morals and human politics, and be moral and political themselves. We're social animals, so the robots that serve us will have to go along to get along.

Moral machines, you say? Robot Prime Ministers?

Oh yes. The European Commission is funding a study of human behaviors so that they may be better predicted and reduced to algorithms that robots can deal with. The US Office of Naval Research is funding a big study on morals for robots. Morals are tools for solving social problems on the fly. Politicians must be very careful to stay on the right side of public morals, and yet public morals change with circumstance. Canadians in 1935 would have thought it mad and utterly immoral to allow gay people to marry. Canadians in 1940 insisted that the government must shut the doors to Jewish refugees fleeing Hitler. Canadians now think it's immoral to discriminate on the grounds of sexual orientation or religious belief, and they just threw out a government that made an issue of religious garb and did not move fast enough to bring in refugees fleeing war in Syria.

Our changing morals will shape our robots, and robot morals will change us too.



Researching my new book SMARTS brought me face to face with autonomous robots for the first time. Being introduced to machines without brains that can make decisions they were never specifically programmed to make changed everything I thought I knew about what intelligence is made of. Read SMARTS if you want to know more. But here's the Twitter version: human intelligence is just one facet of the vast array of smarts to be found in Nature. Human capacities are shaped by the bodies we inhabit and the crises we face. The human brain, like all the others out there, is an electrochemical organic machine for solving problems. But you don't need a brain to be smart. Plants are smart, sociable, yet brainless. Microbes are smart, sociable, yet brainless. It is possible to make smart machines out of silicon that mimic how human brains work (see Chris Eliasmith's brilliant work). It is also possible to make intelligent machines that mimic the smarts of plants (see the work of Stefano Mancuso's group in Italy) or slime molds (see the work done at the University of the West of England). Here's the crucial point: the chief value of smart machines is that they learn, as we do, from their experiences and can put that learning to work. When they work for us and with us, they'll have to learn from us how to adapt to our ever-shifting realities. They'll have to learn to deal with the great gap between what we say we do and what we actually do.
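
What "learning from experience" means can be seen in a few lines of code. The sketch below is my own toy (plain Python, invented actions and payoffs, not Eliasmith's models or anything else named above): a machine that is never told which of two actions is better simply tries them, keeps track of how they turn out, and shifts its behavior accordingly.

    import random

    # A toy learner (not any system named in this post): it keeps a running
    # estimate of how well each of two actions has worked and gradually comes
    # to prefer the better one. No rule saying "choose B" is ever programmed in.

    estimates = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}

    def reward(action):
        # The world as the machine experiences it: action B pays off more often.
        return 1.0 if random.random() < (0.3 if action == "A" else 0.8) else 0.0

    for step in range(1000):
        # Mostly exploit what experience suggests; occasionally explore.
        if random.random() < 0.1:
            action = random.choice(["A", "B"])
        else:
            action = max(estimates, key=estimates.get)
        r = reward(action)
        counts[action] += 1
        estimates[action] += (r - estimates[action]) / counts[action]  # running average

    print(estimates)  # B's estimate ends up higher: learned from experience, not programmed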

Some major thinkers have lately signed on to a petition arguing that we need to rethink the development of artificial intelligence before we unleash any more autonomous machines, especially on the battlefield. You've heard of the Singularity: that's the point at which all the machine smarts in the world will add up to more than the combined smarts of all the humans. The fear is that one day non-organic machines will be able to replicate themselves and evolve better versions, which will decide they can do very nicely without us. At that point, humans will be rendered extinct. The concept of the Singularity is interesting, but it doesn't go to the heart of the real problem of intelligent machines. We are building robots to help us with our problems, to live side by side with us, even to fight for us and to kill our enemies. How will we keep ever smarter robots on an ever tighter leash? The real problem is that intelligence is adaptive behavior. If the situation changes, so do the problems and the solutions. Any intelligent machine has to be flexible by definition. In other words, robots are going to need free will.

Morals are all we have to limit the choices we can make. We learn these limits from our parents, grandparents, siblings, teachers and friends, and in general from the societies we live in, mainly through literature, plays, movies and songs -- from the arts, in other words. In spite of constant reinforcement, moral constraints don't work all the time. Ask those who work in the justice system. Ask First Nations women set upon by police who are supposed to be their protectors. Ask the millions fleeing Syria to get away from the barrel bombs dropped on them by their own government and from the religious killers known as IS. It seems obvious to me that we can't have it both ways: we can't create intelligent machines and also keep them on a tight leash. Every human society has tried and failed to keep humans on a tight leash. Why would we succeed with robots?

Besides, who will teach robots moral behavior? And what moral behavior will be taught? At first, the teachers will be the humans that robots work with, but then, as the Internet of Things connects every smart thing to every other smart thing, the robots will teach each other, and us.



As SMARTS makes clear, politics are common to just about every living community, from plants to birds to microbes. Psychologists such as Frans B.M. de Waal have established that moral and political behavior -- a sense of justice, the need for fairness, the need to make common cause -- can be seen in our primate cousins. Morality can be described as a set of heuristics for life's crises. Shared rules permit individuals to live in groups in which conflicts concerning needs and desires always arise. Thou shalt not kill. Thou shalt not covet thy neighbor's wife. Thou shalt not steal. Thou shalt not have any other gods before me. These rules are common to most human communities, along with lesser rules such as love your neighbor as yourself and don't do to others what you would not have done to you (not to mention much lesser but often more violently imposed rules about dress and deportment). We don't look up these rules in a law school library in order to figure out what to do when faced with a situation where a moral choice must be made. We carry rubrics in our heads and hope they fit the circumstance. Politics help us solve group problems that are too big for any individual to deal with alone. Moral values underlie political solutions.
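
To see how thin such rules become once you try to mechanize them, here is a minimal sketch, in Python, of morality treated as a lookup table of heuristics. The rule names and situations are my own invention, not drawn from any real project; the point is only that real situations rarely match the rubric cleanly.

    # A toy illustration (my invention, not any real system): moral rules as a
    # lookup table of heuristics that either match a situation or leave the
    # agent with nothing to go on.

    RULES = [
        ("thou shalt not kill",   lambda s: "harms_person" in s),
        ("thou shalt not steal",  lambda s: "takes_property" in s),
        ("love your neighbor",    lambda s: "neighbor_in_need" in s),
    ]

    def judge(situation):
        # Return every rule the situation triggers; an empty list means the
        # rubric in our heads has nothing to say and we improvise.
        return [name for name, applies in RULES if applies(situation)]

    # A clean case: one heuristic fires.
    print(judge({"takes_property"}))                      # ['thou shalt not steal']

    # A messier case: taking food for a starving neighbor triggers two rules
    # at once, and nothing in the table says which one wins.
    print(judge({"takes_property", "neighbor_in_need"}))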

Robots already work with and learn from humans on the factory floor, and as last week's blog makes clear, cute robot helpers will be coming soon to a nursing home near you, where they will work with your parents to help them cope with their frail bodies, their frail minds, and their need for support and affection. To function at all in our social contexts, intelligent robots will need to be able to understand human politics and morality as expressed by gesture, facial expression, and multi-layered, seemingly contradictory speech.

Nowhere will this be more difficult than on the battlefield.

An autonomous drone loaded with bombs and sent out to find and kill an enemy will find itself perplexed by moral choices. If killing humans is allowed in some circumstances, Asimov's laws of robotics will not suffice to guide it. How will robots distinguish human enemy from friend, good human from bad? How will we instruct the autonomous "war fighter" robots being designed now in the skunkworks funded by DARPA and the US Office of Naval Research to be both moralists (thou shalt not kill) and killers (drop that bomb on my enemy)? The rules of war say soldiers must not shoot or bomb civilians, or kill children, or kill the unarmed, or kill those who have surrendered. How will "war fighter" robots cope with armed children, bombs strapped to women, people who look like civilians but who are actually combatants in disguise? It took billions of years of evolution to generate the human ability to live with the sustained moral hypocrisy known as war. Nothing in the world of war lends itself to simple heuristics.
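
The rules themselves are easy to write down; it is their collisions that defy encoding. Here is a minimal sketch, with invented rules and target descriptions of my own (nothing here comes from an actual DARPA or ONR design), of how a rule-following "war fighter" deadlocks the moment its target is both a designated enemy and a protected person.

    # A toy conflict (invented fields and rules, not anyone's real architecture):
    # a strike drone whose directives collide as soon as the target is ambiguous.

    def may_engage(target):
        prohibitions = []
        if target["human"]:
            prohibitions.append("thou shalt not kill")
        if target["child"] or target["civilian"] or target["surrendering"]:
            prohibitions.append("laws of war: protected person")
        order = "engage designated enemy" if target["designated_enemy"] else None

        if order and not prohibitions:
            return "engage"
        if order and prohibitions:
            # The interesting case: every available answer violates something.
            return "conflict: '{}' vs {}".format(order, prohibitions)
        return "hold fire"

    # An armed child who has been designated an enemy combatant:
    print(may_engage({"human": True, "civilian": False, "child": True,
                      "surrendering": False, "designated_enemy": True}))

Every extra branch a programmer writes to break that tie is itself a moral judgment, made in advance, far from the battlefield.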

Turns out the Office of Naval Research is worried about this. It has created something called the Machine Learning, Reasoning and Intelligence Program. This office has given a significant grant to Chris Eliasmith's group, through his colleagues at Stanford, to sustain their efforts to build a machine intelligence that can handle human kinds of inferences, recognize patterns, and interpret and cast scenarios. The program has also made grants to build computational models of human behavior and decision making, as well as "multi-modal, multi-participant, Human-Agent dialogic systems for seamless interactions that are natural to humans." The ONR understands that these agents (cyber or robot) have to be able to understand our decision making to be able to work with us.

The ONR has another grant-making program that is restricted to universities doing things that might have both military and commercial application. Last year it gave a big, multi-year grant to a group of universities concerned with "Moral Competence in Computational Architectures for Robots." The universities include Tufts, Brown, Rensselaer Polytechnic Institute, Georgetown and Yale. The purpose of the grant is "to identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges."

In about four years, the scholars involved hope to have made a good start on constructing thinking machines that will be moral actors, able to find their way through the swamps of complex situational ethics. Reading the work of the philosophers in the group is disheartening. They seem to believe that morals and ethics can be reduced to a logic that captures moral principles, and that this logic can then be reduced to algorithms to run different kinds of intelligent machines -- from the ones that do surgery on our hearts, to the ones that care for our aging parents, to the ones that pick up guns and go to war.

Good luck with that.

It may be that logic will not be much help when it comes to pairing moral robots imbued with principles with human war fighters imbued with a determination to survive. Humans have an uncanny ability to set principle aside when they believe their lives are threatened. Perhaps the ONR should look instead to work done at DARPA on narrative software. We learn our morals from complex narratives: from fairy tales, short stories, novels, movies, plays. For a general primer on how humans behave, I also suggest that the researchers reread Chaucer's Canterbury Tales and try to reduce that to software.

Just pray that nobody introduces smart robots to Machiavelli's The Prince.
