Friday 6 November 2015

SMARTS Update: The Moral Imperative for Killer Robots


I hate to say it, but journalists who work in what the right wingers call the lamestream media tend to be about ten years behind the leading edge of anything. Editors wait until something gets published in a responsible journal, say Nature or Science, before even thinking about putting it on the front page or on television news. Even then, if it requires a lot of complex explanation, forget about it. Mainstream communicators live by a media version of the ethical doctrine of utilitarianism: the greatest story is the one that hauls in the biggest audience. Journalists take great pride in providing information that will affect many lives. Their publishers are focused on hooking lots of readers too -- in order to entice advertisers. Not surprisingly, stories that don't already have a proven high level of public interest will be passed over by an editor trying to build an audience.

It's like a high school dance: some stories are always popular, while others languish on the margins until something goes wrong in a big way. And that's why one day you will be hearing lots of mea culpas from our governors about how they were just too slow to grasp the dangers of smart machines before it was too late.

Our governors are drowning in information like everybody else, and mainly respond to public pressure. This is the wonder and the joy of a functioning democracy. It is terrific to watch a new government sweep into power promising to address all the issues they were buttonholed about during a campaign. But it takes time to build that kind of public pressure. It often begins with news stories that fuel moral outrage. For example: when Canadian Thalidomide victims were plain sick of trying to get the Conservative government to help them deal with their infirmities -- suffered as the direct result of the failure of governments to demand proof of safety of a bad drug -- they turned to the media. There was a strong moral tone to the stories published about their plight. They highlighted the government's lack of fairness and its failure to empathize, and they showcased the human suffering its policies, or lack of them, had generated. We all share the belief that democratic governments have a moral duty to protect the weak. The Thalidomide story showed that the government had in effect been taking advantage of Thalidomide victims by paying them as little as it could get away with. That brought a great number of eyes to the front page. The government, thinking of the election to come, was forced to respond.

The original reporting on Thalidomide and the 1972 campaign for compensation led by the Sunday Times under the direction of Harold Evans is one of the great examples of how public service journalism can exert moral pressure. The drug was developed in the late 1950s by a German company previously known for making soaps. It was offered over the counter as a sleep aid, but also to control nausea in pregnant women. In Germany, Thalidomide became the second highest selling drug after aspirin. A few years after it was introduced, physicians in Germany and Australia reported a possible connection between Thalidomide and the births of deformed children, and also reported nerve damage in those who used it as a sleep aid. They were ignored -- until they couldn't be any more. The company had lied about tests done to ensure Thalidomide could not harm a foetus: it hadn't done any. Before it was withdrawn, the drug was on the market long enough to result in the births of thousands of deformed children. (It never made it onto the US market due to the stubborn insistence on evidence of safety by an FDA scientist, a Canadian. Later it was discovered to be useful in the treatment of leprosy and for some forms of cancer.) There was a lawsuit for compensation in the UK. In 1969, there was a criminal prosecution in Germany of the responsible company executives. As Evans recounted recently in The Guardian, thanks to determined reporting and the truncated trial, we learned that Thalidomide had been developed by people who had previously demonstrated a stunning lack of empathy. Some of the founders of the company were Nazis: they hired as leading scientists some who had helped develop Sarin gas during WWII, helped IG Farben develop the compound used to gas Jews, and conducted cruel and deadly medical experiments on people in labor camps. As Evans explains, due to political interference only now coming to light, the criminal trial was halted in 1970, with the result that only the victims were punished. German Thalidomide victims were forced to accept an unfair and insufficient settlement.

Why do I tell you this story? To remind you that Thalidomide was first a boon before it became a scandal that affected thousands. Governments could have stopped it from being sold in the first place if they had demanded proof of safety, or pulled it from the market after the first reports of problems were published. Instead, governments made things worse by stalling on compensation for victims, prolonging their agonies for many years.

Thalidomide will seem like a small hiccup when compared to the civilization-wide scandal of the unregulated rise of autonomous smart machines.

Peter Singer raised the subject of autonomous military robots in Wired for War in 2009 because he could see that drones were being produced and weaponized without much in the way of a public debate about what this meant for the future of war. But no one like Harold Evans has followed up with stories that would pressure governments to manage this development, not even when Google bought up a large number of robot companies and talked publicly about reverse engineering a human brain. Instead we got, and get, stories in the press that essentially praise the cleverness of the unfolding technology but don't ask whether it's going to be safe. There were plenty of mainstream news stories -- with video clips -- about how Google's 330 pound humanoid robot (developed by Google-owned Boston Dynamics) can now run unaided through a forest. They tell us that Google is aiming at a humanoid robot that is dynamic, unpredictable in its movements, faster, more agile, and stronger than humans. When that humanoid robot is outfitted with Google's reverse engineered version of a human brain, Google will have the makings of a very scary robot warfighter. But where are the stories about regulators and governors inquiring as to whether such machines should ever be deployed? My new book SMARTS explains the history of ideas behind these developments and introduces some of the people who are bringing autonomous machines into your life. (See: Geoffrey Hinton, Chris Eliasmith, Ray Kurzweil.) But a book can only start a small conversation. It takes a national newspaper or a television network to get the public to pay attention and to light fires under the bums seated at cabinet tables.

Those talking to our governors about autonomous machines now are mainly lobbyists for large companies with an interest in shaping the conversation their way. They talk about how innovation will create the jobs of the future (not about how autonomous machines will slash jobs in the future). They hope to hold off unduly restrictive regulatory frameworks and to get major grants to push their work forward. They tell the public two kinds of stories about autonomous machines. First, that our lives will be much better when, for example, children in hospital are entertained by little autonomous machines they will quickly learn to love. See how cute they are? How dangerous could that be? The second story line is darker. It goes like this: if we don't develop smart military machines, our troops will be overwhelmed by other countries' faster, better, stronger, smarter robots. Military research agencies like DARPA have been pushing the development of autonomous drones and humanoid war-fighters for many years so as not to be surprised by an enemy's autonomous war machines.

The arguments being developed now in favor of autonomous war machines are definitely leading edge, so you won't see many of them reported in the mainstream press, at least not yet. Roboethics is a brand new term, coined in 2002 by an expert at a robotics school in Genoa, Italy. Roboethicists argue that the development of autonomous robots has pushed the study of ethics beyond the confines of dusty old philosophy departments into the spanking clean and oh-so-modern robotics and computer science labs. Their papers provide clues as to how war robots will be sold to us in future. The main argument is that they will be much more ethical than humans, especially on the battlefield, because they will not be swayed by emotions like vengeance, envy, and rage. At the recent International Conference on Robot Ethics (ICRE 2015), held in Lisbon last month, about 100 attendees heard an address from Ronald Craig Arkin, an American roboticist and roboethicist at the Georgia Institute of Technology. He argues that robot warfighters are a necessity because they will be more honorable than humans. As he has put it: “A warfighting robot can eventually exceed human performance with respect to international humanitarian law adherence, that then equates to a saving of noncombatant lives, and thus is a humanitarian effort. Indeed if this is achievable, there may even exist a moral imperative for its use.”

Don't you love it? We will be morally required to build autonomous killer robots.

You heard it here first.
