Friday 27 November 2015

SMARTSUpdate: We, the Disposables, and DARPA


Okay, I confess, I was bored the other day. What to do?

Peruse DARPA's website, I said to myself, that's always good for a shock to the head. 

Someone revised it since the last time I looked. It's much more approachable now (yet no more open). There are at least 200 projects running or planned. I scroll and scroll and scroll. One that catches my eye is about teaching computers to understand and communicate using human languages. NSA and all the other agencies in Five Eyes would be thrilled to have something like that, an artificial intelligence able to understand all the cell conversations they've been hauling in. Think of the savings: they could get rid of all those annoying young analysts who take up space and insist on getting paid and sometimes walk out of contractors' offices with the Crown Jewels.

Another project concerns a system for remote sensing across the Arctic, a system that would vastly reduce the need for human surveillance.

Wait a minute, I say to my screen, most of the Arctic belongs to Canada, you Americans only have Alaska.  Canadian Inuit in the ranger program stand on guard for us, so what the hell is this?  I am relieved to see that one member of the DARPA team involved in this project is a man who grew up just down the street from me, a Canadian (though he is now a learned professor at an American university). With Jeffrey on the case, I surely don't have to notify my MP, the new Canadian Minister of Indigenous Affairs, about DARPA's plan for smart machines on our turf, in our Arctic waters? Or do I?

I keep scrolling. Another project pops up that stops my breath. It's called the ICARUS program.

DARPA, like other military institutions, displays a grim determination to spew forth acronyms that no one can figure out and that serve some obscure PR purpose. ICARUS, I read, stands for "Inbound Controlled Air-Releasable Unrecoverable Systems." The acronym is meant to bring to mind the story of Icarus, the Greek boy who flew on wings made from feathers and wax that melted when he got too close to the sun. Icarus's adventure ends badly in the ocean.

See what I mean about obscure PR? This seems like a very odd choice for a project name: normally, one would want a name that suggests an upside in return for the expenditure of public funds. Something tells me that DARPA's version of the Icarus story is a tad skewed, possibly because they stripped it down for use in their narrative software project (see previous blog post). Here's what I found in the online encyclopaedia of Greek Mythology.

" Son of Daedalus who dared to fly too near the sun on wings of feathers and wax. Daedalus had been imprisoned by King Minos of Crete within the walls of his own invention, the Labyrinth. But the great craftsman's genius would not suffer captivity. He made two pairs of wings by adhering feathers to a wooden frame with wax. Giving one pair to his son, he cautioned him that flying too near the sun would cause the wax to melt. But Icarus became ecstatic with the ability to fly and forgot his father's warning. The feathers came loose and Icarus plunged to his death in the sea."

Icarus.

Why would DARPA invoke doomed Icarus instead of clever Daedalus? Because the point of the Icarus story is that the wings disappear. What DARPA wants to build is a small, low-cost, disposable "aircraft".

"The objective of the ICARUS program," DARPA explains, "is to create a vanishing platform for the airborne delivery of small payloads. Supply and re-supply of small military and civilian teams in difficult-to-access territory currently requires the use of large, parachute-based delivery systems that must be packed out after receipt of the payload both for operational security and environmental concerns...."

Last month, DARPA proposers were invited to a meeting on ICARUS, a meeting closed to the media. (The media of course visit the DARPA website and so a number of stories about disappearing drones soon made their appearance in the usual locations.) The parameters of the project were laid out as follows: DARPA asked interested parties to design, prototype and demonstrate "autonomous, air delivery vehicles capable of gentle...delivery [of] a three lbs. payload with 10 m accuracy to a GPS-programmed location. Upon payload delivery the vehicle must rapidly physically vanish, i.e., the vehicle's rapid physical disappearance, or transience, is part of its mission specific."

Well wow, I said to myself. Talk about planned obsolescence. What sort of material exists out there that can be fashioned into a plane that will carry a load while making its way without human aid to a particular location, and then just disappear? Doesn't that sound more like a genie than a technology? And yet, DARPA explained, the ephemeral part of the problem has already been solved in a previous DARPA program with an equally silly acronym, VAPR, which stands for Vanishing Programmable Resources. VAPR "developed self-destructing electronic components" and other kinds of vanishing materials such as "structurally sound transient materials....[including] ephemeral... polymer panels that sublimate directly from a solid phase to a gas phase, and electronics-bearing glass strips with high-stress inner anatomies."

These glass strips, when hit with the right radio frequency, shatter into a lot of tiny wee particles of silicon. IBM has been issued a DARPA contract to make the electronics under glass.

Why, you ask, does DARPA want these things?

" A goal of the VAPR program is electronics made of materials that can be made to vanish if they get left behind after battle, to prevent their retrieval by adversaries."

Of course the products designed in the ICARUS and VAPR programs will have many other uses beyond military applications.  Amazon would love to have a cheap little drone that could make its way autonomously to your doorstep, deposit your book, and then disappear instead of having to return to base.  Detroit would be thrilled if it could make cars disappear on command or merely stop working by means of a radio broadcast: that would really drive sales, wouldn't it? The security agencies would appreciate autonomous flying machines that can deliver something unpleasant to an unwitting enemy and leave no trace of origin. And never mind the security agencies, consider the Mob. They already use drones to drop goodies like drugs and cellphones into prison compounds.  Disposable drones would be so much better. And by the way, if you can make a disposable autonomous drone from a polymer, why not a gun brought to you by a 3D printer, that can shoot, kill, and then disappear?

Which brings me to disposable humans. There is a certain underlying aim to these DARPA programs which reaches its apotheosis in one more project that caught my eye. They all involve eliminating the need for human effort, in fact, for humans. The Big Mechanism project takes this to a whole new level. It was begun last year, so if you heard about it then you've probably forgotten about it by now, which is part of the problem it might well solve.

Here's what it's about.

The idea is that science, that grand edifice of new knowledge, produces way too many discoveries for anyone to keep track of. This is why specialties evolved, yet nowadays specialist scientists cannot manage the information flow in their own fields, let alone discoveries made in other areas that might be helpful. So DARPA thought: what if we make a machine intelligence that keeps track of all the data sets out there in a certain area, a machine smart enough to read everything in the literature and to note and cross-collate the important information in all published papers so as to discern patterns of meaning, which will lead to experiments, which will lead to important discoveries? DARPA's team leader in this area, Dr. Paul Cohen, a computer scientist with a special interest in intelligent systems, believes that the causes of disease will be found in this way -- the big mechanisms, not just the relationships or associations we derive now from big data.

The Big Mechanism project will pay specific attention to everything published with regard to a certain gene group that might play a role in -- ta dah! -- cancer.

The point here is to machine the practice of science, to push beyond what mere humans can achieve. If you think I'm exaggerating, consider this ad for post-docs to help with this work, posted by the University of Chicago's Knowledge Labs (found on Google). The job is described as: "post docs will work with a cross-disciplinary team of computer scientists, linguists, biologists, applied mathematicians and social scientists to build an automated system that extracts and integrates information from the literature on cancer biology, electronic medical records, and experimental data; models the evidence and assigns confidence to each cancer-related claim; then builds algorithms to reason over that data and propose new hypotheses; which will subsequently be sent to automated 'robotic' experiments."
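To make the shape of that loop concrete, here is a minimal sketch in Python of the kind of pipeline the job ad describes: extract claims from papers, pool the evidence behind each claim, and flag well-supported but untested links as candidate hypotheses. Everything in it -- the claim format, the noisy-OR scoring rule, the gene names -- is my own toy illustration of the idea, not DARPA's or Knowledge Labs' actual system.

```python
# A toy "Big Mechanism"-style loop (illustrative only; not DARPA's system).
# Each paper is assumed to yield (subject, relation, object) claims; we pool
# repeated claims into a confidence score, then chain well-supported claims
# into new hypotheses to hand off to a (hypothetical) robotic lab.

from collections import defaultdict
from itertools import product

# Step 1: "extracted" claims. In a real system these would come from machine
# reading of thousands of papers, not a hand-typed list.
claims = [
    ("gene_A", "activates", "gene_B", 0.9),   # (subject, relation, object, extraction confidence)
    ("gene_A", "activates", "gene_B", 0.7),
    ("gene_B", "inhibits",  "tumor_growth", 0.6),
    ("gene_C", "activates", "gene_B", 0.4),
]

# Step 2: integrate -- group repeated claims and assign an overall confidence.
evidence = defaultdict(list)
for subj, rel, obj, conf in claims:
    evidence[(subj, rel, obj)].append(conf)

def pooled_confidence(confidences):
    """Crude 'noisy-OR' pooling: independent pieces of evidence reinforce each other."""
    p = 1.0
    for c in confidences:
        p *= (1.0 - c)
    return 1.0 - p

model = {claim: pooled_confidence(cs) for claim, cs in evidence.items()}

# Step 3: reason -- chain well-supported claims into candidate hypotheses
# that no paper has stated directly (e.g. gene_A -> tumor_growth).
threshold = 0.5
supported = [c for c, p in model.items() if p >= threshold]
stated = {(s, o) for s, _, o, _ in claims}

hypotheses = []
for (s1, r1, o1), (s2, r2, o2) in product(supported, repeat=2):
    if o1 == s2 and (s1, o2) not in stated:
        hypotheses.append((s1, o2, model[(s1, r1, o1)] * model[(s2, r2, o2)]))

# Step 4: propose experiments, ranked by how confident the chained evidence is.
for subj, obj, score in sorted(hypotheses, key=lambda h: -h[2]):
    print(f"Propose experiment: does {subj} affect {obj}? (prior confidence {score:.2f})")
```

A real system would replace the hand-typed claims with machine reading and the noisy-OR with proper statistical models, but the extract-integrate-reason-propose loop is the same shape.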

Last week I described how the professions are under pressure from onrushing developments in artificial intelligence. Someone challenged me about that: what about original thought, one person asked in a comment on Google+.  He meant that in his view it was unlikely that a machine intelligence could ever be original, could ever replace that spark of human insight that has led us from scratching obscure marks on cave walls in southern Africa to all that we can do today.

You can't get more original, more insightful, than Darwin or Einstein.

This Big Mechanism system is aimed at doing what they did, better.

If it succeeds, humans in science will also become disposable, in favor of machines.

This will not be an unintended consequence.

Does anyone in the US government ask DARPA to show why such developments are in the public interest?

Most of us have no idea how far and how fast this is moving until we start looking.

Saturday 21 November 2015

Egg Head News: Water on Mars?

Be sure to check out Egg Head News on YouTube. This new comedy puppet news series will cover a variety of worldwide news stories, with new episodes every week. Check out episodes 5 and 6 below -- these episodes go together, as they cover the same story (Water on Mars?) in a two-part series:

DISCLAIMER: The following videos were not made by me.


Friday 20 November 2015

SMARTS Update: When They Praise the 'Smart' Future, Follow the Money



Last Sunday morning, I was puttering around in the kitchen with the radio droning in the background. Jim Brown of CBC's The 180, produced out of Calgary, began talking to a British man, Daniel Susskind, who is promoting a book he has co-written with Richard Susskind (his father, according to The Times of Israel) about the reinvention of the professions by smart machines. It's called The Future of the Professions, published by Oxford University Press, priced at £18.99.

Brown opened the segment by quoting an unnamed US study which says 45% of all the jobs humans do now could be done as well or better by machines using current technology.

Anything to do with machine intelligence catches my attention. My new book SMARTS traces out how, for millennia, we described ourselves as the only intelligent beings on the planet; then realized that all of nature is intelligent; and now mimic all sorts of different forms of intelligence in all kinds of smart machines. The two main thinkers behind this SMARTS revolution are Charles Darwin and Alan Turing. More on Turing later.

Brown described Susskind as a Lecturer at Balliol College, Oxford, one of the world's finest educational institutions. When academics are introduced in the media or in law courts as experts, we assume that their positions mean that their work and their opinions are independent of corporate interests, especially when their employer is as prestigious as Oxford. Oddly, Susskind's specific expertise was not mentioned. (It's economics.) Brown plunged instead into the book's substance. Apparently the Susskinds argue that most of the professions will be utterly transformed by smart machines and algorithms and we should not fear this, no, we should welcome it. Why? The professions exist to solve problems, not the other way around. While smart machines that learn on their own will solve these problems differently than humans do, they will certainly do it at least as well (and definitely cheaper).

Brown said but hey, won't that mean that radio interviewers and even university Lecturers may soon be out on the street? Susskind responded with a certain upper-class tone that silently screamed 'you don't even know what questions to ask,' though his words went in another direction. This won't happen so fast, he assured Brown. At first, lawyers and doctors will just do high-level things as machines take over the grunt work. Human professionals will remake themselves as "knowledge engineers" whose expertise will be mined as the machines get up to speed. This has already been done with the accounting software that most people use nowadays to file taxes.

Neither Brown nor Susskind mentioned that machines that learn will not need knowledge engineers for long; they'll become their own knowledge engineers in short order. This is when I started to yell at the radio. Ask him about that, Jim, I said. But Jim couldn't hear me.

The real issue, Susskind continued, is that doctors and lawyers and accountants should not be motivated by a desire to earn their livings solving people's problems, but rather by the desire to solve the problems.

Should not is not the same as will not, I yelled, thinking of my Dad who threw himself into the famous Medicare strike in Saskatchewan to maintain doctors' rights to earn fees for services and hold onto their independence.

If there are better ways to solve problems, ways involving robots and algorithms, why not use them, Susskind continued.

Brown did not respond with hey wait, sometimes the way you solve a problem creates new ones that are worse, like the way we solved the problem of Saddam Hussein. Instead he moved to the next issue: what of the emotional support professionals give clients? Wouldn't you want a human doctor to tell you you're dying instead of a machine? Won't machines fail at empathy?

Susskind dismissed this concern on the grounds that machines can identify real human pain better than humans can. Then he turned his own argument around 180 degrees and asserted that the choice is rarely between the best and the lesser, but between the lesser and nothing at all. As an example, Susskind pointed to the billions of dollars invested by Japanese conglomerates to build smart robots to take care of Japan's huge population of infirm elderly. Susskind finished off with a quote from Voltaire, that Perfection is the enemy of the Good. [According to Wikipedia, Voltaire was quoting an Italian proverb.] In other words, it's better to get the job done badly than not to care for these old people at all.

Hah, I said to the radio. Another famous thinker, Bertrand Russell, showed that from a false premise one can prove any conclusion. (It's the 'the Pope and I are one' proof; see it here.) The false premise in this argument is that there are no alternative caregivers to be found. But the Japanese wouldn't have a caregiver problem if they allowed foreigners from poorer countries to immigrate, people who would be happy to do these jobs. The unnamed problem here is that the Japanese dislike of foreigners is so profound that machines are preferred.

Now why wouldn't he mention that? I asked my husband. He ignored me. He was buried in the New York Times' coverage of the terrorist killings in Paris.

That's when I realized that Susskind had not answered Brown's first question: what has happened, or will happen, to those pushed from their jobs by the onslaught of the smart future? What will result when the young professionals who owe a fortune in student loans have to line up at the Food Bank?

I was still stewing on this the next day when "the economy of plenty" was mentioned in the comments section of my previous blog post. A technical writer named Ben Williams wrote that my post had implied it. I thought this idea had vanished with the long-defunct Canadian Social Credit Party, which used to argue that the Canadian economy would be just dandy if the government printed more money.

What are you talking about, I wrote back.

His response boiled down to this: in the coming age of smart machines, it will not matter that machines write stories and make art. This will not prevent any humans from making these things too. There will just be more interesting works out there to enjoy and that will be terrific.

A nice premise, unfortunately false. Most economists will be happy to explain that scarcity of something in demand will drive up its value, while ubiquity will reduce it. If a commodity isn't actually scarce, some people will try to make it so. (The Hunt brothers famously tried and failed to corner the market on silver, which for a time drove up its price beyond all reason.) The same holds true for services. This is why self-regulating professions try to control the number of entrants to their fields. Journalists are not professionals, but we used to be both scarce and necessary, and so we could earn a reasonable wage as our publishers made money by selling advertising and subscriptions. With the rise of free platforms, free distribution systems like the Internet, and Wi-Fi-enabled phones that upload and download text and video, and the consequent rise of citizen journalists, the economic value of professional journalism began to collapse. This is what lawyers and doctors can expect when smart machines become available at low cost to perform the services they provide for significant fees. Just last week, IBM took out an eight-page ad in the New York Times to announce the dawn of this new 'cognitive' age and the availability of IBM machines with the software to manipulate Big Data, which will out-think humans in almost every area of endeavor.

The people who will make money from this economy of plenty will be giant international firms that can afford to get rid of human staff and buy machines instead. Those who will gain the most are the owners of the corporations bringing smart machines to market first, companies like IBM, Google, Apple, Microsoft. Historically, the bigger the company, the harder it is to keep innovating, which is why Google, Apple, etc. have been buying novelty at a furious pace (in the form of startup companies with the good robotics patents and professors with clever artificial intelligence algorithms).  Google has an endless supply of cash with which to make such purchases. According to its last annual report, it sits like a dragon on a pile totaling more than $60 billion.

They are trying to buy control of the future, to corner the market on artificial intelligence and the smart robots they plan to sell to us. Selling involves experts telling you how great something is. Selling involves the publication of arguments in favor of one particular future, one that has to be made to appear inevitable as well as desirable.

Hey, I said to my husband: you don't suppose that either Susskind is connected to companies with interests in this stuff?

Jim Brown had not asked that question on air.



And so I got down to it. First I went to Richard Susskind's home page. It carries bios of both authors as well as information about their book. These two are, to put it mildly, connected. Daniel has two degrees in economics from Balliol College, Oxford. He has worked in the Prime Minister's Strategy Unit and in the Policy Unit of 10 Downing Street, as well as for the Cabinet Office. Richard, the father, has a deep and abiding interest in machining the law. He too is an Oxford product, with a doctorate in law and computers from Balliol College. He has been an IT adviser to the Lord Chief Justice of England and Wales, written many articles and books, is an Honorary Bencher of Gray's Inn, a Fellow of the Computer Society, the Royal Society of Edinburgh, etc. And oh yes, he serves as Chairman of the Advisory Board of the Oxford Internet Institute (more on OII below).

His bio also records that "although Richard is self-employed and works independently, he does not claim to be a dispassionate analyst or to be free of commercial interests."

Ah, I say to self. 

The next page reveals that Richard works in the commercial as well as the academic world, that he consults to major law firms "and to in-house legal departments," that he has been "an active member of an advisory board of Lyceum Capital, a private equity firm that is committed to investing in the legal profession; and he was chairman of the advisory board of Integreon, a legal and business process outsourcing business." Richard apparently discloses conflicts whenever they arise.

Integreon is part of a huge conglomerate (which controls 20% of the value of the Manila stock exchange) owned by the Ayala family, who have helped run the Philippines, with its extreme divisions between rich and poor, since the 19th Century. Integreon offers machined office processes, including legal processes, worldwide. Lyceum makes no mention of Susskind but I take his word for it that he has advised them.

Oxford itself is knee-deep in funding from a major party involved in the development of artificial intelligence products -- Google. In 2014, Google purchased Deep Mind, a UK startup which had published some startling artificial intelligence results in Nature, for $400 million. Deep Mind's work is similar to that of Chris Eliasmith at the University of Waterloo. It is trying to capture the nature of general human intelligence with algorithms based on how neurons work. Peter Thiel and Elon Musk were among Deep Mind's earlier investors. In exchange for a million-pound-plus donation to the Computer Sciences Department and Engineering Sciences Department at Oxford, Google Deep Mind gets the benefit of working with some of its leading professors. Engineers at Oxford involved with Google Deep Mind are world leaders in computer vision. The company also purchased Dark Blue Labs, a start-up by Oxford computer scientists working on natural language (so that computers and people speaking English or French or Chinese might better understand each other). To the displeasure of the principals at Google Deep Mind, this year Musk warned publicly about the alarming speed with which artificial intelligence is advancing. Musk says he bought into Deep Mind to keep an eye on it.

And what of the Oxford Internet Institute, whose advisory board is chaired by Richard Susskind? It is a graduate school, set up in 2001 to study the social implications of the Internet. No doubt with advice from Susskind Senior, it has broadened its reach to include many things related to the use of Big Data, such as the analysis of issue-based conversations on Twitter and "novel crowd sourced intelligence services." There is also interest at OII in teaching computers to understand human languages, statistical modelling and inference, machine learning, etc. It is also one of the founding partners of a brand new entity called the Alan Turing Institute.

The Alan Turing Institute is the British government's attempt to locate itself firmly at Center Ice of Big Data. Other founding partners include four other British universities: Cambridge (Alan Turing's alma mater), Edinburgh, Warwick and University College London. According to its website, the Turing Institute "will attract the best data scientists and mathematicians from the UK and across the globe to break new boundaries in how we use big data in a fast moving competitive world." Half of the Institute's start-up funds -- 42 million pounds -- comes from the British government's Engineering and Physical Sciences Research Council. The other half comes from its partners, including the universities, Lloyd's Register, GCHQ, and other businesses. Governments in various countries have put up most of the risk money for the artificial intelligence industry. Now, to speed things up, they are embroiling formerly independent scientists and their employers -- the universities -- in the generation of smart ideas that will generate market applications. Why? Governments argue that this will lead to more and better jobs in the future, which in this case -- the advance of intelligent machine research -- gives irony a bad name.

They mainly do it to keep ahead of enemies and frenemies.

GCHQ is the UK's equivalent of the NSA, a leading member of the Five Eyes Consortium, which has been shown by documents purloined by Edward Snowden to have secretly gained access to personal, private and foreign government communications worldwide. Turing basically invented computing when he worked for GCHQ (then called the Government Code and Cypher School) at Bletchley Park to break the Nazi Enigma codes. In those days, the School was under the direction of MI6. (Please see SMARTS chapter 8 for this story.)

The Alan Turing Institute is physically located at the British Library, where Karl Marx was once a Reader dreaming of a post-capitalist economy of plenty. There is nothing post-capitalist about the Alan Turing Institute. The new director is Professor Andrew Blake, formerly Microsoft Distinguished Scientist and Laboratory Director of Microsoft Research UK. Last month, when the UK's Minister of Science opened the Institute, he announced that Intel has also agreed to partner.

Next week the Alan Turing Institute will hold a workshop on creating an ethical landscape for the study of Big Data.

Good luck on that.

Friday 13 November 2015

SMARTS Update: Writers Beware, the Machines will Supplant Us



In the West, for millennia, intelligence was assumed to be something displayed by humans alone, a gift from God who made us in his image. Rene Descartes asserted that intelligence and the capacity to reason are properties of our immortal, immaterial, God-given souls, nothing whatever to do with our bodies, which corrupt and die. As Western science began its slow development from methodology to ideology, these notions were simply incorporated into it. For one thing, in many places in the 18th Century it wasn't safe to question them in public. Thinkers who differed had to worry about being tried for blasphemy. David Hume, the diplomat/philosopher/father of psychology who led the Scottish Enlightenment, didn't buy any of it, and said so, but very carefully. His last book, debunking religion, had to wait until he was safely dead before it could be published, and even then his best friend, Adam Smith, wanted no part of it. It fell to Darwin, the greatest of all ground-breakers, to begin a systematic investigation of intelligence in living things other than humans, everything from worms to twining plants. Yet the idea of intelligence as a property particular to humans alone refused to die. Darwin's work in this area was shoved to the back shelves of science for many, many years.

SMARTS details how modern scientists finally broke out of this religious straitjacket. Breaking free led eventually to software and chips designed by philosopher-engineers to manipulate symbols, cope with representations, and use logic as we do. It also led to tiny robots that can make decisions and solve problems without a specific program. It will lead to warfighter robots that are smarter and faster and have access to much more information than we do. First came the recognition that intelligence exists in animals other than humans, specifically our closest relatives, the chimpanzees. Next came the understanding that intelligent behavior is exhibited by every living system, and so is as variable as the individual life forms shaped by four billion years of evolution. Regretfully, science had to set aside the lovely idea of immortal souls. Intelligence is not a God-given property of humans alone; it is a phenomenon that emerges as bodies struggle to live in all kinds of difficult environments. Artificial intelligence is already emerging in the machines we build to cope with dangerous environments or to do jobs more cheaply than humans can or will.

Very few of these developments are reflected in current literature. There is a deep chasm that divides those of us who read and write stories from those of us who practice science. To judge by the jury's award of the 2015 Giller Prize to Andre Alexis for his novel Fifteen Dogs, the literary world is just not familiar with about 80 years of scientific and philosophical exploration of intelligence. (Yes, philosophical. The mathematical logic that underlies all of our computer systems is a product of philosophical as well as mathematical imaginations.) We who read literature think of it as a means to travel to a zone of enlightenment: we think it carries us into the hearts and minds and experiences of others, that it enables us to learn about tragedy and terror and joy and triumph without ever leaving our armchairs. One writer, John Gardner, even had a character say that only in literature can we find the answer to every problem, an example of every facet of important experience. So how can it be that this huge revolution overtaking us now, the embodiment of new forms of intelligence in machines, the expansion of intelligence and consciousness into the things we build, is so dimly reflected in literature? It is dealt with in movies. Why not in literature?

The Giller jury citation says this of Alexis' book: "What does it mean to be alive? To think, to feel, to love and to envy? André Alexis explores all of this and more in the extraordinary Fifteen Dogs, an insightful and philosophical meditation on the nature of consciousness. It’s a novel filled with balancing acts: humour juxtaposed with savagery, solitude with the desperate need to be part of a pack, perceptive prose interspersed with playful poetry. A wonderful and original piece of writing that challenges the reader to examine their own existence and recall the age old question, what’s the meaning of life?"

While I was researching SMARTS, people often asked: aren't you really talking about consciousness when you say you're interested in intelligence? Aren't humans the only conscious creatures? No and no. Consciousness is awareness of self, but awareness can have many different levels, from high alert to vegetative state. Consciousness exists even in brainless plants, which lose consciousness just as any animal will do when under the influence of molecules that suppress certain kinds of electro-chemical activity.

Yet Alexis, when interviewed, sometimes used the words intelligence and consciousness interchangeably. The premise of the book includes the notion that language is something unique to humans which shapes us entirely, that the gifts of language and intelligence/consciousness that humans enjoy/endure will make dogs very unhappy. Or not. There's a bet involved between two Greek Gods. Alexis tells us that these fifteen dogs transformed by the Gods became a useful vehicle for him to explore human nature.

This is an exploration that would have been more meaningful about 100 years ago.

Alexis tells us that he became interested in dogs when, for several months, he took care of eleven belonging to a friend. He tried to inveigle himself into their pack by joining them as they howled at night, and felt he was accepted and then felt he was intruding. He obviously understands that dogs, like people, employ sounds and gestures to communicate with each other, and dogs are well able to understand basic human words, though possibly not human syntax. We know now that language is but one means of communication that may or may not be specific to humans, and that humans are well able to communicate without it. We don't yet know if different life forms have languages of their own because we have just begun to investigate that question. For years, scientists tried instead to teach obviously intelligent animals like dolphins, chimpanzees, bonobos, and gorillas to use human languages (such as sign languages and even computer languages invented for this purpose) as if only human languages could convey meaning. When plants communicate with each other and with the animals that they use to reproduce (like us) or to protect themselves from predators, they make particular molecules to convey particular information (come hither, or hey, get off me!). Some fish use electrical signals. Octopuses use physical gestures and incredible control of various pigments in their skin to camouflage themselves, to project anger, to deter predators, even though they're color-blind. Birds use tones and songs to tell competitors to take a hike. Language, in other words, is just one method to convey information, but there are many more, most of which we humans miss altogether (such as the voltage communication system used by tiny Amazonian fish, or the sonar systems used for exploration and communication by dolphins and whales).

So language or lack thereof says something but not everything about intelligence and consciousness. 

One can be conscious without any sensory perceptions at all, according to Christof Koch, who has worked on this problem for years, along with Francis Crick (co-discoverer, with James D. Watson, of the molecular structure of DNA). And that brings me to conscious machines. If machines can reason, and machines can sense and move around in the world because we give them sensors and actuators, machines will have consciousness of some sort and may only lose it when we pull the plug or take out the battery. Some machines may have it already.

You will not find this terrain explored well in current literature. With a few exceptions (Faber, Atwood, Gibson's early work) it appears only in genres on the margins of literature -- in science fiction or fantasy. Genre stories are generally poorly imagined and badly written, with speculation driving the plot and wooden posts pretending to be characters. Yet genre fiction has fired the imaginations of the scientists and technologists and entrepreneurs currently creating intelligent machines, folks like the founders of Google and the designers at Apple. Since the turn of the last century, literature has mainly confined itself instead to describing the interior drives and desires of human protagonists coping with their personal experiences. You know the themes. The Marriage Plot. Love and longing. From misery to triumph and back again. The personal terrors wrought by war, evil States, parents from Hell, psychopaths. The literary world has shrunk to I and Me.

I loved 19th and early 20th Century Russian novels because their characters grappled with  philosophy, geography, biology, physics, history, slavery, religions, morality, social science, war. You could read Tolstoy and Turgenev and Dostoevsky and learn what their contemporaries thought about everything. Their characters were fully imagined human beings tortured by the onslaught of both technological and ideological revolutions just as we are: these writers exhibited a passion to understand the new things coming at them even as they coped with the social strictures bursting all around them as the State they lived in unraveled. But after the Russian Revolution, literature in the Soviet Union disappeared into Samizdat, and in the West began a process of narrowing itself to simulations of human consciousness, often in harrowing circumstances. Oddly, in the same period, psychologists began to argue that the brain is too complex to study, that the mind should be treated as a black box and only behavior examined. Ethologists and comparative psychologists argued the same about animal behavior: it should be treated as a product of hardwired reflexes, not flexible like human thinking.

And maybe that's where science and literary storytelling began to split, a rupture that continues to this day. Or at least it continued until DARPA got interested in how narratives might be reduced to algorithms which can be used to whomp up great propaganda in record time. DARPA funded projects which produced software, now sold by companies, that can write news stories without much human input. Now, too, scholars at the Max Planck Institute in Leipzig and at McMaster University in Hamilton are studying the neuronal patterns of brains making art in various formats, from music to dance to storytelling, patterns which will no doubt be reproducible as algorithms useful in intelligent machines.

In other words, one day, not too far in the future, machines will generate stories, perhaps stories that will examine machine consciousness, machine dreams. They will most definitely write stories about us.

Writers, take note. Science is already invading our turf. If we don't write about the way the world turns soon, the machine children begotten by poorly imagined science fiction will do it for us. Literature will find new Subjects whether we like it or not.

Friday 6 November 2015

SMARTS Update: The Moral Imperative for Killer Robots


I hate to say it, but journalists who work in what the right-wingers call the lamestream media tend to be about ten years behind the leading edge of anything. Editors wait until something gets published in a responsible journal, say Nature or Science, before even thinking about putting it on the front page or on television news. Even then, if it requires a lot of complex explanation, forget about it. Mainstream communicators live by a media version of the ethical doctrine of utilitarianism: the greatest story is the one that hauls in the biggest audience. Journalists take great pride in providing information that will affect many lives. Their publishers are focused on hooking lots of readers too -- in order to entice advertisers. Not surprisingly, stories that don't already have a proven high level of public interest will get a pass from an editor trying to build an audience.

It's like a high school dance: some stories are always popular, others languish on the margin until something goes wrong in a big way. And that's why one day you will be hearing lots of mea culpas from our governors about how they were just too slow to grasp the dangers of smart machines until it was too late.

Our governors are drowning in information like everybody else, and mainly respond to public pressure. This is the wonder and the joy of a functioning democracy. It is terrific to watch a new government sweep into power promising to address all the issues they were buttonholed about during a campaign. But it takes time to build that kind of public pressure. It often begins with news stories that fuel moral outrage. For example: when Canadian Thalidomide victims were plain sick of trying to get the Conservative government to help them deal with their infirmities -- suffered as the direct result of the failure of governments to demand proof of safety of a bad drug -- they turned to the media. There was a strong moral tone to the stories published about their plight. They highlighted the government's lack of fairness and its failure to empathize; they showcased the human suffering its policies or lack of them had generated. We all share the belief that democratic governments have a moral duty to protect the weak. The Thalidomide story showed that the government had in effect been taking advantage of Thalidomide victims by paying them as little as it could get away with. That brought a great number of eyes to the front page. The government, thinking of the election to come, was forced to respond.

The original reporting on Thalidomide and the 1972 campaign for compensation led by the Sunday Times under the direction of Harold Evans is one of the great examples of how public service journalism can exert moral pressure. The drug was developed in the late 1950s by a German company previously known for making soaps. It was offered over the counter as a sleep aid, but also to control nausea in pregnant women. In Germany, Thalidomide became the second-highest-selling drug after aspirin. A few years after it was introduced, physicians in Germany and Australia reported a possible connection between Thalidomide and the births of deformed children, and also reported nerve damage in those who used it as a sleep aid. They were ignored -- until they couldn't be any more. The company had lied about tests done to ensure Thalidomide could not harm a foetus: it hadn't done any. Before it was withdrawn, the drug was on the market long enough to result in the births of thousands of deformed children. (It never made it onto the US market due to the stubborn insistence on evidence of safety by an FDA scientist, a Canadian. Later it was discovered to be useful in the treatment of leprosy and for some forms of cancer.) There was a lawsuit for compensation in the UK. In 1969, there was a criminal prosecution in Germany of the responsible company executives. As Evans recounted recently in The Guardian, thanks to determined reporting and the truncated trial, we learned that Thalidomide had been developed by people who had previously demonstrated a stunning lack of empathy. Some of the founders of the company were Nazis: they hired as leading scientists some who had helped develop Sarin gas during WWII, helped IG Farben develop the compound used to gas Jews, and conducted cruel and deadly medical experiments on people in labor camps. As Evans explains, due to political interference only now coming to light, the criminal trial was halted in 1970 with the result that only the victims were punished. German Thalidomide victims were forced to accept an unfair and insufficient settlement.

Why do I tell you this story? To remind you that Thalidomide was first a boon before it became a scandal that affected thousands. Governments could have stopped it from being sold in the first place if they had demanded proof of safety, or stopped it from being sold after the first reports of problems were published. Instead, governments made things worse by stalling on compensation for victims, prolonging their agonies for many years.

Thalidomide will seem like a small hiccup when compared to the civilization-wide scandal of the unregulated rise of autonomous smart machines.

Peter Singer raised the subject of autonomous military robots in Wired for War in 2009 because he could see that drones were being produced and weaponized without much in the way of a public debate about what this meant for the future of war. But no one like Harold Evans has followed up with stories that would pressure governments to manage this development, not even when Google bought up a large number of robot companies and talked publicly about back-engineering a human brain. Instead we got and get stories in the press that essentially praise the cleverness of the unfolding technology but don't ask whether it's going to be safe. There were plenty of mainstream news stories -- with video clips -- about how Google's 330-pound humanoid robot (developed by Google-owned Boston Dynamics) can now run unaided through a forest. They tell us that Google is aiming at a humanoid robot that is dynamic, unpredictable in its movements, faster, more agile, and stronger than humans. When that humanoid robot is outfitted with Google's back-engineered version of a human brain, Google will have the makings of a very scary robot warfighter. But where are the stories about regulators and governors inquiring as to whether they should ever be deployed? My new book SMARTS explains the history of ideas behind these developments and introduces some of the people who are bringing autonomous machines into your life. (See: Geoffrey Hinton, Chris Eliasmith, Ray Kurzweil.) But a book can only start a small conversation. It takes a national newspaper or a television network to get the public to pay attention and to light fires under the bums seated at cabinet tables.

Those talking to our governors about autonomous machines now are mainly lobbyists for large companies with an interest in shaping the conversation their way. They talk about how innovation will create the jobs of the future (not about how autonomous machines will slash jobs in the future). They hope to hold off unduly restrictive regulatory frameworks and to get major grants to push their work forward. They tell the public two kinds of stories about autonomous machines. First, that our lives will be much better when, for example, children in hospital are entertained by little autonomous machines they will quickly learn to love. See how cute they are? How dangerous could that be? The second story line is darker. It goes like this: if we don't develop smart military machines, our troops will be overwhelmed by other countries' faster, better, stronger, smarter robots. Military research agencies like DARPA have been pushing the development of autonomous drones and humanoid war-fighters for many years so as not to be surprised by an enemy's autonomous war machines.

The arguments being developed now in favor of autonomous war machines are definitely leading edge, so you won't see many of them reported in the mainstream press, at least not yet. Roboethics is a brand new term, invented in 2002 by an expert at a robotics school in Genoa, Italy. Roboethicists argue that the development of autonomous robots has pushed the study of ethics beyond the confines of dusty old philosophy departments into the spanking clean and oh-so-modern robotics and computer science labs. Their papers provide clues as to how war robots will be sold to us in future. The main argument is that they will be much more ethical than humans, especially on the battlefield, because they will not be swayed by emotions like vengeance, envy, rage. At the recent International Conference on Robot Ethics (ICRE 2015), held in Lisbon last month, about 100 attendees heard an address from Ronald Craig Arkin, an American roboticist and roboethicist at the Georgia Institute of Technology. He argues that robot warfighters are a necessity because they will be more honorable than humans. As he has put it: “A warfighting robot can eventually exceed human performance with respect to international humanitarian law adherence, that then equates to a saving of noncombatant lives, and thus is a humanitarian effort. Indeed if this is achievable, there may even exist a moral imperative for its use.”

Don't you love it? We will be morally required to build autonomous killer robots.

You heard it here, first.