Retrofitting the First Law of Robotics

This is a transcription of a speech given by Eben Moglen at the 2012 Hackers on Planet Earth (HOPE) conference in New York City on July 15, 2012.

Thank you, it’s a great pleasure to be here. I apologize for the preview in Forbes online and the resulting Slashdot conversation about whether I understand the First Law of Robotics or not, which was very entertaining to me, but let me try to start from scratch while still making it as interesting as possible.

The Free Software Movement — which is now 30 years old if you start from Richard’s original mulling over his concerns about unfree operating systems in the MIT AI Lab — the free software movement is reaching its moment of junction with the great river of the internet freedom movement, which is going to be the dominant political and technological movement of our time, in which all our boats are finally going to reach the sea. But in the process of becoming, first, the free software movement, and then the great river of the internet freedom movement, we are, as always, standing on the shoulders of giants.

Many of the giants who affected the thinking of those of us who began worrying about the freedom of software decades ago, many of the giants on whose shoulders we stood, were authors of science fiction. They were the great visionaries of the post-Second World War imaginative literature, which coped with the problem of the runaway technology that had transformed the world. You will recall that after the first terrible use of nuclear weapons in the world, Albert Einstein said, “We have changed everything except the way men think.” And the culture of the post-war western world — and the culture of the post-war eastern world too, if you read, as we did, science fiction from both sides of the iron curtain — the culture of the post-war world was very heavily affected by the attempt to understand the implications of technology, as imaginative authors, including Ray Bradbury, who recently left us, and many others, tried to cast in new, idealized forms the moral and ethical problems of technology out of control. And the literature that they wrote deeply affected me when I was young and growing up, and Richard, and many, many others.

I want to go back to that now, because I believe it’s time for us to acknowledge yet again how much they foresaw, those writers of the post-war world, how much they helped us to foresee, and how little we have helped ourselves to avoid it. One of the staples of that science fiction of the 1960s that we read so avidly growing up was that by now, the middle of the first quarter of the 21st century, it was assumed that human beings would be living in a society commensally with robots. And many, many people tried to imagine the nature of that kind of commensal coexistence between us and the robots, the androids, we had built. Everybody understood that there were enormous ethical and moral dilemmas implicit in our living with robots, as there would also be enormous changes in the texture and fabric of ordinary human life from day to day. And the two elements, the nature of human life as lived in the company of robots, and the nature of the ethical and moral dilemmas implied by the attempt to do so, were fertile ground for some of the very greatest fiction that was written in that time, not least of which was Isaac Asimov’s attempt to understand how we would confront the problem of runaway technology in life with robots, which produced, as many here will recall as warmly as I do, all of the stories and the novels built around the US Robotics Corporation and its positronic brain creations.

There, of course, from the beginning, the assumption was that robots would be humanoid. And as it turns out, they’re not. We do, after all, live commensally with robots now, just as they expected. But the robots we live with don’t have hands and feet, they don’t carry trays of drinks, and they don’t push the vacuum cleaner. At the edge condition, they are the vacuum cleaner. But most of the time, we’re their hands and feet. We embody them. We carry them around with us. They see everything we see, they hear everything we hear, they’re constantly aware of our location, position, velocity, and intention. They mediate our searches, which is to say they know our plans, they consider our dreams, they understand our lives. They even take our questions — like “how do I send flowers to my girlfriend” — transmit them to a great big database in California, and return us answers offered by the helpful wizard behind the curtain.

Who, of course, is keeping track. These are our robots, and we have everything we ever expected to have from them, except the first law of robotics. You remember how that went, right? Deep in the design of the positronic intelligence that made the robot were the laws that governed the ethical boundary between what could and could not be done with androids. The first law, the first law, the one from which everything else had to be deduced, was that no robot may ever injure a human being. Robots must take orders from their human owners, except where those orders involve harming a human being. That was assumed to be the principle at the root, down by the NAND gates of the artificial neurophysiology of robot brains, down there where the simplest idea is. You remember: for Descartes, it was “cogito ergo sum”; for the robot, it was “no robot must ever harm a human being.” We are living commensally with robots, but we have no first law of robotics in them. They hurt human beings every day. Everywhere.

Those injuries range from the trivial to the fatal to the cosmic. Of course, they’re helping people to charge you more. That’s trivial, right? They’re letting other people know when you need everything from a hamburger to a sexual interaction to a house mortgage, and of course the people on the other end are the repeat players, whose calculations about just how much you need whatever it is, and just how much you’ll pay for it, are being built by the data mining of all the data about everybody that everybody is collecting through the robots.

But it isn’t just that you’re paying more. Some people in the world are being arrested, tortured, or killed because they’ve been informed on by their robots. Two days ago the New York Times printed a little story about the idea that we ought to call them trackers that happen to make phone calls, rather than phones that happen to track us around. They were kind enough to mention the topic of today’s talk, though they didn’t mention the talk, and this morning the New York Times has an editorial lamenting the death of privacy and suggesting legislation. Here’s the cosmic harm our robots are doing us: they are destroying the human right to be alone.

They are destroying the human right to do your own thinking; they are destroying the human capacity for disappearing into ourselves. Robots are changing humanity as the literature said they would. They’re changing humanity quite deeply. And the way that they are changing humanity is not to make it more human. Instead, android quality is rubbing off on us. Which was of course always implicit in the literature when it turned dark: that we might not be able to tell the difference, after a while, between the replicants and ourselves.

So, we’ve got a problem. I’ve tried to define the problem space in this talk; I don’t propose that we can solve the problem this afternoon, but we can recognize it. As I get older and greyer and further from that boy who read that science fiction, I realize that the rest of my life is going to be about this, and probably, therefore, some part of the rest of yours, if you see it the way I do. We have to retrofit the first law of robotics into everything. This is not going to be simple. The Slashdotters who wanted me to remember that the first law of robotics was trained into the positronic brain, well, of course they were right; that was the happy, imaginative part.

You remember why that was. The assumption that Isaac Asimov made was that human beings would be afraid of robots. That they would be afraid to allow their children to be tended by robots, or to have them in their homes, and therefore, without some assurance of the complete, engineered-in-from-the-very-beginning quality of “we’ll never hurt a human being,” robots would not be adopted. That the capitalist motive of the robot maker, the US Robotics Corporation, the capitalist need of the robot maker to create a safe market in which consumers would accept robots in their homes and with their children, would require an absolute guarantee of engineered-in safety: we’ll never harm you.

Isaac Asimov was a great New Yorker; I was privileged to grow up in his city while he lived here and to bump into him every once in a while. As you know from the Foundation trilogy, he had, really, at bottom, a very gemütlich sense about all of this. Trantor was really the Grand Concourse, and good Jewish family values were really enough to save the galaxy. Unfortunately, this was the visionary part of the science fiction, and it isn’t true. It was much easier to get people to hang robots around the necks of their children than anybody ever imagined. And it didn’t require any promise that they would never, never, never harm anybody. All it required was little shiny things, made by Count Dracula, the King of the Undead.

The purpose of the undead, as you know, is to make evil beautiful. That’s what the undead do. They turn evil into something so erotically attractive, you can’t keep your hands off it, and you don’t mind having its hands on you. And he did that, the King of the Undead. He’s dead now, but they didn’t throw his boot into the Danube, and there was no silver bullet, and there was no stake through the heart, and the undead are still with us. They’re improving the screen and suchlike, until another King of the Undead who can build damned beautiful things is ready. But that’s all it took. And now we put those things around our children’s necks and we send them off to be harmed by the robot with whom they live.

We have the problem that Einstein was talking about: we have changed everything, except the way people think about this. The heuristics that humanity brings to the net are heuristics which assume that they know the direction from which danger comes at them, and they do not. So much has happened very quickly. It isn’t that we haven’t warned about it. The free software movement gave its warning all the way along, very rationalistically, scientifically, in hacker speak, however. And our great problem was always how we were going to get people who didn’t hack on things to understand the importance of being able to hack on things. It was a really tough political lift for symbol makers, to explain to people who didn’t interact with code at all why the freedom of code was their freedom at the end of the day. We knew it was, because we grew up with I, Robot. We knew it was, because we grew up understanding that humanity had many different ways of creating unethical technology, and that we were going to have to find ways to embed ethics in the technology. Mr. Stallman could not have been clearer about that. But his clarity wasn’t universally accessible by any means. It wasn’t imaginatively available to every child in the world; it was only available, may I say it, to us.

Now we have to operationalize this in a more profound way, because we aren’t merely worried about whether there will be code available for operating systems for people who work in artificial intelligence laboratories, or even for students. Now we have to worry about how to retrofit the first law of robotics into objects that are hurting people.

Mostly, mostly, we have an ethical and moral problem to describe and to set the outside limits for. We have to be able to express, to all the people with whom we interact, though they are not necessarily technical in the same ways that we are, what the ethical limits are of the technologies with which they are already familiar in ethically compromised form. And of course we have some technology work to do as well. Where the two things cross, where we are required to do technology work as well as explain to people the nature of the ethical limits of the technologies around us, we have our biggest problem, and the most immediately urgent one.

We cannot retrofit the first law of robotics into robots that have been designed to resist our modifying them. This is a fairly simple point, I understand. We tried five years ago, in GPL3, to make it with sufficient clarity that everybody who understood the implications could come along with us and do some work in helping to avoid the situation of the locked-down robot. We made very little progress, because people who are now beginning to realize that they should have supported the anti-lockdown efforts in GPL3 didn’t at the time. Maybe they still haven’t done so as powerfully as they should have. And in the meantime, a whole range of monoliths grew up around society that are very much in love with the idea of the robot you can’t retrofit.

And it does harm to people, every day. And you can’t retrofit it, because the cover’s welded shut, and it’s booby-trapped by the DMCA and lots of other things. You can go to jail for trying to retrofit freedom into a robot, under the wrong conditions: if that might mean the robot might sing a copyrighted song without permission of the composer, or show you a movie that you haven’t paid for enough times yet. So the first thing we’re going to have to do is take, with much greater seriousness, the job of building a coalition to ensure that retrofitting is possible. That it is neither legally nor technologically prohibited to make things safer.

This shouldn’t be required in a democracy. This shouldn’t even be required in capitalism. If you own a thing, it should be your right to make it safer. Don’t you think? Oh well. You see how fundamentally we’ve lost our way. So we need a few things, and we don’t need to be altogether unwilling to adopt other people’s vocabulary in order to get them.

About this, for example: it seems to me that we deserve to be as strong for owners’ rights as other people are for their entitlement to have offshore trust funds and other things. Right? We all own stuff, okay? Back off. Where it’s our stuff, we’re entitled to modify it if we want to make it safer; if we want to share safety improvements with other people so they can modify what they own too, that’s a right.

We need to be very clear that how things work is associated with the quaint concept of the ownership of the thing. If I own it, the way it works should be the way I want it to work. The Software Freedom Law Center submitted an exemption request in the Library of Congress DMCA exemption proceedings this year, urging the Library of Congress to declare that it is not prohibited circumvention of a means of access control to replace the operating system in a mobile or other computing device you own. I’m very grateful to Aaron Williamson of the SFLC for his extraordinary work in preparing and testifying on behalf of that exemption request. We’re going to back it this time as strongly as we can. We hope the Library of Congress will see the wisdom of declaring that, in this free-market country, you are free to modify devices that you’ve bought with money and that you are quaintly regarded as owning.

It shouldn’t require any more argument than that, but if it does, we have to double down and keep arguing. We have to point out that if devices are unsafe, there is a legal obligation to permit us to make them safer. If you sell an unsafe slicer to a delicatessen, or an unsafe automobile, and you attempt to prevent people from modifying those devices to make them safer, if you’re actually out there actively interfering with attempts to make them safer, then when people get hurt, you should be liable.

If we press hard enough on that point, we will scare even Count Dracula, King of the Undead, in his grave. Where he should be very frightened, because he has interfered with more attempts, by more people, to make his products safer than any other undead maker in history.

We need to establish the proposition that when people get hurt, and somebody’s responsible for that, they pay for it, if they have attempted to prevent us from preventing the harm. This is not the first law of robotics; this is the first law of being US Robotics: it’s your ass on the line.

Everybody’s got to know that, and by everybody I distinctly mean to include certain parties called Verizon and AT&T. Nowhere in the world are the network operators more aggressive about prohibiting us from increasing the safety of devices. Nowhere is there a more concentrated opposition to GPL3 than in the US network operator duopoly.

Now we know, thanks to last week’s news confirming what we already knew but what hadn’t been printed in the New York Times yet, that millions of times a year, people with a tin star are requesting the real-time location, or the contents of messages, or the nature of the traffic, between tracking devices and the networks. We know that now. That is to say, we know exactly how far down the road of suppressing civil liberties the robots are taking us.

Of course, improving your civil liberties is not necessarily regarded by other people as making you safer. So, not all the time, but often, when we insist upon improving our civil liberties by retrofitting into devices our first law — “you shall not harm the user of the device” — we’re going to be told that what we’re doing isn’t making people safer, because it makes terrorists safer too, or some such nonsense.

The truth of the matter is that products must not harm the people who buy and use them, regardless of whether those people are nice people. When a kid gets his hand injured by a delicatessen slicer, we don’t ask ourselves whether he’s a good kid or not.

We don’t even ask whether he was a little bit impaired by something when it happened. Because the manufacturer who makes an article inherently dangerous is responsible for the harm it does. If, for example, it doesn’t have two-hand switching, and somebody’s hand gets hurt, whether they were a nice guy or a bad guy, or whether they were planning to sabotage the factory on the weekend, is not a relevant concern.

The IT architecture of the next period is set, and pretty much everywhere I go in the world, everybody understands it. They recognize it. It’s called cloud to mobile. What does that mean? It means robots reporting to headquarters. Tossing your data overhead, from where they collect it to where it is stored, wherever that is. If you’re a lawyer who worries about privacy, that’s about the same as saying: first it will be at the robot, and then it will be in whatever legal system in the world gives you the least protection for it, and the most economic and commercial advantage to the guy keeping it for you.

In 2006, I gave a talk at a MySQL annual developers meeting about why it’s good to store things yourself instead of storing them in other places. But I was still in the grip of the belief that we were all going to be fine, and I spent more time talking about technologies of memory in relation to freedom than saying what I should have said, which was: “if you don’t store it yourself, it’s going to be stored by a guy taking advantage of you deeply, eradicating your privacy and making you his android.”

I probably would even have chosen the word “android,” which had nothing to do with computer software at the time. But there we are: cloud to mobile. What it means is from unsafety to unsafety, unless we do it right.

Gus is kind enough to refer to the NYU talk in 2010 about Freedom in the Cloud. I wanted then to set out some ideas about how we had gotten into that part of the mess, and how we might get out of it again. On that particular point, let me just say about FreedomBox that as of very soon now, by which I mean single-digit days, Debian will be natively supporting the plug server called the DreamPlug, and from it a variety of other plug servers, and FreedomBox will have moved into being Debian privacy, and we will be trying to deliver the best possible privacy tools to every architecture everywhere, all the time, and particularly to small, effective, power-sipping plug servers that can replace routers everywhere and make the network safer. For which work I am endlessly grateful to Bdale Garbee and James Vasile and Nick Daly, and many others who have been hacking on FreedomBox over the last 18 months.

But what you and I know is that no matter what we do to make the network safer and to make server-side improvements, let us call them, we must, at the mobile end, be capable of delivering safety, security, and privacy to people on the things they really use, the robots they really live with. It won’t do us any good to try and compete with US Robotics and Count Dracula by saying “you can buy a little beige box and plug it into a wall at home.” We have to get into the galaxy in your pocket. Or the galaxy will have no freedom in it, no matter what they do on Trantor.

So this raises questions beyond merely how we can get the code in the box, or how we’re going to define what the code is. We’re all really good at that, and I’m actually quite optimistic that we can hold up the technical end. We’ve been holding up the technical end all the way along. The free software movement has contributed a lot of freedom to the internet freedom movement it is becoming, and we’re going to continue to contribute all the way until we win. But what we have to do, beyond all the stuff we’re good at, is to do a thing human beings haven’t been good at so far: we have to be really alive to the danger, and we have to teach people that safety must be put in now, after we’ve already launched the boat.

Gus: Preach!

Well, okay, preaching is part of it. But preaching is effective where there’s adequate dogma. That is to say, where we really understand the doctrine of what we’re preaching, and so we have a little intellectual heavy lifting to do. What does it really mean to talk about hurting people? What does it really mean to talk about not hurting people, or guaranteeing non-hurt to people, in this complex environment? An environment in which, on the one hand, the robot’s cognition has to be reduced to the level of our desire: it should not be listening to me when I didn’t tell it to, it shouldn’t be informing people of my location when I haven’t said it can, and so on. But where that dialogue cannot possibly consist simply of punching “okay” or “no” on dialogue boxes every tenth of a second through a lifetime. We need to understand which services can be safely offered and which ones can’t. Or rather, how service design itself must be altered in order to produce safety for users.

Location-directed, or location-aware, or location-based services are terribly important, and terribly dangerous. And the primary problem is the real-time ascertainment of the location of human beings by those with power. It does very little good, in other words, to describe regulatory approaches to such services, because the regulatory approach will always be engineered by government to say “you shouldn’t do this unless you’re us,” or “you shouldn’t do this unless we want you to,” or “you shouldn’t do this unless there’s a court order, or other authoritative communication, telling you to start turning over real-time location data about human beings.” The very senior US government official who told me back in March, “well, we’ve learned now that we need to have a robust social graph of the United States,” was reflecting the learning of all the governments on Earth in the last 14 months, when, with great suddenness, they all discovered that what they wanted was a robust social graph of their societies. You understand, of course, that we could put this in plain English for the people around us. We could say: this means the United States government intends to keep a list of everybody every American knows. That’s not what we used to quaintly refer to as a free society. In fact, that’s what I would call a dangerous neighborhood.

And it may not be possible to prevent ourselves from living in that dangerous neighborhood, unless we learn how to exercise democracy really effectively about this, which is going to mean a lot of preaching and a lot of teaching. But in that very dangerous neighborhood, we have to understand that things that inform headquarters where we are, are serious problems.

I must admit that I find it kind of reassuring how naively confident Americans are. As I grew up to manhood and started traveling around the world, I discovered that in most, and indeed all, of the societies I went to, except my own, people didn’t really think that what they said on the telephone was private. My friends in the Soviet Union were particularly aware of this, of course. I was too, when I lived there briefly in the late 70s. What seems to me so amazing is that it is possible to sell people things and say, “you’ve got a personal assistant inside this object, and you can talk to her,” “her” of course, “in English, and say whatever you want, and we’ll take it back to some warehouse data center somewhere, and then we’ll send you an answer and tell you what to do.”

If the KGB had tried this, it would not have worked. But for Count Dracula, the King of the Undead, it was a snap. Extraordinary, and very worrisome. Because how are we going to take it away from people? Right? What are you going to say, you know? You should go back to not wanting that anymore, because in truth it’s the KGB inside your mind? Because you’re contributing everything you would ever say to anybody who was helping you to a great big database of everything, located in the world of the undead? This is not a thing you would expect to have a hard time convincing people not to do, unless they were already doing it. And there is an awful lot of effort going into making people comfortable doing it right now. Which means that we’re going to have to have strong arguments, and good technology, and really powerful moral conviction.

Now, obviously, we can explain to people why you shouldn’t leave your children in the custody of robots that haven’t been engineered never to hurt a human being. We can do that. It’s going to feel a little counterintuitive to people, but we’re going to have to say it. We’re going to have to remind people that the great imaginative literature about the King of the Undead tells us that he can’t come into our houses unless we invite him across the threshold. And we’re going to have to ask ourselves and parents everywhere, “you don’t want to invite him in, do you? Not really.” But you see, it’s all about convenience, and prettiness, and coolness, and the sexiness of technology, and we know all about the sexiness of technology, most of us. I speak only for myself, but if it weren’t for the sexiness of technology there would be little sexiness in my life. Notwithstanding which, there is a time when the evil is too beautiful, and we have to do something about it.

As far as I’m concerned, this isn’t a project. It isn’t a “this is what we need to do right now and then we’ll be done with it”; unfortunately, this is a way of life for us. Retrofitting the first law of robotics is going to take a long time, because they’re building robots every day without it, and they’re getting people more and more accustomed to the idea that you carry around a brain that isn’t yours and that thinks about you for other people; that you have cognitive faculties in your pocket all the time, and on the bed table every night, that don’t work for you; that the tracker is always there pretending to be a phone; that you’re wearing the one ring that binds us all to them. It’s really hard. We’re going to have to be very committed to this; this is the meaning of our part of the freedom movement. This is the part we’re going to have to be responsible for. Because there are billions of people on earth who are going to be trapped, and we know why, and we know how, and we haven’t quite figured out how to describe it to them in ways that will help them stay safe.

But if we don’t, it’s going to be very dark. And all that hopeful science fiction, which came from our attempt to believe that we could think our way into safety after building the bomb, will have turned out to be true enough about the bomb, but not so true about the robots we are becoming. Thank you very much. [Applause] Thank you, I would be happy to take some questions.