Software Governance and Automobiles - Session 4a
Self-Explanation and Self-Driving, by Leilani H. Gilpin.
EBEN MOGLEN: So then, here we are. This is the time in a Friday conference when you begin to want to make up all the minutes that you left behind because people are shifting from foot to foot and wondering about their airplanes and so on. But we did save the very important part for the end of the day as you see, because it’s not good enough to talk about software governance and cars. You have to talk about software governance in cars that are operated entirely by software which means the governance problem is indeed pretty severe.
I do think that the Free Software movement idea in the 20th century that if you’re going to use a program, you should be able to read the source code, was a pretty good idea which is of declining relevance in the 21st century.
Jeremiah pointed out this morning it has to be your copy of the program for the law to get anywhere, but more to the point, in the 21st century if you want to think about what is the equivalent of being able to read the source code in the program, it’s being able to ask an autonomous system what it is doing. And autonomous systems that can’t explain themselves, if I could quote myself being quoted by Jeremiah, are unsafe building material because if we don’t know how the thing works, how can you possibly be alone in the room with it, let alone a room which is hurtling down the highway 80 miles an hour? And this is Nicholas’s point. You have to have some baseline somewhere, and in setting a baseline, it would be useful if the machines could talk, as it were.
So this is the beginning point of this last crucial session on autonomous driving.
Leilani Gilpin is a Ph.D. student at CSAIL at MIT working for my very favorite MIT professor, Jerry Sussman, without whom free software would not exist. And her work after a life spent in the sunny climates where the roads are just fine– in San Diego, in Palo Alto, and places like that, after UCSD and Stanford University and the Palo Alto Research Center, which I still think of as the Xerox Park, because it was when I wanted to work there– her work now at MIT is about how to get autonomous vehicles to explain what they are doing, a crucial subject which I want to learn about, so Leilani, please.
LEILANI H. GILPIN: Great. Thank you everyone. Thank you for staying this afternoon. I know it’s that midday lull, so I appreciate you all being here.

So today I’m really going to talk about how we get machines to explain themselves, and how we get machines to tell stories of what they’re doing, both to their operators and to their internal parts. This is very important in many different areas of machinery. In fact, there are three different cases where explanations are just not good enough. If we start with the left hand side: deep neural networks, deep learning, this big AI front, really can provide no explanation at all of what they’re doing. They are completely opaque to humans, and those are the same mechanisms that are making a lot of the decisions that were previously entrusted to humans. So it’s really important that we make sure these AI algorithms are able to explain themselves, for accountability and also for software development, so that we can fix the errors when they do happen.
The second type of limited explanation we have is the explanation to the human expert. So, not to rag on Java too much, but a Java exception can sometimes tell you what happened and point to where the error occurred, but it’s not human readable, and you really need the most expert of experts to be able to understand it.
And finally, the purpose of this talk, is moving past the check engine light. So I don’t know how many people here drive, but I’m sure if you’ve encountered the check engine light it didn’t actually mean to check the engine but meant that something might be wrong, and it might be a good idea to take your car to the mechanic.
And so our goal with this research is to have machines able to provide human readable explanations, both for auditability and also for engineering. And one of the purposes of that is to figure out when a machine acts badly, when a vehicle does something that we can’t explain, who’s at fault? I’m sure everyone in this room has been frustrated with vehicles. They are unable to relate to us in a way that we would like to be related to, and they can’t communicate their internal state and thought processes.
So I want to go through a few different circumstances that we’re trying to explain in our research and talk about what happened in my personal life as I started driving.
So here, we see my car overheating in the desert in Southern California. I wasn’t sure what was happening. All I knew was that the car wasn’t acting as expected, and I figured it was really hot outside, so I opened up all the windows. That turned out not to work well, but it would have been really great if the vehicle could have told me, “Hey, I’m overheating. A better way to deal with this would be to take the car to the mechanic.”
One of my favorite errors here is when I first tried to put chains on my tires. Coming from California, again, these are rookie errors. I put the chains on the tire, and obviously they were not tight enough, so the chains came loose against the outside of the vehicle and made a terrible noise, which is the only reason I figured out what went wrong. I ended up having to cut them off with a wire cutter, which was very unfortunate for my first ski vacation.
But again, if the machine had been aware, if we had sensors that could tell us what was going on, that could sense that the chains were loose and that the weather was snowy, this could have been completely avoided.
And then my favorite error: here I am moving down to UC San Diego from my home in northern California with my mother, who was very concerned as the car started overheating on the side of Highway 5. I cut her out of the photo, because she was so concerned. All we knew was that the car was hot, and when we got out we saw a flat tire and liquid coming out of the car. What makes this case a little more interesting is that the past two errors I talked about were my own errors, things I didn’t do right, so the vehicle was not at fault. But in this particular case, what actually happened was that the antifreeze cap was loose. It caused the pressure in the cooling system to go down, very hot liquid came out of the vehicle and caused the tire to melt, and then we had to pull over. So this was actually a vehicle error, caused by the maintenance we had done just the week before.
It would have been great if we had this sort of explanatory system that could have told me when I started the car that something was wrong, instead of running into this error about halfway to San Diego and freaking out my mother while I was moving to college.
But these are errors with just vehicles without any sort of self-driving capability. Right? And this only gets worse when we start to think about self-driving vehicles. There was just the self-driving Uber crash which I’m going to be talking about in detail, and no one really knows who’s to blame. And in fact no one really knows what happened, because no one actually has the data of what happened. All we have is a video feed where we can see a few things happening but we don’t know about the different sensors or the different internal workings. And what’s more important is that this is not going to be the last error that happens.
We have to think what are these sorts of faults as Nick was saying. What exactly is a fault, and how do we get that data, and how do we get those sorts of explanations? And there have been many other accidents with Teslas and other things that are coming into view now, so it’s really important that we think about how we can use explanation to be able to figure out who’s at fault, what happened, and possibly how to fix it.
So I’m going to go through a detailed scenario of the Uber accident because this is something that we’re trying to re-simulate in my lab and then use our self-explaining software to explain different scenarios of what happened.
What’s interesting about this is that we only have a couple of facts. We know that the self-driving Uber was going north at about 40 miles per hour. Now, I’m not sure how many of you are well versed in the self-driving car literature, but for anyone who’s been in Palo Alto, I’ve never seen a self-driving car in test mode go much over 15 miles an hour. So this initially struck me as something fairly anomalous.
And the other thing that we know is we know approximately where the pedestrian crossing the street was. And we do know that there were some trees in the background, which may have caused some false negatives with sensors and different things. So we basically know two things in this sort of scenario.
And so when we start to think, OK, so we know the facts, now what went wrong? And who exactly could be at fault? There are three main cases.
One main blame in this case has been the human safety driver. If you see the accident video, which I encourage you to look up if you haven’t seen it yet, the safety driver is obviously distracted in some way, shape, or form. Some people have said that person is at fault, and some people have said that they’re not.
There have also been recent reports that have said it is the pedestrian error, that they made a very reckless decision, and there is no way that the car could have stopped in time. There was no crosswalk, and so that is a pedestrian error.
But the interesting thing that we really want to pursue in our research is was it the vehicle’s error? And if it was the vehicle error, what part was at fault and how do we fix it?
And so if we concentrate on trying to figure out whether the vehicle was at fault, there are three things that could have gone wrong.
In the first case, which is what it looks like right about now, although we haven’t tested this fully, the accident seems unavoidable. It seems there was no way to detect the pedestrian with enough time to swerve out of the way, given the constraints that we have in the scenario.
Now there’s a second case. If you assume perfect sensor information, so every sensor hit happened at the exact right moment, the video was processing at full speed, and the computations were as fast as possible, then maybe the sensors could have detected the pedestrian with enough time to swerve out of the way. I say swerve out of the way because a back-of-the-envelope calculation of the stopping distance shows that stopping in time was just not possible given the constraints of the system.
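To make that concrete, here is a rough sketch of the kind of back-of-the-envelope stopping-distance calculation being described; the reaction time and friction coefficient below are assumed values for illustration, not figures from the investigation.

```python
# Rough back-of-the-envelope stopping distance at ~40 mph. The reaction
# time and friction coefficient are illustrative assumptions, not figures
# from the actual investigation.

MPH_TO_MS = 0.44704      # miles per hour -> meters per second
G = 9.81                 # gravitational acceleration, m/s^2

speed = 40 * MPH_TO_MS   # ~17.9 m/s
reaction_time = 1.5      # assumed perception-plus-reaction delay, seconds
mu = 0.7                 # assumed tire/road friction coefficient, dry asphalt

reaction_distance = speed * reaction_time      # distance covered before braking starts
braking_distance = speed ** 2 / (2 * mu * G)   # v^2 / (2 * mu * g)
total = reaction_distance + braking_distance

print(f"reaction ~{reaction_distance:.0f} m, braking ~{braking_distance:.0f} m, "
      f"total ~{total:.0f} m ({total * 3.28:.0f} ft)")
# With these assumptions the car needs on the order of 50 m to come to a stop,
# which is why swerving, not braking to a stop, is the maneuver in question.
```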
But the third case that we’re really interested in is pinpointing what internal errors caused this thing to happen. So, were the sensors, or the perception mechanism, or other parts of the car not working as we expected? And because of that, this fatality occurred.
And this is how we do what is basically ex post facto explanation. We take a behavior, a log of something that has happened, so this is not in real time yet; this is all after the fact. Those are CAN bus logs, from the Controller Area Network. I’ll go into what that means a little later, but for now it is simulated data that we create.
Then we do an ontology classification: we find the specific intervals of interest. The internal network of the car records things very, very often, so we want to find the particular intervals where something happened that could be anomalous. Then we build a bunch of constraints into the system, things that are reasonable or not: these are safe intervals, these are sensor hits, these are hits happening between different intervals. And finally we keep track of all our dependencies to build up a story, to build up an explanation.
For that we use a propagator system that was developed by my advisor Jerry Sussman, and we use it to build up an explanation. But the point I want to get to is that, using all four parts of the system, we build a coherent story. We start with data that we know, we find the information that we’re looking for, and then we keep track of the dependencies to be able to build that story of what happened and why.
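As a rough illustration of the dependency tracking that builds those stories, here is a toy sketch, not Sussman’s propagator system itself: each derived conclusion simply records the premises that support it, so the chain can be unwound into an explanation. The facts shown are invented for the example.

```python
# Toy sketch of premise tracking, loosely in the spirit of a propagator
# system: every derived conclusion remembers which facts supported it, so
# it can be unwound into a human-readable story. This is an illustration,
# not the lab's actual implementation.

class Fact:
    def __init__(self, statement, supports=()):
        self.statement = statement
        self.supports = list(supports)   # the premises this fact depends on

    def story(self, depth=0):
        lines = ["  " * depth + self.statement]
        for premise in self.supports:
            lines.extend(premise.story(depth + 1))
        return lines

brake_edge = Fact("Brake pressure rose sharply at t=41.2 s")
wheel_stop = Fact("Front wheel speed dropped to zero while the rear wheels kept turning")
skid = Fact("The vehicle entered a skid interval", supports=[brake_edge, wheel_stop])

print("\n".join(skid.story()))
# The vehicle entered a skid interval
#   Brake pressure rose sharply at t=41.2 s
#   Front wheel speed dropped to zero while the rear wheels kept turning
```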
And now I want to emphasize that this storytelling can really be used to ensure that autonomous vehicles are safe and they’re secure and they really understand what they’re doing. Before we move forward towards taking full autonomous vehicles on the road, we really need to be able to have them explain themselves. We’ve obviously seen a bunch of different cases today where these black box mechanisms cannot explain what they’re doing and it’s getting to be really, really scary.
And if we’re really going to call these vehicles intelligent, they need to understand the actions and behaviors of their underlying parts. What I’m trying to get at here is that the storytelling is not just to say what happened, to present in a legal proceeding or after an accident; there are also internal stories. Every part needs to be able to explain what it is doing to each of its neighboring parts, so that they are aware of what’s going on, especially in these anomalous situations.
And just to hone in on that, what we’re looking for is explanations that can be audited, that provide an understandable, coherent story justifying the system’s actions, so that when something goes wrong we can figure out exactly what happened: was it that the weather sensor wasn’t working correctly? Was it that a sensor was fogged over in bad weather?
And further, we really want to use these explanations to keep going in terms of development. If we get an explanation that’s inadequate or it’s inappropriate or it’s not right, then that agent, that part, needs to be corrected or disabled.
And so this is the part of our research that we work on in my lab. I mainly work on the explanations both of machinery and software, and I’m also going to be talking a little bit today about the machine perception that works in self driving cars, and how we can start to try and explain their actions. We also work on different aspects of security, which I’m not going to talk about as much today, but there are people in my lab at MIT that are interested in strengthening the vehicle network for security.
Hand in hand with explanations is the idea of accountability: simulating the actual likely vehicle scenarios, moving past the trolley problem and thinking, OK, what are the scenarios that are actually going to happen, and what should the car say happened in those cases? How do pedestrians react? Pedestrians are part of the system itself. Pedestrians should be able to explain what they’re doing to the car, and the car should be able to explain what it is doing to the pedestrian.
And finally, how can we use this technology as evidence in the case of an accident? Even if we take a step back and don’t think about self-driving cars, just regular vehicles: how great would it be if, when you got into an accident, your vehicle could simply provide the evidence for you, instead of a he-said, she-said liability scenario?
So going on to exactly what we’re doing with our research. So, the first thing that we did was we built a simulation. And this is joint work with my collaborator Ben Yuan, and obviously my adviser Jerry Sussman. We adopted a game simulation to output a CAN bus log. And that’s really important because these logs, this network that’s recording everything that goes on in the car, no one has that data. The accident data is not available, and I think it’s really important that it is.
And then we move toward the reasoning part. We start by doing edge detection: things like, when did the operator apply the brakes, and at what time? Then we think about how that relates to certain intervals. When you press the brake, what does that mean for your steering? When you’re steering, what does that mean for your brakes? Should they be occurring at the same time or not? You probably shouldn’t be pressing the accelerator when you’re making a sharp right turn. Those sorts of things.
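A minimal sketch of that kind of edge detection and cross-signal constraint might look like the following; the signal names, thresholds, and sample values are made up for illustration and are not real CAN fields.

```python
# Toy edge detection over decoded signals, with one illustrative constraint:
# don't accelerate hard during a sharp turn. Signal names, thresholds, and
# sample values are invented for the example.

samples = [  # (time_s, accelerator_pct, steering_angle_deg)
    (10.0,  5,  2), (10.1, 40,  3), (10.2, 55, 35), (10.3, 60, 40),
]

def rising_edges(samples, key, threshold):
    """Times at which a signal first crosses the threshold upward."""
    edges = []
    for prev, cur in zip(samples, samples[1:]):
        if key(prev) < threshold <= key(cur):
            edges.append(cur[0])
    return edges

hard_accel = rising_edges(samples, key=lambda s: s[1], threshold=50)
sharp_turn_times = {t for t, _, angle in samples if abs(angle) > 30}

for t in hard_accel:
    if t in sharp_turn_times:
        print(f"t={t}: hard acceleration during a sharp turn -- flag this interval")
```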
We use those edges and constraints, and we keep track of the dependencies among the things that happen, to tell a story of what happened; from there we use our propagator system to begin to tell a story of why it happened, using different types of causal reasoning. But just to elaborate a little on the data we’re using: we really believe that the code for self-driving cars, and also the data that these cars record, needs to be available so that it can be evaluated.
One of the things we’ve done a lot in our research is record some of this CAN bus data, which you can do in your own car: you can plug in a reader and get your own data. But obviously we’re not going to get ourselves into an accident, and those are the really interesting scenarios that we’d like to look into.
And we also want the software to be available for accountable development. So all of our software is open source and publicly available, both the simulation and the error detection and reasoning.
So moving towards that, I’m going to talk a little bit about the simulated data that we get. But again, it’s really important that we have real world data. We can’t evaluate these sorts of processes without having the real data, especially the ones that occurred in accident-type scenarios.
So, what does our data look like? We try to simulate a Controller Area Network, a CAN bus. This is the robust standard that allows the different microcontrollers and ECUs on the car to communicate without a host computer. As some of you may know, it’s not authenticated and not encrypted, so it’s pretty easy to hack. It’s a pretty simple schema, which we’ve cleaned up a little bit here, but it’s basically a time stamp, a CAN bus code, and then some extra information telling you what the car is doing. And it is standard: there has been some form of CAN in all vehicles mandated since the mid-1990s.
So, what does this data look like up close? We start with a time stamp, in seconds, and then we have a CAN bus code. I want to emphasize that these CAN bus codes are also not available, so they have to be completely reverse-engineered, and that is a huge effort. They are different for every make and model of car. So this is specifically for a Toyota Prius simulation that we were running; it will not work for any other vehicle simulation. By reverse engineering we got B1, B3, and 120: B1 is the front wheels, B3 is the rear wheels, and 120 is the drive mode.
After that CAN bus code comes a number of parameters, which vary in length depending on what the CAN bus code is. For this specific example, for the wheels, it’s the right and left wheel rotation in kilometers per hour. Again, that can be different for every make and model of car. The 13 and 50 on the 120 line correspond to the drive mode, the fact that you are in drive, not in reverse or neutral, and the 0 4 in this case corresponds to the fact that the car is powered on, not in standby. Another thing I want to emphasize about this data is that we only have 10 different CAN bus codes that we can put into our simulation, so there is a limit to the data we can produce.
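As a sketch of what decoding such a log line might look like: the table below is limited to the handful of reverse-engineered Prius codes mentioned here, and the exact field layout is an assumption for illustration, not the real frame format.

```python
# Sketch of parsing a simulated CAN-style log line into a decoded event.
# The decoder table covers only the reverse-engineered Prius codes mentioned
# in the talk, and the field layout is an assumption made for illustration.

DECODERS = {
    "B1": lambda p: {"front_right_kmh": float(p[0]), "front_left_kmh": float(p[1])},
    "B3": lambda p: {"rear_right_kmh": float(p[0]), "rear_left_kmh": float(p[1])},
    # The 120 line carries drive mode and power state; the exact byte
    # positions are not specified here, so the raw fields are kept as-is.
    "120": lambda p: {"drive_mode_fields": list(p)},
}

def parse_line(line):
    timestamp, code, *params = line.split()
    decoder = DECODERS.get(code)
    decoded = decoder(params) if decoder else None
    return {"time_s": float(timestamp), "code": code, "decoded": decoded}

print(parse_line("1523.204 B1 42 41"))
# {'time_s': 1523.204, 'code': 'B1', 'decoded': {'front_right_kmh': 42.0, 'front_left_kmh': 41.0}}
```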
I wanted to go a little bit into the specific modeling that we do. We build a bunch of different expert models to be able to tell stories. The first thing we do is mechanical modeling, so we have a bunch of different parts of the car here and how they interact with each other. The operator is at the top; note that it can be an autonomous operator or a human operator. If you think about how this sort of system would work in the case of the tire example I showed before, basically you would have some system that says, OK, your right wheel is not acting as you would expect. That information would propagate through the system, and you would get an explanation saying that the tire sensor is anomalous given the state you’re in: check the right back wheel. This is the sort of thing we can model right now in our system.
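A toy version of that kind of mechanical check, with an invented 15% tolerance, might compare the decoded wheel speeds and point at the wheel that disagrees with the others.

```python
# Toy mechanical-model check: compare the four wheel speeds and report any
# wheel that disagrees with the others. The 15% tolerance is an assumption.

def anomalous_wheels(speeds_kmh, tolerance=0.15):
    """speeds_kmh: wheel name -> speed. Returns wheels far from the median."""
    ordered = sorted(speeds_kmh.values())
    median = ordered[len(ordered) // 2]
    return {wheel: s for wheel, s in speeds_kmh.items()
            if median > 0 and abs(s - median) / median > tolerance}

speeds = {"front_left": 41, "front_right": 42, "rear_left": 40, "rear_right": 22}
for wheel, speed in anomalous_wheels(speeds).items():
    print(f"{wheel} reads {speed} km/h while the other wheels agree: "
          f"check the {wheel.replace('_', ' ')} wheel and its sensor")
```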
But obviously mechanics is not the only part of it. We’re also really interested in what happens in different physics scenarios. One of our big achievements in the last calendar year was that we were able to explain and simulate different skid scenarios. Again this uses the propagator framework, but in a real-world example, in the case where some anomalous tire pressure was causing differentials between the different physical parts of the car, you would want that friction force to be illuminated in your expert model. Then you can get an explanation that the wheel force will decrease and the tire pressure is low, so the recommendation the system comes back with is that you should check the mechanical system for anomalies.
And just a final modeling capability that we’ve had is we’ve been starting to work on explanatory parking. This is joint work with one of my undergrad students, Zoe Lu, and Ben Yuan, where we can basically explain the perfect parallel parking scenario. So one thing that we’re working to do is to try and have a parking assist do that maneuver and see if our system can explain that in real time and when it makes errors.
But I’ve been talking mainly about what happens with regular vehicles, vehicles as they stand now, and I’m sure a lot of you are wondering, OK, what about self-driving? How do we start to explain self-driving scenarios? What’s great is that we have the same sorts of mechanics and the same sorts of physics in self-driving cars. There are some additions that make it a little more complicated: obviously there are new perception capabilities and a lot more sensors, but the basic driving capabilities are still the same. So we use the same explanatory capability that we had for a regular car, and then we start adding more capabilities for the perception and the sensors.
And that’s what we’re striving for in system design. We’re looking toward modeling and explaining each individual component of the car. As I said before, the explanatory capability, the storytelling, is two-part: the car itself needs to be able to provide an explanation, but the individual parts also need to be able to explain themselves to each other.
One of the things we’ve been trying to do in starting to explain machine perception is to wrap that mechanism in a monitor that constrains the system to be reasonable. That’s very important, because we get into a lot of scenarios where the perception mechanism will produce something that is absolutely unreasonable given the current state of the system.
I’m going to talk a little bit about how we’ve been explaining perception in two ways. The motivation here is that a first step toward understanding machine perception is to constrain the output to be reasonable; then we can move toward the second step of understanding the individual parts that underlie it. And there are two main ideas here.
The first thing we looked at was just using data, and what I’m going to show is that data alone is not enough. If we use a purely data-driven representation, a common sense database, and make those the constraints of the system, it works pretty well, but not entirely. What I’m going to hit home today is that you also need a novel structural representation: you really need a set of conceptual primitives that represent the things you expect your car to do in different circumstances.
So here is the method we used for explaining perception for the self-driving car. Again, we started with behaviors or logs, in this case a scene description in natural language. Then we do a similar ontology classification, which is a hierarchy and a set of anchor points. Anchor points are classifications that are reasonably close to what you’re looking for. What I mean by that is that an anchor point would be something like an animal, but not something as specific as a baboon, or it would be a plant, a living thing. So we try to create an ontology of things that we’re looking for that have certain constraints. The constraints in the system are relations that are close enough in our common sense database, and I’m going to show where that fails.
And then again, we do the same dependency tracking where we explain conflicting relations. And again, we’re trying to tell a coherent story. So we use the behavior of the logs. We use the different anchor points. We use the relations that are close enough to build up a coherent story.
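A toy version of that data-driven check, with a tiny hand-written table standing in for the real common-sense database, might look like this; the entries and the wording of the explanations are invented for the example.

```python
# Toy version of the data-driven reasonableness check. A tiny hand-written
# table stands in for the real common-sense knowledge base the system queries.

COMMON_SENSE = {
    "mailbox": {"is_a": "heavy object", "near": "sidewalk", "self_propelled": False},
    "person":  {"is_a": "living thing", "near": "sidewalk", "self_propelled": True},
}

def explain_crossing(subject):
    facts = COMMON_SENSE.get(subject, {})
    near = facts.get("near", "street")
    if facts.get("self_propelled"):
        return f"Reasonable: a {subject} is a {facts['is_a']} that can move on its own."
    return (f"Unreasonable: although a {subject} is typically found near a {near}, "
            f"a {subject} cannot cross the street because it is an object "
            f"that does not move on its own.")

print(explain_crossing("mailbox"))
print(explain_crossing("person"))
```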
When we first started doing this work, it was in the wake of hurricane season, and a lot of things were crossing the street that we didn’t expect. One of the examples we started with was: can we tell the perception mechanism that a mailbox crossing the street is absolutely unreasonable? And you can see how we store our data. There are many premises, but the main ones are that a mailbox is a heavy object, which means it can’t necessarily move, and that although a mailbox is located near the street, that’s not enough information for us to deem this reasonable. So what we were able to produce was a human readable explanation of why this state is unreasonable.
So although it finds that a mailbox is typically found near a sidewalk, mailboxes cannot cross the street because mailboxes are objects that don’t move on their own. So this is really great because we have a reason in human readable form that can be used for people, and also this evidence is stored symbolically, so it could possibly be passed back into the algorithm. That’s something we’re working on now.
But I also wanted to talk a little bit about the limitations of this sort of system. When you’re just using data, you might not know exactly what you’re looking for. An interesting corner case we got into, which shows that unstructured knowledge is not enough to classify these sorts of things, is that if you put “a penguin eats food” into the system, you get a kind of cute reason why that’s unreasonable. It would say something like, oh, a penguin is an animal that lives in Antarctica and eats enough to eat, and food is an animal that lives in the refrigerator and eats food.
You can see where it went wrong. But what’s important to see here is that the constraint in the system is that you have to eat something that’s close to you. That’s not really the constraint you’re looking for in this sort of system, especially when we’re talking about vehicles. There are different sorts of primitive constraints that we’re interested in, and not all contradictions are equal.
And so this is the focus of my Ph.D.: how do we deal with inconsistent information, and how do we find the most important part? That drove us toward a second method in which we use more primitive representations and structured knowledge. We start with the same scene descriptions and do the same ontology classification, but now the constraints are built into the primitive. Once we know the specific action you’re trying to do, the constraints are built into that primitive, and I’ll show you how that works. We do the same dependency tracking, and again this is all to build a coherent story of what’s reasonable or not.
So if we go back to the mailbox crossing the street example, it looks very different. What we come to is that if we think about a mailbox crossing the street, that’s really a move primitive, and there are certain things that can move and certain things that can’t. If you’re talking about an object, the only way that an object can move is if it is propelled. So when we parse out this sentence, we find that a mailbox is an object by searching through ConceptNet, we find that crossing is a type of movement, and we find that street is an object. We get a very nice, very succinct explanation of why a mailbox crossing the street is unreasonable. Now it knows the key reason is that a mailbox is an object that cannot move on its own. It doesn’t find anything that can propel it, and so it deems it unreasonable for a mailbox to cross the street.
But what’s really interesting is that if you add more context, if you add it in the context of a hurricane, the system is able to look up and find that a hurricane has the property that it can propel stationary objects. And so, in this case, we know that although a mailbox cannot move on its own, a hurricane is able to propel a mailbox in this scenario.
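A sketch of that primitive-based version, with the hurricane context included, could look like the following; the little tables and the wording are illustrative, not the lab’s actual representation.

```python
# Sketch of the primitive-based check: "X crosses Y" maps to a MOVE primitive,
# and an object may only move if something in the context can propel it.
# The tables here are invented for illustration.

SELF_PROPELLED = {"person", "dog", "car"}
CAN_PROPEL_STATIONARY = {"hurricane", "flood", "tow truck"}

def explain_move(subject, context=()):
    if subject in SELF_PROPELLED:
        return f"Reasonable: a {subject} can move on its own."
    propellers = [c for c in context if c in CAN_PROPEL_STATIONARY]
    if propellers:
        return (f"Reasonable in context: a {subject} cannot move on its own, "
                f"but a {propellers[0]} can propel stationary objects.")
    return (f"Unreasonable: a {subject} is an object that cannot move on its own, "
            f"and nothing in the context can propel it.")

print(explain_move("mailbox"))                         # unreasonable
print(explain_move("mailbox", context=["hurricane"]))  # reasonable given a hurricane
```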
And so we are now working on putting this monitor into development. Can we have this monitor run alongside a self-driving car so that it can check the reasonableness of vehicle actions? For example, if you have different system states, so here we have what the vehicle is seeing, whether it knows it’s at a red light, whether it knows there’s a pedestrian, what driving tactics it’s able to do, it’s able to tell a story of whether moving is reasonable or not. I know it’s a little hard to see, but in the case where you have a red light and you’re waiting, it deems that the perception is reasonable: a red light means stop, and that means it’s reasonable to wait. We’re working on putting this into a simulation to be able to verify and check that the perception algorithm is doing the right thing in the context of the vehicle’s world.
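A minimal sketch of such a monitor, with just two illustrative rules of my own choosing rather than the real rule set, could look like this.

```python
# Sketch of a reasonableness monitor sitting alongside the driving stack:
# given a perceived state and a proposed action, it returns a verdict plus a
# short justification. The two rules here are illustrative examples only.

RULES = [
    # (condition on state, unreasonable action, justification)
    (lambda s: s.get("traffic_light") == "red", "go",
     "a red light means stop, so it is reasonable to wait and unreasonable to go"),
    (lambda s: s.get("pedestrian_ahead"), "accelerate",
     "a pedestrian ahead means the vehicle should slow or stop, not accelerate"),
]

def check_action(state, action):
    for applies, bad_action, why in RULES:
        if applies(state) and action == bad_action:
            return f"Unreasonable: {why}."
    return f"Reasonable: no rule forbids '{action}' in this state."

state = {"traffic_light": "red", "pedestrian_ahead": False}
print(check_action(state, "wait"))  # Reasonable
print(check_action(state, "go"))    # Unreasonable: a red light means stop...
```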
So I wanted to finish off by talking a little bit about the internal stories that I’ve been alluding to in this talk. What we really want is for these explanations to happen between components. We want to be able to take the software, the mechanisms that are working in self-driving cars, wrap them in these sorts of reasonableness monitors, and then have them figure out the internal stories that provide evidence for their actions.
If we think about a weather sensor and a perception mechanism working together, you’d want them to somehow figure out that the premise that a hurricane has high winds and the premise that a mailbox can’t move on its own are related in this case. What we’d really want is for them to come up with an internal story: that high winds can cause heavy objects to move. That’s really what we’re working on right now, how to get these components to tell internal stories, and then use those stories to provide explanations for different types of autonomous vehicle actions.
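As a toy sketch of that kind of internal story between two components, the components, premises, and the single inference rule below are invented for illustration.

```python
# Sketch of two components pooling premises into a shared "internal story."
# The components, premises, and the single inference rule are illustrative.

weather_sensor = {"high winds": True}
perception = {"mailbox detected moving": True, "mailbox is self-propelled": False}

def internal_story(weather, vision):
    premises = []
    if weather.get("high winds"):
        premises.append("the weather sensor reports high winds")
    if vision.get("mailbox detected moving") and not vision.get("mailbox is self-propelled"):
        premises.append("perception sees a heavy object moving that cannot move on its own")
    if len(premises) == 2:
        return ("Internal story: high winds can cause heavy objects to move, "
                "so the moving mailbox is consistent with the weather report. "
                "Premises: " + "; ".join(premises) + ".")
    return "No shared story: the components report nothing that needs reconciling."

print(internal_story(weather_sensor, perception))
```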
This is what we’re working on now: explaining these sorts of non-local inconsistencies, figuring out how you explain them and how you deal with these different cases. We’re also working on incorporating this into full system design.
I just showed a small step, constraining systems to be reasonable, but that is only a first step toward making sure that what they’re doing is relevant given the context of what you’re looking at. And we’re looking at applying this in many different domains. Self-driving is obviously a very relevant and meaningful contribution, but we’re also thinking about how to do this in a virtual reality space, where it’s safe to make anomalous decisions; we’re obviously doing it in our vehicle simulation; and we’re also looking at it in terms of hardware.
So, what we’ve been contributing to the space: we have a multitude of ex post facto, after-the-fact explanations that we can generate from simulated CAN bus logs; we have real-time explanations of reasonableness for language descriptions of perception; and we’re working on incorporating that into an actual open source simulation as we speak.
But what I really want to hit home today is that explanations are really there to make driving less frustrating. In these sorts of cases, it would have been great if the car could have come back and said to me, in the upper right, “It’s way too hot to drive.” It would have been great if the chains could have told me they were too loose, and in the last case, if we could have been told what was going on, especially for my mom’s sake. Thank you. [Clapping]
MOGLEN: I think what we ought to do, Mike, is take your talk too and then we’ll all get together and take all the questions at once.