A week ago, I read an article by Will Knight at the MIT Technology Review called The Dark Secret at the Heart of AI. Knight makes the case that we should be concerned about a central element of modern AI: we don’t really know how our AIs ‘decide’ what to do.

Knight focused on a specific driverless car project by Nvidia, which exploits so-called ‘deep learning’ approaches to AI, based on neural networks that teach themselves from data. As Knight characterized it, this approach poses unique problems for trusting the AI’s decision making:

The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected — crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

I think this version of AI is simply an edge case of a broader question of trust, though. Even if a driverless car, an autonomous backhoe, or an AI controlling the traffic grid in metropolitan New York City was programmed by a specific developer or team of developers, why should we trust that the AI’s algorithms or logic are as we expect? How can we be certain that a drone delivering pizza in Cleveland will follow its instructions, or that a surgical robot will snip off a patient’s inflamed appendix and not their gallbladder?

I confess that my attitude toward the question is ambivalent. I’m acutely aware, courtesy of Daniel Kahneman (Thinking, Fast and Slow) and Dan Ariely (Predictably Irrational), that human beings do not actually make decisions in a rational fashion. So even when people tell you how they arrived at a decision, you can’t rely on their descriptions. And of course, we all manage to muddle through anyway, without knowing how others make their decisions, or even how we make our own. So, candidly, I start out with very low expectations.

However, a company deploying a fleet of driverless dump trucks in a huge mining operation would like to know with some certainty that they aren’t likely to run over hundreds of miners per year. Or at least their insurance company would. So there are many reasons why we would like more certainty about the dump trucks’ inclinations.

I posed the question to readers using a survey (you can take it yourself), and got enough of a response to make it illuminating, I think, even if the specific percentages of responses may not be reliable.

The first question I asked was ‘Do you believe we need to understand how an AI makes its decisions before we deploy it?’ Here are the results.

Combining the yes’s with the qualified yes’s yields 60%, and the qualifications were fairly mild, such as requiring testing or the endorsement of someone who is already trusted. And some of the qualified no’s are really yes’s, such as ‘depends on the application’, which means that in at least some cases we do require that understanding.

The second (and last) question was ‘What would make us trust an AI?’

The leading response was ‘Demonstrating performance in a broad spectrum of operational trials, such as 10,000 hours of driving for a driverless car’, with 38% of the votes. Very close behind was ‘The AI being able to explain its reasoning’ with 33%. Smaller numbers of respondents would accept the judgment of human experts (10%) or other AIs as monitors (10%), and some wanted combinations of experts and demonstration.

I’m a fan of empirical results, so I lean toward demonstration of performance. After all, if I don’t trust an AI, why would I trust its explanation of its behavior?
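
As a back-of-the-envelope illustration of what such a demonstration can and can’t establish (my framing, not the survey’s): with zero incidents observed in a trial, the statistical ‘rule of three’ puts an approximate 95% upper bound on the incident rate at 3/n for n trials. Treating each hour of operation as an independent trial is a crude simplification, but it gives a feel for the numbers behind the 10,000-hour option above.

```python
# Rough sketch, not a rigorous safety case. Assumption (mine, not from the
# survey): each hour of incident-free operation counts as one independent
# trial, so the "rule of three" gives roughly a 95% upper confidence bound
# of 3/n on the per-hour incident rate.

trial_hours = 10_000                      # the survey option's example figure
upper_bound_per_hour = 3 / trial_hours    # ~0.0003 incidents per hour

# Scale that bound up to a hypothetical fleet doing a million hours a year:
fleet_hours_per_year = 1_000_000
print(f"95% upper bound: {upper_bound_per_hour:.4f} incidents/hour")
print(f"Worst case consistent with the trial: "
      f"{upper_bound_per_hour * fleet_hours_per_year:.0f} incidents/year")
```

In other words, even a spotless 10,000-hour trial still leaves room for a rate that a large deployment might find unacceptable, which is part of why the monitoring piece discussed below matters as well.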

My sense is that we will see widespread convergence on demonstrating AI performance in realistic settings (like those autonomous dump trucks in a real mining environment, minus the real human beings, perhaps), along with continuous monitoring of AI-managed gear by additional AIs. Those provisions are likely to be enough to satisfy most people that the AIs are trustworthy.

A few of the additional comments from respondents are illuminating:

  • ‘Might be splitting hairs, but: having some kind of review of the data used in training, not just the inner workings of the resulting “intelligence.”’
  • ‘Some AI is inherently black box, but often the proxy for an explanation is to provide a series of illustrations of what happens to the output (recommendation, or score, or whatever we ask the AI to give us) when we change the input variables. We trust the algorithm if those changes are in line with what we would expect.’
  • ‘Human built systems obviously have vulnerabilities as well so we need to figure out how to weigh that up against an AI we do not understand. For example driverless trucks are a big risk if someone can hijack hundreds of them and drive them into crowds. We need to work out how to assess the risks versus the existing risks that trucks and cars already pose.’
  • ‘I have been in the field for 30 years and believe that AI is lower on the scale of threats to humanity than the rogue autonomous self-reproducing code that we cannot explain and do not understand what it is doing or where it originated.’
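
The second comment above describes a sensitivity-style check: perturb the inputs, watch the outputs, and see whether the changes match what we would expect. Here is a minimal sketch of that idea; the model, feature index, and data are placeholders I’ve invented for illustration, and any black-box model with a scikit-learn-style predict() method would do.

```python
import numpy as np

def sensitivity_sweep(model, baseline, feature_index, values):
    """Sweep one input feature across `values`, holding the other features
    at the baseline input, and return the model's output for each setting."""
    outputs = []
    for v in values:
        x = np.array(baseline, dtype=float)
        x[feature_index] = v
        outputs.append(model.predict(x.reshape(1, -1))[0])
    return outputs

# Hypothetical usage: does a credit-scoring model respond to income the way
# we'd expect (higher income shouldn't lower the score)?
#
# scores = sensitivity_sweep(credit_model,
#                            baseline=typical_applicant,
#                            feature_index=INCOME_COLUMN,
#                            values=np.linspace(20_000, 200_000, 10))
#
# If the scores fall as income rises, that's the kind of surprise the
# respondent suggests we should be able to see before we extend trust.
```

This isn’t an explanation of the model’s internals, but it is the kind of behavioral evidence the respondent has in mind: a proxy for an explanation when the AI itself is a black box.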

Those insights raise questions that place trust of AI in the context of trust and security at a more general level: how do we trust any software, and how do we guard against the threat of insecure or potentially malevolent software and systems?

These are themes we will return to again and again in the years to come, regarding AI and almost all of the pervasive software and systems deployed in the world. The threats come with the benefits, and the two can’t be neatly disentangled, alas.

Interested in AI? Request an invite for our upcoming Velocity Network session on AI and Machine Learning, May 25th in NYC.