I think I’ve mentioned before that I listen to a number of different podcasts. One of them is Writing Excuses, a podcast about writing science fiction. A recent episode featured Nancy Fulda discussing how to write about AI realistically. In the discussion, she made an observation I thought was insightful: what we call “artificial intelligence” is basically whatever computers can’t do yet. Once computers can do something, it ceases to be something we label artificial intelligence.
Consider that in 1930, a machine that could make decisions based on inputs would have been considered a thinking machine. By the 1950s, when we had such machines, they were no longer considered thinking entities, but simply ones that followed detailed instructions.
In the 1960s, a computer that could beat an expert chess player would have been considered artificial intelligence. Then in 1997, the computer Deep Blue beat Garry Kasparov, and the idea of a computer beating a human being at chess was quickly reclassified as mere brute-force processing.
Likewise, the idea of a computer winning at something like Jeopardy! would have been considered AI a few years ago, but no more. With each accomplishment, each development that allowed a computer to do something only we could do, we simply stopped thinking of that accomplishment as any kind of hallmark of true artificial intelligence.
So what are some of the things we currently consider artificial intelligence that might eventually make the transition? Pattern recognition comes to mind, although computers are constantly improving at it. The increasing difficulty of CAPTCHA tests is testament to that.
One of the things people often assert today is that computers can’t really understand anything, and that until they do, there won’t be any true intelligence there. But what do we mean when we say someone “understands” something? The word “understand,” taken literally, means to stand under something. As it’s customarily used, it means to have thorough knowledge about something, perhaps knowledge of how it works in various contexts, or of its constituent parts.
In other words, to understand something is to have extensive knowledge, that is, extensive accurate data, about it. It’s not clear to me why a sufficiently powerful computer couldn’t do this. Indeed, I suspect you could already say that my laptop “understands” how to interact with the WordPress site.
Another thing I often hear is that computers aren’t conscious, that they don’t have an inner experience. I generally have to agree that currently they aren’t and they don’t. But I also strongly suspect that this will eventually be a matter of programming. There are a number of theories about consciousness, the strongest that I currently know of being Michael Graziano’s Attention Schema Theory. If something like Graziano’s theory is correct, it will only be a matter of time before someone is able to program it into a computer.
In an attempt to find an objective line between mindless computing and intelligence, Alan Turing, decades ago, proposed what is now commonly called the Turing Test, in which a human tries to tell the difference between a human correspondent and a machine one. When they can’t, according to Turing, the machine should be judged intelligent. Many people have found this test unsatisfactory, and there have been many objections. The strongest of these, I think, is that the test really measures human-like intelligence rather than raw intelligence.
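For concreteness, here’s a minimal sketch of the test framed as a repeated guessing protocol. This is only an illustration, not anything Turing specified in detail: the function names, the toy respondents, and the blindly guessing judge are all hypothetical.

```python
import random

def run_imitation_game(human_reply, machine_reply, judge, questions, trials=20):
    """Run repeated rounds of the game; return the fraction of rounds in
    which the judge correctly identifies the machine."""
    correct = 0
    for _ in range(trials):
        # Randomly hide which respondent sits behind label A and label B.
        machine_is_a = random.random() < 0.5
        reply_a, reply_b = ((machine_reply, human_reply) if machine_is_a
                            else (human_reply, machine_reply))
        # The judge sees only the questions and the two labeled answers.
        transcript = [(q, reply_a(q), reply_b(q)) for q in questions]
        guessed_a_is_machine = judge(transcript)
        if guessed_a_is_machine == machine_is_a:
            correct += 1
    return correct / trials

# Toy usage: when the machine's answers are indistinguishable from the
# human's, even a careful judge can do no better than chance (~0.5).
accuracy = run_imitation_game(
    human_reply=lambda q: "Hmm, let me think about: " + q,
    machine_reply=lambda q: "Hmm, let me think about: " + q,
    judge=lambda transcript: random.choice([True, False]),
    questions=["What is a sonnet?", "Do you enjoy chess?"],
)
print(f"Judge accuracy: {accuracy:.2f}")  # near 0.5 means "passing"
```

By Turing’s criterion, the machine “passes” when the judge’s accuracy stays near chance. The objection above is that a machine could be genuinely intelligent and still fail this protocol, simply by not being human-like.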
But I think this gets at the fact that our evaluation of intelligence is intimately tangled up with how human, how much like us, we perceive an entity to be. It may well be that we won’t consider a computer intelligent until we can sense in it a common experience, a common set of motivations, desires, and impulses; until it has programming similar to ours.
Getting back to Nancy’s observation, I think she’s right. With each new development, we will recalibrate our conception of artificial intelligence, until we run out of room to separate them from us. Actually, in some respects, I suspect we won’t let it get to that point. Already, programmers designing user interfaces are urged not to make their applications act too independently, as it tends to make users anxious.
Aside from some research projects, I think that same principle, along with perhaps some aspects of the uncanny valley effect, will work to keep artificial minds from ever being too much like us. There’s unlikely to be much of a market for a navigation system that worries about being replaced by a newer model, or for self-driving cars that find it fun to drag race.
