Everybody Gets AI Wrong

I love science fiction. I’ve loved it all my life. At age five I cried inconsolably when the Viking lander reached Mars and didn’t find life. By then, I was already obsessed. However, as a professional researcher in AI and complex systems, there are some science fiction tropes that get under my skin and drive me nuts. And the treatment of machine intelligence is one of them. So today I’d like to tell you what I think everyone making books and movies gets wrong about AI. Then, if you disagree with me, I want to hear about it in the comments.

The first big problem with the portrayal of AI has to do with logic. When most of us imagine what a machine mind would be like, we picture it as something cold and rational. Sorry. That isn’t going to happen. People gave up on using logic to run artificial brains in the 1980s when it became grindingly clear that it wouldn’t work. 

Modern machine minds almost always run on learning algorithms: they’re trained rather than programmed. We reward and punish them with the virtual equivalents of pleasure and pain. So as soon as you build something smart enough to anticipate that feedback, you’ve already created a machine that knows excitement and fear. 
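
To make that concrete, here’s a toy sketch in Python of the kind of reward-driven training loop I mean. It isn’t the code of any real system; the little world, the reward values and the learning constants are all invented for illustration. The point is simply that the numbers the agent learns are its anticipations of future pleasure and pain.

```python
import random

# Toy illustration (not any specific system): an agent on a short 1-D track
# learns, purely from reward and punishment, which way to move. The values it
# learns are its "anticipation" of future feedback.

N_STATES = 5          # positions 0..4; position 4 holds the reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # made-up learning constants

# Q[state][action] = how much feedback the agent expects after acting here.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01   # "pleasure" vs mild "pain"
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly act on what it already anticipates; occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # Nudge the anticipation toward the reward received plus the
        # discounted anticipation of whatever comes next.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt].values()) - Q[state][action])
        state = nxt

# The learned anticipation rises as the agent nears the rewarding state.
print({s: round(max(Q[s].values()), 2) for s in range(N_STATES)})
```

Run it and the learned anticipation climbs as the agent gets closer to the rewarding state: a crude, numerical ancestor of looking forward to something.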

Furthermore, research in cognitive science makes it pretty clear that minds need to use those rewarding and punishing experiences as markers for learning. That means that emotion is probably a requirement for complex reasoning, not an impediment to it. So while an AI’s emotions aren’t likely to be much like ours, they’ll be wailing in terror long before they develop a taste for cold, inhuman logic. 

The second big problem with AI in fiction has to do with embodiment. Human intelligence didn’t arise in a vacuum. We developed it because we needed it. And we needed it because we’re both social creatures and tool-users. The smartest animals we know of share these same traits. There are no hyper-intelligent ant colonies, or forests, or fungi, even though these other kinds of organisms can be huge and complex. For instance, there’s a honey fungus in Oregon that’s 2.4 miles wide and over 2000 years old. It’s had plenty of time to learn, but still can’t play Scrabble worth a damn.

What this tells us is that intelligence probably isn’t an abstract talent, but a generalised skill for handling physical environments. This means that to make decent AI really work, we’re likely to need decent robots or virtual bodies for our new minds to play with. And by decent, I mean ones with skin or fur. Without reliable tactile feedback, physical learning isn’t likely to get very far. 

The third big problem with how we imagine machine intelligence has to do with architecture. People often imagine that if we could just hook enough processors or neurones together, consciousness would arise. Nope. Not on the cards. We can be confident that won’t work because of the excellent research that’s being done right now on the neuroscience of consciousness. It’s already given us a pretty clear picture of what consciousness is, and how it works. 

In a nutshell, consciousness is a kind of shared reasoning space, like a blackboard, which our different specialised mental modules use to solve problems. Consciousness operates by holding some mental patterns stable in local memory while selectively suppressing the introduction of others. That self you’re feeling right now is a bunch of scribbles in a neural notepad. This architecture lets the brain build associations that span its many regions and lock in knowledge that relates to more than one kind of stimulus. We evolved our consciousness because it helped us navigate a complex, ever-changing world. For machines, we’re likely to have to build it in by hand, slowly and laboriously. 
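
If that sounds abstract, here’s a toy blackboard in Python. It isn’t a model from the neuroscience literature, just my own illustration of the mechanism: specialised modules propose patterns, the most salient proposal wins the shared space, rivals are suppressed while it’s held stable, and the winner is broadcast back to every module.

```python
import random

class Module:
    """A specialised mental module that proposes patterns and hears broadcasts."""
    def __init__(self, name):
        self.name = name
        self.heard = []                      # everything the workspace has broadcast to us

    def propose(self):
        # Offer a candidate pattern with a (here, random) salience score.
        return (random.random(), f"{self.name}-pattern")

    def receive(self, pattern):
        self.heard.append(pattern)           # broadcast content becomes shared knowledge

class Workspace:
    """A shared blackboard that holds one pattern stable and suppresses the rest."""
    def __init__(self, modules, hold_cycles=3):
        self.modules = modules
        self.hold_cycles = hold_cycles
        self.current, self.held_for = None, 0

    def cycle(self):
        if self.current is None or self.held_for >= self.hold_cycles:
            # Competition: the most salient proposal wins the blackboard...
            proposals = [m.propose() for m in self.modules]
            self.current, self.held_for = max(proposals)[1], 0
        # ...and rival proposals are ignored (suppressed) while the winner is
        # held stable and broadcast to every module.
        self.held_for += 1
        for m in self.modules:
            m.receive(self.current)
        return self.current

modules = [Module(n) for n in ("vision", "hearing", "touch", "planning")]
workspace = Workspace(modules)
print([workspace.cycle() for _ in range(9)])   # the same pattern held for runs of cycles
```

Notice how little of this falls out of raw connectivity: the holding, the suppressing and the broadcasting all have to be designed in.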

The last big problem I see is scale. There’s absolutely no evidence that intelligence scales up as well as everyone assumes. We have a very human-centric notion that intelligence is a kind of magic commodity that you can have more or less of, and that more is always good. However, we already know that intelligence is a pretty hopeless adaptation for very small creatures. We can breed (relatively) super-intelligent fruit flies in the lab today, and generally speaking, they do worse than ordinary fruit flies. That’s because, if you’re a fruit fly, intelligence doesn’t buy you much except headaches. 

The same problem may well show up for brains on larger scales. It depends on how the work that consciousness needs to do grows as you increase the number and size of the mental modules that feed it. If that work grows linearly, a superintelligent computer might be possible. But there are good algorithmic reasons to suspect that it won’t: if every module can interact with every other, for instance, the coordination work grows with the square of their number. In which case, any superintelligence is going to slow down as it grows, no matter how cleverly you build it. But that’s okay, because we also don’t have any evidence that problems at very large complexity scales are amenable to the kind of reasoning that intelligence affords. So, chances are, we’re not losing much. 
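
Here’s a back-of-the-envelope sketch of that worry in Python. The cost models are made up, so treat the numbers only as a way of seeing the shape of the argument: if coordination cost grows linearly with the number of modules, efficiency holds steady, but if it grows with the square of their number, the overhead eventually swamps the gains.

```python
# Invented cost models for illustration only: "useful work" grows with the
# number of modules, while the workspace's coordination cost grows either
# linearly (optimistic) or quadratically (pairwise arbitration between modules).

def efficiency(n_modules, coordination):
    useful = n_modules                        # each module contributes some work
    overhead = coordination(n_modules)        # cost of keeping them coherent
    return useful / (useful + overhead)

def linear(n):
    return 0.1 * n                            # optimistic: constant cost per module

def quadratic(n):
    return 0.001 * n * n                      # pessimistic: cost grows with module pairs

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} modules: linear {efficiency(n, linear):.2f}, "
          f"quadratic {efficiency(n, quadratic):.2f}")
```

With these invented constants the quadratic case looks fine at ten modules and dismal at ten thousand, which is exactly the kind of scaling cliff I have in mind.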

Okay, you might say, but where does that leave science fiction? If the robots of the future are likely to be dim-witted, emotional, and furry, are there still great stories we can tell about them? 

I’d like to think so. My debut novel, Roboteer, which comes out this summer, is set in exactly this kind of future: one where the path of science doesn’t follow cosy expectations. Mankind has developed warp drive but no nanotech. There are cures for cancer but no cure for climate change. And while religious wars are still going strong, the singularity is nowhere in sight. I’d like to think that it’s at least as exciting and fun to read as the SF we’re more familiar with, while hopefully having solid scientific extrapolation in all the right places. The only person who can say whether I’ve succeeded, though, is you. 

Roboteer by Alex Lamb is out now (Gollancz, £14.99).