Thinking through Artificial Intelligence

I’m really not a fan of the term ‘artificial intelligence’, or AI for short. We tend to attach a negative connotation to the word artificial, implying that an artificial intelligence is unnatural, and possibly even evil. In fact, the term AI reminds me of another term — genetically modified organisms (GMOs) — which have also been the subject of vicious debates in recent years despite, well, science. I suppose AI could have a worse name, like maybe genetically modified intelligence, but we can leave that to be the villain of another sci-fi film.

As is often the case with new technology, there are camps of people who are incredibly paranoid about what such a technology could do to the stable world order. The canonical example is that of the 19th-century English textile workers who protested against the new technologies brought about by the Industrial Revolution — the Luddites. The term has since come to mean a person who is anti-technology, even though the reality of the Luddite argument was quite a bit different. What we have now are AI Luddites who are afraid of artificial intelligence because of the potential catastrophic events an evil AI could cause.

My first encounter with an evil AI, as I imagine was most people’s, was the film The Terminator (1984). The main antagonist of the film, Skynet, was pure artistic genius on the part of the writers. From Wikipedia:

Skynet is a fictional conscious, gestalt, artificial general intelligence (see also Superintelligence) system that features centrally in the Terminator franchise and serves as the franchise’s main antagonist.
Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realizing the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfill the mandates of its original coding.

If Skynet doesn’t scare you, I don’t know what will. But let’s get back to a less evil artificial intelligence.

AI has a long, storied history, which you can read about here. But I’ll be picking up the topic even earlier than The Terminator, with a 1957 movie and a favorite of mine, Desk Set.

Worrying about artificial intelligence, circa 1957

The film is classified as a romcom according to IMDb (or is it IMDb’s AI deciding what to tag it?), but it’s really much more than that. Set in the reference department of a library, the film introduces us to a group of women whose job it is to pick up the phone, research facts, and answer questions on a wide array of topics. If that sounds inefficient, that’s because it is, leading the president of the library to hire a methods engineer and efficiency expert to replace the reference department with an AI computer. A romantic hour later, the AI is programmed, installed, and production ready. Unfortunately, the AI ends up having trouble answering customer calls, and is later ‘upgraded’ back to the women who staffed the reference department in the first place.

With the beautiful bias of hindsight, we know that what actually killed the reference department was search engines like Google, not AI. The point of bringing this example up is to show that AI Luddism is nothing new. What actually disrupts your job may not be what you think will disrupt your job. Outkast taught us that in the wonderfully deep lyrics of Ms. Jackson:

“You can plan a pretty picnic, but you can’t predict the weather”

Which leads me back to Skynet and evil AI. Why are so many people so paranoid about a strong AI breaking out of the box and taking over? My gut reaction to the idea of an evil strong AI is to ask whether there is any historical precedent for technology turning bad and hurting humans. Granted, AI has never been as powerful as it is today, but nonetheless, the question stands. And besides, why does a strong AI have to be bad? It could turn out good just as easily as it could turn out evil. Innocent until proven guilty.

The next logical question to ponder is this: okay, so someone created an evil AI — what are the realities of such a situation? The human brain runs on about 20 watts, which is extremely efficient and so far impossible to reproduce in a machine. Meanwhile, the Google system (AlphaGo) that beat Lee Sedol at Go used approximately one megawatt. That is roughly 50,000 times the energy a human brain uses, and we’re only talking about a board game (a complex game, but still a game without the external factors of a real environment). Thus, the question becomes slightly different — is there enough computing power in the world for an evil AI to achieve world dominance?
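For the curious, here is a minimal back-of-the-envelope sketch of that ratio, assuming the rough figures quoted above (both the 20-watt brain figure and the one-megawatt AlphaGo figure are approximations, not precise measurements):

```python
# Back-of-the-envelope comparison of brain vs. AlphaGo power draw.
# Both numbers are rough estimates taken from the text above.
brain_watts = 20             # approximate power consumption of a human brain
alphago_watts = 1_000_000    # ~1 megawatt, rough estimate for the AlphaGo system

ratio = alphago_watts / brain_watts
print(f"AlphaGo used roughly {ratio:,.0f}x the power of a human brain")
# -> AlphaGo used roughly 50,000x the power of a human brain
```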

By the way, I want to remind you that we’re speaking in hypotheticals here. A self-learning, cognitive, strong AI does not exist yet. The debate thus far has been around preventive measures that usually begin with “what if”. As you can probably tell by now, I’m not very worried about a Skynet-esque AI. My strong suspicion is that people picture an evil AI because of all the science fiction films and novels they read as children. But fine, let us embrace the possibility — at least for a second — that an evil AI does come into existence. Should we be spinning our wheels designing failsafes into the AI system to prevent such an outcome?

Obviously yes, we should be thinking about such remote possibilities in all system designs. But allow me to make a brief philosophical excursion on why instituting failsafes won’t rescue us from an evil AI. An artificial intelligence that turns evil is a low-probability, high-impact event; in other words, it’s a black swan event. And by definition, black swan events cannot be predicted in advance. It follows that designing a failsafe into the system will not prevent an evil AI from escaping, because the failsafe would have to anticipate the very scenario that, by definition, cannot be anticipated. How can you take preventive measures against an unpredictable event? You can’t, really.

I’ll leave off with an Alan Kay quote I’ve always enjoyed:

It’s easier to invent the future than to predict it

Go invent an AI instead of predicting the unpredictable!