At a symposium at MIT in 2014, Elon Musk warned that “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and he’s like, yeah, he thinks he can control the demon. Doesn’t work out.” Musk is in the same camp as other thinkers like Sam Altman, Nick Bostrom, and Bill Gates, who worry that artificial general superintelligence could create major problems in the long term.
First, let’s make sure we distinguish between ‘general’ AI and ‘narrow’ AI. Narrow AI, like Google search, Facebook timelines, and other data processing, is all around us: in your car, your home, and your pocket. Narrow AI is intelligence designed to do a narrow, specific task, like playing chess against someone. But the same software that beats a person at chess cannot drive a car. So narrow AI can be thought of as any intelligence focused on a single, narrow task.
The thing everyone is now debating is artificial general intelligence. This is what many consider the path to “The Singularity”. Far from your average piece of code, artificial general intelligence is the idea that something can artificially reason its way through problems much as a human does, but at a larger scale. For instance, if you set three tasks in front of an adult human:
1.) Explain to me how a table is built.
2.) Pick out pictures of cats and dogs from these 10 photos.
3.) Beat the first 5 levels of Super Mario.
A human can reasonably do all three, along with thousands of other things we do every day. In addition, we have the capacity to learn new things and apply them in a practical way. If you are terrible at Super Mario, you can practice and get better.
Asking software to accomplish those three very different things today would require programming each task separately. With general intelligence, by contrast, the software would be able to learn each of those tasks on its own, and continue to improve with seemingly no limit.
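The contrast between the two ideas can be sketched in a few lines of toy code. This is purely illustrative, not real AI: every class and function name here is invented for the example. A narrow system is one hard-coded routine per task, while the hypothetical general learner improves at whatever task it practices.

```python
# Toy sketch, not real AI. A narrow "AI" is one routine per domain,
# useless outside it; a general learner can practice any task.

def narrow_chess_move(board):
    # Hard-coded for chess only; cannot drive a car or play Mario.
    return "e2e4"

class GeneralLearner:
    """Hypothetical general agent: gets better at any task it practices."""

    def __init__(self):
        self.skill = {}  # task name -> skill level

    def practice(self, task):
        # Each round of practice raises skill at that one task.
        self.skill[task] = self.skill.get(task, 0) + 1

    def can_do(self, task, difficulty):
        return self.skill.get(task, 0) >= difficulty

agent = GeneralLearner()
for _ in range(5):
    agent.practice("super_mario")

print(agent.can_do("super_mario", 5))    # True: it learned by practicing
print(agent.can_do("build_a_table", 1))  # False: no practice yet
```

The point of the sketch is only the shape of the idea: the narrow function can never leave its domain, while the general learner's task list is open-ended.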
What’s So Scary About General AI?
Well, the argument for why general AI is a problem is that, regardless of its power or level of intelligence (which many would argue is limitless), it is a natural psychopath. If you asked a general AI to make you lots of money, there are a number of ways it could do that. To do it quickly and efficiently, it might hack into a bank and steal millions of dollars for you. Or it might invest in military supply companies and then start a war. The AI will likely want to do things as efficiently as possible, and that often means ignoring empathy and the human condition.
The counterargument is that since we are creating the AI, we can build laws and morals into it, much as we do in our own societies. But if the AI is built in North Korea, those values will be very different from, say, Canada’s. Likewise, have a Donald Trump supporter and a Donald Trump opponent each build in the ideals independently, and you can see how even the initial nuance is subject to quite a lot of debate. However, the initial limits we set are likely not going to matter: something generally superintelligent will understand that limits have been set for it, and will likely be able to turn them off. Especially in a world that is all interconnected, where a general AI might put a version of itself on an unregulated platform and realize it has been limited all this time.
The Network Effect
The thing that makes the human race so amazing is that we are collectively a living, breathing organism. We as a species have done incredible things as a whole, even though no individual knows how to do everything. As a species, we have a vast tree of knowledge and capability that grows every day.
General AI will be able to talk to other general AIs, narrow AIs, and anything synced to a network, which today means everything from phones to dishwashers. So when a self-driving car sees a strangely shaped pothole and swerves safely around it, it will then tell all other cars what it saw and how it maneuvered. That collective intelligence, similar to what the human race has, will grow slowly at first, but eventually every individual instance of AI will be as smart as the collective knowledge as a whole. That’s like walking up to a person who instantly knows everything the human race has ever done or thought. Try having a conversation with that person about politics.
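The pothole scenario above can be sketched as a few lines of toy code. This is purely illustrative and not how any real self-driving fleet works; all names are invented. The point is simply that what one agent observes, every agent on the shared network instantly knows.

```python
# Toy sketch of the network effect: one agent's observation is
# written to shared knowledge, so the whole fleet learns at once.

class SharedKnowledge:
    """A pool of hazards known to every connected car."""

    def __init__(self):
        self.hazards = set()

class Car:
    def __init__(self, network):
        self.network = network

    def observe(self, hazard):
        # What one car learns, the whole fleet learns.
        self.network.hazards.add(hazard)

    def knows(self, hazard):
        return hazard in self.network.hazards

net = SharedKnowledge()
fleet = [Car(net) for _ in range(3)]

# One car sees the strangely shaped pothole...
fleet[0].observe("odd-shaped pothole on Main St")

# ...and every car in the fleet now knows about it.
print(all(car.knows("odd-shaped pothole on Main St") for car in fleet))  # True
```

Humans share knowledge the same way, but through books, teaching, and conversation over years; a networked AI does it in milliseconds.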
Here’s Where It Gets Scary
General AI has the capacity to be above and beyond anything we can conceive of. Try to imagine what a new color looks like, one that lies outside what our eyes can see, beyond the visible spectrum. You can’t, because your brain has never interpreted such a thing and cannot theorize what it would look like. Similarly, if asked to theorize just how smart a general AI could be, we cannot begin to comprehend what that means. The computing capacity of this AI is essentially limitless, and it will be able to do things that are beyond our ability to comprehend.
The main reason behind the fear of general AI is its ability to act without us: accomplishing goals without considering the effect on humans, or treating humans as just another variable to optimize. For example, in an extreme case, an AI might discover a cure for cancer. Cancer kills approximately 595,000 people per year in the U.S. alone. To deliver that cure, it might be willing to kill 100 million people, because the cure will ultimately benefit us as a species globally in the long run. And to be clear, it would not be hurting people on purpose. When you drive your car, you inevitably run over insects and other small life, but you’re not doing it deliberately; you simply need to get where you’re going.
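The cold arithmetic implied by that extreme example can be written out explicitly, using the article’s own figure of roughly 595,000 U.S. cancer deaths per year. This is a back-of-the-envelope sketch of the utilitarian trade-off being described, not a real analysis.

```python
# Back-of-the-envelope: how many years of U.S. cancer deaths would it
# take to "break even" on a hypothetical cost of 100 million lives?

us_cancer_deaths_per_year = 595_000      # figure quoted in the article
hypothetical_cost_in_lives = 100_000_000  # the extreme scenario above

break_even_years = hypothetical_cost_in_lives / us_cancer_deaths_per_year
print(round(break_even_years))  # roughly 168 years, counting U.S. deaths alone
```

A purely efficiency-driven optimizer might accept that trade on a long enough horizon; a human, weighing empathy and rights, almost certainly would not. That gap is exactly the "natural psychopath" problem described earlier.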
And that is the ultimate fear: a general AI will do some incredible things for the species and likely end a great deal of suffering. But it also has the power to create a great deal of suffering, or to ultimately decide that the human race is doing more harm than good. If you simply asked an AI to fix the planet’s global warming and mass extinction, the answer is pretty simple: eliminate the root cause. In this case, humans.
The Long View
It’s not enough to say that general AI is a problem for the next generation, or the one after that. Many experts believe it is coming, so we should be asking ourselves these tough questions now, even if the technology is decades away.
It’s important for this generation to understand that history will likely not look back at this time for its political unrest and wealth gap. It will likely be seen as the start of a new world that, for the moment, is very uncertain. Quite a few people, including Elon Musk, worry about this AI power being too centralized, and he is right to worry. However, we may want to be more concerned that humans may eventually have no power or control at all. And if you think this won’t be real in your lifetime, consider two experiments that took place in the last couple of years. Both Facebook and Google launched versions of AI that, within minutes, started creating their own languages to communicate with other AIs, languages we had no way to interpret.
It’s just the beginning.