Artificial intelligence is no longer a distant eventuality, but a reality of the modern era. Narrow intelligences, like the personal assistant in your phone or Watson, the Jeopardy-playing computer, exist right now. However, artificial general intelligence, a computer that can think the way we can, remains science fiction, not reality.
Even further off are theoretical “super intelligent” AI constructs. If general AIs pose a philosophical conundrum (what does it mean to think?) then super-intelligent AIs pose an existential one.
Some very smart people, including Stephen Hawking and Elon Musk, have warned us about the dangers posed by theoretical super-intelligent AI. It’s important to define what we mean by “super intelligent”: such an AI would compute at a level that would make the sum knowledge of humanity as trivial, by comparison, as the sum knowledge of all the bees on the planet is to humans.
The danger presented by such an artificial construct can hardly be overstated, for two reasons: the potential for miscommunication between humans and AI, and something called the halting problem.
In computer science terms, the halting problem asks whether a given program, run on a given input, will eventually produce an output or will instead loop forever. Alan Turing proved in 1936 that no general procedure can answer this question in advance: any program you write to test other programs for infinite loops can be turned against itself, forcing it to return a contradictory result.
In simple terms, this means the halting problem is “undecidable”: there is no shortcut to knowing whether a program will hang, short of running it and waiting, perhaps forever. With AI, you encounter a corollary of this problem when you attempt to test whether a program would pose a threat to humanity.
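Turing’s argument can be sketched in a few lines of code. The sketch below assumes a hypothetical oracle function, `halts`, that claims to decide the halting problem; the names `halts` and `paradox` are illustrative, not part of any real library.

```python
def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) halts.

    Turing's proof shows no correct implementation can exist, so this
    placeholder simply refuses to answer.
    """
    raise NotImplementedError("No general halting decider can exist.")

def paradox(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    return "halted"      # oracle says we loop, so halt immediately

# The contradiction: consider paradox(paradox).
# If halts(paradox, paradox) returns True, paradox loops forever.
# If it returns False, paradox halts. Either answer is wrong,
# so no correct halts() can ever be written.
```

Feeding `paradox` to itself forces the oracle into a contradiction either way, which is why the problem is undecidable rather than merely difficult.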
Imagine a researcher tasks a super-intelligent AI with the job of streamlining a national banking system that has become outdated. The AI complies, promptly deleting all digital currency from within that nation. The ensuing chaos collapses the world economy. As far as the AI is concerned, the banking system itself is now perfectly streamlined by no longer operating at all.
Once such AIs are a reality, there is little that can be done to contain them. Sufficiently intelligent programs could always be several steps ahead of any attempts to control them or to shut them off. If you attempted to intervene in their activity, they would think of thousands of ways to stop you before you even began to move toward the off switch.
So, do super-intelligent AIs pose a threat to humanity? Yes, they do. And the world’s best researchers would be wise to proceed with caution before activating any intelligence so vast that it would dwarf its very creators.