
Courtesy of Pixabay

Tech Trends: What Is The 'Terminator Conundrum'?

by Simon Jones | Life & Times | Apr 21, 2017

Billionaire and software engineer Eric Schmidt once said, “The internet is the first thing that humanity has built that humanity does not understand.” What he meant is that the internet is so deep, interconnected and multifaceted that we as a society cannot accurately gauge its effects or its importance.

We might have front-row seats to the latest addition to this group of tech developments. Artificial intelligence is coming into its own in 2017. We are seeing AI applied to more areas of life, such as robotics, self-driving cars, space exploration and industrial mechanics, all of which are benefiting from an infusion of thinking computers. Overall it has been a boon to us: the ability of computers to think and reason more efficiently makes them better at their jobs and makes our lives easier.

But as AI has gotten more and more advanced, a new problem has come to light, and it’s getting harder to ignore. Simply put, we really don’t know how AI works.

Forward Woman Artificial Intelligence Robot, courtesy of Max Pixel

We do know how artificial intelligence works in the sense that we know what it was designed to do and we can see what it does. AI is a system that allows computers to take in information, interpret it and react reasonably without any human interference. We use AI for problem-solving tasks where an ordinary person would not be as fast or as accurate as a computer. Lately, we have experimented with using computers to sift through fake news, and have even given them some creative outlets in the arts.
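To make that loop concrete, here is a minimal, purely illustrative Python sketch of “take in information, interpret it, react”: a toy spam filter that learns its own decision rule from a handful of made-up examples, with no human writing the rule. The features, data and numbers are hypothetical and chosen only for the demonstration.

```python
# A toy sketch (not any real product) of the take-in / interpret / react loop:
# a tiny logistic-regression spam filter that learns its own decision rule.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical examples: each row is
# [num_links, num_exclamations, mentions_free_money]; label 1 = spam, 0 = not.
X = np.array([[5, 8, 1], [0, 1, 0], [7, 4, 1], [1, 0, 0]], dtype=float)
y = np.array([1, 0, 1, 0], dtype=float)

w = rng.normal(size=3)  # weights the system adjusts on its own
b = 0.0

def predict(x):
    """Score in [0, 1]: the machine's 'interpretation' of the input."""
    return 1 / (1 + np.exp(-(x @ w + b)))

# Simple gradient-descent training: the decision rule is learned, not hand-coded.
for _ in range(2000):
    p = predict(X)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# "React without human interference": score a new, unseen message.
new_message = np.array([6.0, 9.0, 1.0])
print("spam probability:", round(float(predict(new_message)), 3))
```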

As the technology develops and the systems computers use to store and process information become more advanced, AI itself grows more complicated. The unforeseen consequence is that we are no longer sure how these smarter computers make the decisions that they do. We can monitor their input and their results, but everything in between is lost down such a deep rabbit hole of computerized intelligence that it is becoming much harder to see how an AI gets from point A to B to C.
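As a small illustration of why that middle stretch is so hard to read, the hypothetical sketch below runs an input through a tiny two-layer network with made-up, untrained weights. The input (point A) and the final score (point C) are easy to describe, but the hidden layer in between is just a list of unlabeled numbers, and in a production model there would be millions of them.

```python
# A toy illustration of the "rabbit hole": input and output are observable,
# but the hidden activations in between carry no human-readable meaning.
import numpy as np

rng = np.random.default_rng(42)

# Made-up weights standing in for a trained model.
W1 = rng.normal(size=(3, 64))   # input features -> 64 hidden units
W2 = rng.normal(size=64)        # hidden units -> one decision score

x = np.array([6.0, 9.0, 1.0])   # point A: an input we can describe in words

hidden = np.tanh(x @ W1)        # point B: 64 anonymous numbers
score = hidden @ W2             # point C: a decision we can observe

print("input:", x)
print("first few hidden activations:", np.round(hidden[:5], 3))
print("decision score:", round(float(score), 3))
# Nothing in `hidden` tells us *why* the score came out the way it did.
```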

So why is this a problem? Well, when a computer system becomes hard to understand, it becomes that much harder to predict and to correct irregularities. We can’t see the decision-making behind an AI’s thinking, so when it makes a poor choice, we won’t know why and won’t be able to fix it. Furthermore, as AI is integrated into more and more systems, the lack of any way to interpret these thought processes could have life-threatening consequences. It also becomes much more difficult to tell when an AI has been altered or compromised.

This leads to something called the “Terminator Conundrum,” a term used by scientists and military personnel to describe what happens when we combine AI we can’t understand or control with technology capable of catastrophic destruction. Because we can’t keep up with the pace of AI development, some scientists are reluctant to put these new systems into critical machines.

It may not be up to them, though. Like the nuclear arms race of the last century, nations are now competing with one another to build bigger and better computer systems and to develop AI that outpaces their rivals. Perhaps the only way to end the proliferation will be for Skynet to finally arrive, but we’ll have to wait and see.