
Beware: Taking artificial intelligence too far


Staff Writer

Technology has improved at a rapid pace in the past few centuries.

We went from Morse code to smartphones, from horse-drawn carriages to minivans, from fireplaces to indoor heating systems.

The list of technological advancements is so long that no one could write them all down; by the time they finished, something new would already have been invented.

There is one advancement that mankind has been curious about since John McCarthy coined the term in 1956. Artificial Intelligence (AI) refers to a computer or machine whose intelligence is comparable to a human's.

There is no absolute consensus on how capable a machine must be to count as intelligent; however, many people believe that a machine with true AI would be able to convince a person that it is human rather than a machine. That is exactly the purpose of the Turing Test.

The Turing Test was created by Alan Turing, who is commonly referred to as one of the pioneers of computer science. It involves one person (the judge) asking questions through a computer that connects to two other computers.

One computer is operated by a person; the other operates on its own. The judge has to decide which answers came from the human and which came from the computer. If the judge guesses wrong more often than right, the computer is, by this standard, considered intelligent.
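The pass criterion described above is simple enough to sketch in a few lines of Python. This is a toy illustration, not anything from Turing's paper: it just tallies a judge's guesses against the true labels and applies the "more wrong than right" rule.

```python
def turing_test_verdict(judge_guesses, true_labels):
    """Score a simplified Turing test.

    The machine 'passes' (returns True) if the judge is wrong
    more often than right, per the criterion described above.
    """
    correct = sum(g == t for g, t in zip(judge_guesses, true_labels))
    wrong = len(true_labels) - correct
    return wrong > correct

# Five rounds in which the judge gets every answer backwards:
labels  = ["human", "computer", "human", "computer", "human"]
guesses = ["computer", "human", "computer", "human", "computer"]
print(turing_test_verdict(guesses, labels))  # prints True
```

A judge who merely guesses at random would hover around 50% accuracy, which is why the bar is set at "more wrong than right" rather than a single mistake.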

Most people who build a prototype AI to take this test simply program it to answer questions with a question, or to echo a keyword from the question, in order to fool the judge. Even if one of those prototypes beat the test, I don't think you could call it truly intelligent, because it is just programmed to act a certain way.
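The keyword-echoing dodge described above can be shown in a few lines. This is a hypothetical toy, not any real chatbot: it reflects a long word from the question back at the judge, and falls back to answering a question with a question.

```python
def echo_bot(question):
    """Toy chatbot using the dodge described above: reflect a keyword
    from the question, or answer a question with a question."""
    # Treat any word longer than five letters as a 'keyword'.
    keywords = [w.strip("?.,!").lower() for w in question.split() if len(w) > 5]
    if keywords:
        return f"Why do you ask about {keywords[0]}?"
    return "What makes you say that?"

print(echo_bot("What do you think about consciousness?"))
# prints: Why do you ask about consciousness?
```

The bot never reasons about the question at all, which is exactly why beating the test this way proves so little.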

A truly intelligent computer would be able to reason logically and respond directly to questions without any misdirection. While a computer like this could be beneficial to society in a lot of ways, it's a high-risk, high-reward scenario.

AI optimists might tout possibilities such as robots working dangerous jobs or self-driving cars, and while those ideas are sound, the potential negative consequences of AI far outweigh the benefits.

While AI machines working dangerous jobs could potentially save lives, it would take many jobs away that people rely on to make a living. It could also be a slippery slope to AI machines taking other jobs from people, exacerbating an already weak job market.

That’s the least of humanity’s worries.

If a robot was created that was truly as intelligent as a human, it would be able to rationalize information and react to it. While most people would program these intelligent robots to be friendly and caring, if this technology gets into the wrong hands, the consequences could be apocalyptic.

Even the robots programmed to be friendly and caring could end up becoming corrupt. Just like normal people surrounded by negativity can lose track of their moral compass, an AI robot that can rationalize information and react to it could realize that it is stronger than humans and react by trying to overthrow the human race.

It might sound like science fiction, but some of the most intelligent people in the world like Stephen Hawking and Bill Gates believe that there is some danger involved with artificial intelligence.

Hawking has been quoted as saying that artificial intelligence "could spell the end of the human race." Gates agrees, saying he is "in the camp that is concerned about super intelligence."

Despite all of the potential consequences, we should continue to develop artificial intelligence. As long as we never allow these machines to become more intelligent than humans, there is definitely some benefit with advancing this kind of technology. It is important to understand the potential consequences of AI in order to safely advance into the future.
