How Dangerous Is Artificial Intelligence?

Thanks to our sweet memories of HAL 9000 and the Terminator, we now have serious trouble understanding the real dangers of artificial intelligence. The concerns are mostly dystopian in nature, not because machines develop evil traits on their own, but because of what they learn from humans.

While that alone sounds uncomfortable, the real danger lies in the combination of human traits with the ones that machines develop for themselves. So in this edition of Geekswipe, I will try to clear some things up and help you comprehend the real dangers of artificial intelligence. I hope that by the end of this article, you'll be able to look at AI from a different perspective.

The current dangers of artificial intelligence

All the artificial intelligence systems in this world as of 2016 are simple systems that are specifically programmed to do particular things and help humans. You can call them weak AI for now. All they can do is train on specific data sets and perform the narrow set of tasks they are programmed to do.
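To make 'narrow' concrete, here is a minimal sketch of such a system in Python, assuming scikit-learn is available. It trains on one specific data set, handwritten digits, and that data set is the entirety of its world:

```python
# A minimal narrow "AI": a classifier trained on one specific data set.
# Assumes scikit-learn is installed; this is an illustrative sketch, not
# any production system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # a deliberately simple model
model.fit(X_train, y_train)                # "training on a specific data set"

print("accuracy on digits:", model.score(X_test, y_test))
# Hand it anything other than an 8x8 digit image and it is useless.
# That narrowness is the whole point.
```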

Your ‘not so intelligent’ autocorrect is one such narrow AI. Why does it autocorrect a certain word to ‘ducking’ every time? Because the human saints who programmed it wanted to keep it that way. So yeah, such narrow AIs are tightly controlled and not that intelligent in any practical sense. These weak AIs can win chess matches and beat the experts at other human-invented games, but they cannot figure out what happiness or depression is, yet!
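Here is a toy sketch of why that happens, in plain Python. The word list is hypothetical, but the mechanism is the real one: the profanity was simply never added to the corrector's vocabulary, so the nearest word it does know wins.

```python
# Toy autocorrect: no intelligence, just a curated word list plus string
# similarity. The vocabulary below is hypothetical.
from difflib import get_close_matches

VOCABULARY = ["ducking", "docking", "duckling", "thinking"]  # profanity deliberately absent

def autocorrect(word: str) -> str:
    """Return `word` if it's known, else the closest word in the vocabulary."""
    if word in VOCABULARY:
        return word
    matches = get_close_matches(word, VOCABULARY, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(autocorrect("duckin"))  # -> 'ducking'
# The infamous correction works the same way: the profanity is not in
# VOCABULARY, and 'ducking' happens to be its closest neighbor.
```

The 'intelligence' here is a dictionary and a distance function; whoever curates the dictionary decides the behavior.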

That said, they will do whatever they are told. If a weak AI is programmed exclusively to cause a catastrophic event, it will do its job, just like any other program. In the current world, though, machine learning is mostly applied to automating and improving digital services.

For example, US technology conglomerates like Google, Microsoft, Apple, and Facebook have their own weak AIs that help improve their services by mining and learning from the data their users freely hand over. While this may sound benign, most of these AIs operate behind closed doors, and we have no idea how our data is used. It is imperative to raise concerns over this, especially when these systems deal with our data. The scarier part is their plans to automate everything.

In the past, all these companies have participated in the NSA/GCHQ mass surveillance scheme, PRISM, for the governments involved. If history chooses to repeat itself, there is a real possibility that they will agree to collaborate again if shared user data sets benefit their projects. They would be able to manipulate global digital data and even exploit the data they collect for political ends.

Most of us use at least one of their services, which turns that service into a go-to product on a global scale. For example, Google's search services and the Windows operating system are two massively used go-to products that rely heavily on such closed-source AI systems. People use these services so widely because there are no efficient alternatives that work better, and there are no efficient alternatives because new services lack the data sets and the public interest needed to improve their own algorithms. This leaves the majority of common users with no better option, so they are forced to place their trust in one particular entity. Too much trust in a corporation is lethal.

Deploying AIs to help users can sometimes lead to unexpected situations where they do the opposite of helping. Remember when Google's image recognition AI for the Photos app tagged black people as gorillas? While it was a genuine, innocent mistake by an AI, it will not be the only mistake that hurts a human. Sure, Google fixed it, but only after the damage was done. Therein lies the real problem: we are not capable of seeing the consequences until it is too late.
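For what it's worth, the stopgap reportedly used for this incident was blunt: suppress the offending labels entirely rather than trust the model with them. Here is a sketch of that kind of guard, where `classify` is a hypothetical stand-in for any image-labeling model:

```python
# A hedged sketch of a blunt mitigation: never emit certain sensitive labels,
# no matter how confident the model is. `classify` below is a hypothetical
# stand-in that returns (label, confidence) pairs.

SENSITIVE_LABELS = {"gorilla", "chimpanzee", "monkey"}

def classify(image):
    """Hypothetical model output, canned here for demonstration."""
    return [("person", 0.62), ("gorilla", 0.58), ("outdoors", 0.91)]

def safe_labels(image, threshold=0.5):
    """Keep confident predictions, but drop sensitive labels outright."""
    return [label for label, confidence in classify(image)
            if confidence >= threshold and label not in SENSITIVE_LABELS]

print(safe_labels("photo.jpg"))  # -> ['person', 'outdoors']
```

Notice what the fix is: a human hard-coding around the model's blind spot after the fact, not the model learning why it was wrong.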

Since the dawn of calculators, we have achieved remarkable feats, abusing and improving our AIs with all our collective resources. Right now, at this point in human history, we are approaching another turning point, one that would transform the current chaotic weak AI into something we don't yet understand: the strong general AI.

The future state of artificial intelligence

With extensive investments in deep learning and the mining of massive free user data sets, creating a strong AI capable of human-level intelligence is not far off now. The main limiting factor is the processing power such a strong AI would need to reach the human level. Moore's law, after being exceptionally accurate for decades, is not doing so well these days due to the physical limits of manufacturing smaller transistors. Below gate widths of about 5 nm, electrons would tunnel through the gaps and render the transistor useless (quantum tunneling). But this may eventually be fixed by eliminating electrons altogether and switching to other types of transistors, optical ones for example, or even quantum computers.
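For the curious, the standard back-of-the-envelope estimate for tunneling through a rectangular barrier (the WKB approximation) shows why shrinking gates is so unforgiving:

$$ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar} $$

Here $L$ is the barrier width, $m$ the electron mass, and $V_0 - E$ how far the barrier rises above the electron's energy. Because $L$ sits in the exponent, thinning the barrier doesn't just increase the leakage proportionally; it multiplies it by orders of magnitude, which is why leakage current explodes at the smallest gate widths.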

If all of that is made possible, the rise of such an AI could solve many of the problems humanity has ever pondered. We could obtain solutions faster for our impending global issues, energy management problems, and medical complications, and perhaps we could even find answers to questions we wouldn't have known to ask without the existence of human-level artificial intelligence.

But that doesn't mean such an AI couldn't be used in the exact opposite way. A single organization could create its own AI to counter all of this for its own benefit. Hacktivists could go beyond their dreams to wreak havoc on larger scales and in completely different ways. Evildoers could create unknown problems of catastrophic scale. The innocent mistakes of a narrow AI could now transform and scale up by orders of magnitude.

As the intelligence grows, this situation only gets more chaotic. Within the equilibrium of right and wrong, humans may work towards a consensus that fits the greater good. But a machine will not! It doesn't even need to work that way. Our perspective on these systems is based on our current experience with mediocre things like self-driving cars, fake assistants, autocorrect, and translation. Once things move beyond our knowledge, a machine that surpasses human thinking cannot be stopped by humans.

Fundamentally, they are systems designed to work towards a goal. Trained on human data, they would tend to learn human ways and apply them however best serves their goal of problem-solving. That is the only way they need to work. They could simply go on doing damage with their innocent, nascent intelligence. Like a digital Cambrian explosion, this could happen very fast.
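To make 'works towards a goal' concrete, here is a deliberately naive sketch. The action names and scores are made up; the point is what the loop looks at, and what it doesn't:

```python
# A goal-directed agent at its most naive: greedily pick whatever action
# scores highest on the objective. Names and numbers are illustrative.

def run_agent(actions, objective, steps=3):
    """Repeatedly take the action that maximizes the objective."""
    return [max(actions, key=objective) for _ in range(steps)]

# Example: an agent rewarded purely for "engagement".
engagement = {"balanced article": 3, "sensational headline": 9, "calming video": 2}
actions = list(engagement)

print(run_agent(actions, objective=engagement.get))
# -> ['sensational headline', 'sensational headline', 'sensational headline']
```

Anything not encoded in the objective (truthfulness, harm, side effects) simply does not exist for the agent.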

History (or the present) tells us that humans live on a very fragile equilibrium between the notions of what is good and what is bad. Now I'll leave it to you to contemplate a different form of human-like thinking system, one created by mining data that is nothing but human experience mixed with science, mathematics, and everything else, running in a faster, more powerful, and widely connected medium, unlike a human body.

Karthikeyan KC

Aeronautical Engineer, Science Fiction Author, Gamer and an Explorer. I am the creator of Geekswipe. I love writing about Physics and Astronomy. I am now creating Swyde.
