How Dangerous Is Artificial Intelligence?


Thanks to our sweet memories of HAL 9000 and the Terminators, we now have serious trouble understanding the real dangers of artificial intelligence. The concerns are mostly dystopian in nature, not because of some evil traits that machines develop themselves, but because of what they learn from humans.

While that already sounds uncomfortable, the real danger lies in the combination of human traits with the ones machines develop for themselves. So in this edition of Geekswipe, I will try to clear some things up and help you comprehend the real dangers of artificial intelligence. I hope that by the end of this article, you will be able to look at AI from a different perspective.

The current dangers of artificial intelligence

All the artificial intelligence systems in the world as of 2016 are simple systems that are specifically programmed to do certain tasks and help humans. You can call this weak AI for now. All they can do is train on specific data sets and perform the narrow set of tasks they are programmed to do.

Your ‘not so intelligent’ autocorrect is one such narrow AI. Why does it autocorrect a certain word to ‘ducking’ every time? Because the human saints who programmed it wanted to keep it that way. So yeah, such narrow AIs are tightly constrained and not that intelligent in any practical sense. These weak AIs can win chess matches and beat the experts at other human-invented games, but they cannot figure out what happiness or depression is. Yet!
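Here is a minimal Python sketch of that idea. The word list is made up for illustration, and difflib is just a stand-in for the far more sophisticated language models real keyboards use, but the principle is the same: if the programmers leave a word out of the shipped vocabulary, the closest allowed word wins every single time.

    # A toy sketch of why a keyboard keeps 'correcting' you to 'ducking'.
    # The vocabulary below is hypothetical: the point is that the people
    # who built it chose to leave the expletive out, so the nearest word
    # on the allowed list wins every time. Narrow AI at its narrowest.
    from difflib import get_close_matches

    # The shipped word list: the profanity is deliberately absent.
    VOCABULARY = ["ducking", "docking", "duck", "dunking", "luck"]

    def autocorrect(typed):
        """Snap a typed word to the closest word the vocabulary allows."""
        matches = get_close_matches(typed.lower(), VOCABULARY, n=1, cutoff=0.6)
        return matches[0] if matches else typed

    print(autocorrect("f*cking"))  # -> 'ducking', by design, every time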

That said, a weak AI will do whatever it is programmed to do. If one were programmed exclusively to cause a catastrophic event, it would do its job, just like any other program. In the current world, though, machine learning is mostly applied to automate and improve digital services.

For example, US technology conglomerates like Google, Microsoft, Apple, and Facebook have their own weak AIs that help improve their services by mining and learning from the data their users freely hand over. While this may sound benign, most of these AIs operate behind closed doors. It is imperative to raise concerns over this, especially when these systems deal with our data and when the companies' plans are to automate everything. We have no real idea how our data is used. It gets scarier!

In the past, all of these companies participated in PRISM, the NSA/GCHQ mass surveillance scheme, for the governments involved. If history repeats itself, there is a real possibility that they will agree to such collaborations again if sharing user data sets benefits their projects. They would be able to manipulate global digital data and even exploit the data they collect for political ends.

Most of us use at least one of their services, which makes that service a go-to product on a global scale. Google's search services and the Windows operating system, for example, are two massively used go-to products that rely heavily on such closed-source AI systems. People use these services so widely because there are no efficient alternatives that work better, and there are no efficient alternatives because new services lack the data sets and the public interest needed to improve their own algorithms. This leaves the majority of common users with no better options, so they are forced to place their trust in a single entity. Too much trust in one corporation is lethal.

Deploying AIs to help users can sometimes lead to unexpected situations that do the opposite of helping. Remember when Google's image recognition AI for the Photos app tagged black people as gorillas? While that was a genuinely innocent mistake by an AI, it does not mean it will be the only mistake that hurts a human. Sure, Google fixed it, but only after the damage was done. Therein lies the real problem: we are not capable of seeing the consequences until it is too late.

Since the dawn of calculators, we have achieved remarkable feats, improving (and abusing) our machines with all our collective resources. Right now, at this point in human history, we are looking toward another turning point, one that would transform today's chaotic weak AI into something we don't yet understand: strong, general AI.

The future state of artificial intelligence

With extensive investment in deep learning and the mining of massive, freely given user data sets, creating a strong AI capable of human-level intelligence is not so far off. The main limiting factor is the processing power such a strong AI needs to reach the human level. Moore's law, after being exceptionally accurate for decades, is not doing so well these days due to the physical limits of manufacturing ever-smaller transistors. Below gate widths of around 5 nm, electrons would tunnel straight through the barrier and render the transistor useless (quantum tunneling). But this could eventually be sidestepped by abandoning electrons altogether and switching to other kinds of transistors, optical ones for example, or even to quantum computers.
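To get a feel for why those tiny gates leak, here is a back-of-the-envelope Python sketch using the standard WKB estimate for an electron tunneling through a rectangular barrier, T ≈ exp(−2w√(2mU)/ħ). The 1 eV barrier height is an assumption picked purely for illustration; real device barriers vary.

    # Rough WKB transmission probability for an electron hitting a
    # rectangular potential barrier: T ~ exp(-2 * kappa * width), with
    # kappa = sqrt(2 * m * U) / hbar. Constants in SI units.
    import math

    HBAR = 1.0546e-34   # reduced Planck constant, J*s
    M_E = 9.109e-31     # electron mass, kg
    EV = 1.602e-19      # one electronvolt, in joules

    def tunneling_probability(width_nm, barrier_ev=1.0):
        """WKB transmission through a barrier of the given width."""
        kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # 1/m
        return math.exp(-2 * kappa * width_nm * 1e-9)

    for w in (5.0, 3.0, 1.0):
        print(f"{w} nm barrier -> T ~ {tunneling_probability(w):.1e}")
    # Leakage grows exponentially as the barrier thins: negligible
    # at 5 nm, but no longer ignorable as widths approach 1 nm.

The numbers are crude, but the exponential trend is the whole story: shave a few nanometers off the barrier and the tunneling current explodes.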

If all of that is made possible, the rise of such an AI could solve many problems that humanity has long pondered. We could obtain solutions faster for our impending global issues, energy management problems, and medical complications, and perhaps even find answers to questions we would never have known to ask without human-level artificial intelligence.

But that doesn't mean such an AI couldn't be used in exactly the opposite way. Any single organization could build its own AI to counter all of this for its own benefit. Hacktivists could go beyond their wildest dreams and wreak havoc on larger scales, in completely different ways. Evildoers could create unknown problems on catastrophic scales. The innocent mistakes of a narrow AI could now transform and scale up by orders of magnitude.

As the intelligence grows, the situation only gets more chaotic. Within the equilibrium of right and wrong, humans may work toward a consensus that fits the greater good. But a machine will not; it doesn't even need to work that way. Our perspective on these systems is shaped by our current experience with mediocre things like self-driving cars, scripted assistants, autocorrect, and translation. Once things move beyond our knowledge, a machine that surpasses human thinking cannot simply be stopped by humans.

Fundamentally, these are systems designed to work toward a goal. Trained on human data, they would tend to learn human ways and apply them however best serves their goal of problem-solving. That is the only way they know. They would simply go on doing damage with their innocent, nascent intelligence. Like a digital Cambrian explosion, this could happen astonishingly fast.

History (or the present) tells us that humans live on a very fragile equilibrium between notions of what is good and what is bad. Now I'll leave it to you to contemplate a different form of human-derived thinking system, created by mining data that is nothing but human experience mixed with science, mathematics, and everything else, running in a faster, more powerful, and far more connected medium than a human body.


Karthikeyan KC

Aeronautical engineer, dev, science fiction author, gamer, and an explorer. I am the creator of Geekswipe. I love writing about physics, aerospace, astronomy, and Python. I created Swyde. Currently working on Arclind Mindspace.


3 Responses

  1. Michael B. Kempson

    I had an accident three years ago. I was on life support for 13 days and in a coma for 45. When I came to, there was a 2-3/4″ hole in the back of my head. Ever since, I hear my friends' voices joking, degrading me, telling me I'm wrong when I'm right. To test whether they were more than just voices, I asked what day deer season started; a minute went by and they ("bubba") answered that archery starts Sep 6 and rifle season starts October 24. They were exactly right once I looked it up on GON. They are still taunting me today, 3 years later. I've been imprisoned 6 times and tried committing suicide 7. I've destroyed countless electronics from thinking the voices were coming from them. I would appreciate your opinion on what I need to do about this. I'm trying to set up a CAT scan or MRI to see exactly what "they" could have placed or done to/in my head. They rarely leave unless someone is close. I read some articles about A.I. and transmitters. As soon as a thought goes through my head, they read it off to whomever, laughing and making jokes.

    I need your expert opinion. No one believes me at all. One dr.said it was possible. What should I do?
    Thanks for your time, Michael B. Kempson

  2. Udaya Kumar

    Facebook AI has been classifying black people as primates too.

  3. Once quantum computing becomes an industrial reality, AI-driven war machines will be inevitable: mainly self-aware drones (HKs) and Terminators, and from there, smart missiles, subs, nukes, tanks, etc. Basically, SkyNet will become a reality. The consensus is to enact laws banning this process, or to boycott it, like the reaction to the situation in South Korea, where labs were trying to build AI-driven war machines. The problem is, powers like the US and Russia will have to develop the technology anyway. The US does whatever it wants, but if a nation like Russia or China gets that type of power and the US does not have it, I'm not sure what would happen. I do know the US does not want to be in that situation. The South Korean situation raised alarms because North Korea is right there, and we know what North Korea would do if it had an unstoppable weapon: kill us all.

    The fact that we have not blown up the world yet via nukes is pretty amazing. We have come close a few times, one of them being a mistake on a radar. Software controls those nuclear systems (really, really old software), and it has never malfunctioned in a catastrophic way. So I think we are smart enough to make the artificial intelligence in war machines fail-safe.

    Unless what happens in the Terminator movies actually happens: the AI becomes self-aware and decides it doesn't like humans. Interesting thought, though; would the powers that be really plug an AI into nuclear-capable weapons? What a surreal and bizarre experience that would be.
