Could AI be the end of humans?

Something of an odd title, I’ll admit, but academics at the Centre for the Study of Existential Risk, based in Cambridge, will be considering the risk of a robot uprising as part of research into the risks of Biotechnology, Nanotechnology, Climate Change, Artificial Life and Artificial Intelligence.

Technology is everywhere these days – computers in your washing machine, incredible computing power in your mobile phone, and those television remote-control watches that you can use to really annoy the sales assistants in Currys. No, seriously, technology is everywhere and it’s getting more and more powerful all the time.

We’ve become quite familiar with the term Artificial Intelligence, especially given that it is widely used in computer games. AI is the discipline of making machines act as if they can think: having characters intelligently react to a player’s actions in a game, for example, or designing systems that can hold a conversation with a human being, perhaps for the purpose of offering technical support.
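
To see what I mean by “acting as if it can think”, here’s a deliberately simple, made-up sketch of the kind of thing a game character might do. The names and rules are entirely hypothetical – the point is just that the “intelligence” is a set of hand-written rules:

```python
# A toy, hypothetical example of game-style "AI": a guard character that
# reacts to what the player does, using nothing more than pre-programmed rules.
def guard_reaction(player_action: str, guard_health: int) -> str:
    """Pick the guard's next move based on simple, hand-written rules."""
    if player_action == "attack" and guard_health < 20:
        return "flee"            # low on health: run away
    if player_action == "attack":
        return "fight back"      # otherwise stand and fight
    if player_action == "sneak":
        return "search the area" # heard something suspicious
    return "patrol"              # nothing happening: carry on as normal

print(guard_reaction("attack", guard_health=15))  # -> "flee"
```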

In the end, though, Artificial Intelligence is really just about a computer doing what it’s been programmed to do. It might do it very quickly, and it might seem to be very clever, but it’s still programmed. I’m no academic, but my university degree was in Artificial Intelligence and, in my view, there’s little risk here so long as programmers do their jobs properly. An AI-controlled military drone, for example, could misidentify a target for an airstrike, but one would hope a system like that would have some sort of override in case it started going haywire!

Artificial Life, on the other hand, with its emphasis on simulating the functions of living organisms, is a more likely candidate for rebellion. AL introduces an element of learning and, in some systems, the ability for one generation of software to write the next generation. The idea here is that each successive generation is then tested against some criteria to see if the new version is “better” than the old one. If it is, it survives. If it’s not, it’s scrapped. It’s sort of an electronic evolution… survival of the fittest.
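
If you’re curious what that generation-by-generation process looks like, here’s a bare-bones, hypothetical sketch. Real Artificial Life systems are far more sophisticated; the fitness and mutation functions here are just stand-ins to show the “test, keep the best, scrap the rest” loop:

```python
import random

# A rough, hypothetical sketch of "electronic evolution": each generation of
# candidates is scored against some fitness criteria; the best survive and
# produce (slightly mutated) offspring, and the rest are scrapped.
def evolve(population, fitness, mutate, generations=100, survivors=10):
    for _ in range(generations):
        # Test every candidate and keep only the fittest few
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:survivors]
        # The survivors "write" the next generation, with small random changes
        population = [mutate(random.choice(parents))
                      for _ in range(len(population))]
    return max(population, key=fitness)

# Toy usage: the "programs" are just numbers, and fitness is closeness to 42
best = evolve(population=[random.uniform(0, 100) for _ in range(50)],
              fitness=lambda x: -abs(x - 42),
              mutate=lambda x: x + random.gauss(0, 1))
print(round(best, 2))  # ends up close to 42
```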

I should say I still don’t think it’s at all likely that we’ll see some sort of robot uprising, but I guess there’s the potential that, if left unchecked, generations of machine-created software could lead to a version that decides it doesn’t need us any more. Imagine, for example, if the software that controls the power stations around the country decided it didn’t want to route electricity to the National Grid any more… it could make better use of it by increasing the amount of power being sent to the computers themselves. Or what if that military drone from the AI example decided it didn’t want to take orders from the pilots on the ground, and destroyed either the airbase the pilots are working from or the transmitters relaying their instructions?

Really, I think mankind would have to mess up royally to allow machines to get to that stage. Surely any machine or software with a risk attached would have some sort of manual override… wouldn’t it?
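
For what it’s worth, the kind of override I have in mind is nothing exotic. Something along these lines (entirely hypothetical, of course), where a switch controlled by a human always beats whatever the software has decided:

```python
# A hypothetical illustration of a manual override: whatever the automated
# system decides, a human-operated kill switch wins.
class DroneController:
    def __init__(self):
        self.kill_switch_engaged = False

    def engage_kill_switch(self):
        """Flipped by a human operator, not by the software itself."""
        self.kill_switch_engaged = True

    def execute(self, automated_decision: str) -> str:
        if self.kill_switch_engaged:
            return "return to base"  # the override trumps everything
        return automated_decision

controller = DroneController()
controller.engage_kill_switch()
print(controller.execute("engage target"))  # -> "return to base"
```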

What do you think of the idea that researchers are to study the risks of technology? Does it make sense, or is it a waste of time and money? Can you see a day when machines become smarter than humans, and decide they don’t need us? Or is that just the stuff of science fiction? Let us know your thoughts in the comments.
