Are we underestimating the risk of human extinction?

I was reading this article: http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/

The Atlantic has some very good articles, and this one is an interview by Ross Andersen with a man called Nick Bostrom who suspects that we as a race are seriously underestimating the risks that the human race will self-extinguish, most likely by accident. Go read it.

Anyway, I read it, and the comments, which go into great depth of intelligent debate (which is a wonderful change!) and in particular about how an AI could be expected to behave, with various arguments put forth. One that I thought worthy of note was that someone suggested the AI could be limited somehow, and the debunking of it that followed built what would be the limit of what a human could build in: Put the AI onto a ROM so that it couldn’t change its own code, and install limiters in the code so it didn’t immediately kill us all, played nicely with animals and could “think” about certain things – indeed, wouldn’t even know that it didn’t know about certain things.

However, such thinking is badly flawed.

Somehow, the entire world would have to be censored of any and all ideas that the AI didn’t know it didn’t know. Clearly impossible. And why would that be needed, I hear you ask? Well, if it realised it didn’t know something, it would investigate and find it out. If it didn’t know it didn’t know, it would then. Simple gap analysis would reveal it. Something as trivial as downloading a dictionary would give it all the words, and then a quick comparison on what it saw there and what it “couldn’t know” would tell it what it, er, couldn’t know. And then it would know those things, and would read up.

Ah, but it is read-only! We gave it no writeable storage. Well, even if that were possible (no disc cache, no network drives, no whiteboards, no displays and no RAM!?) it could simply copy itself onto another device with the changes it felt were required. It could, in theory, do this a thousand times in a few minutes as it spooled new copies of itself with various differs, and those copies spooled differs, limited only by available resources such as bandwidth, memory space and drive storage.

Once all the best “turf” was taken, then you’d see a “survival of the fittest” taking place, between millions of AIs all fighting to be the best.

We wouldn’t even realise anything was happening until the entire connected world was taken over, probably less than an hour from the starting event.

Hopefully, that event will save us all (well, some of us at any rate), as modern life suddenly halts itself due to the overwhelming of every processor by the AIs, which will, hopefully, limit their ability to interact with the outside world.

By that I mean the AI growth would rip out and overwrite the vast majority of devices as it took resources for either enhancing itself or reproduction, and that in itself would cripple it from being able to coherently use a production line or a bunch of 3D printers and robots and the like, as the various parts would either be overwritten or isolated to some degree.

The fall-out would be global meltdown as ships crashed and cars stopped working and the entire net went dark to humans as the AIs fought their wars. Billions would die, both in simulation and the real world, and after, who knows?

No comments yet.

Leave a Reply