Exclusive to OpEdNews:

OUR FINAL INVENTION -- A Book Report on Artificial Intelligence

Headlined to H4 4/29/14


(image by James Barrat)

I have just completed James Barrat's new book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

Before you read this report, please check out the footnote at the end dealing with terms and nomenclature.(1)

Our Final Invention comments on and challenges Ray Kurzweil's book, The Singularity Is Near, so it's a must-read for anyone who participates in AI forums or works in the field. Kurzweil's book came out in 2005, so Our Final Invention has eight years of perspective to build upon.

Like Kurzweil, Barrat feels the Singularity is only a matter of time; in fact, the book goes into the reasons why he feels it's probably unavoidable. Unlike Kurzweil, Barrat is not as optimistic about the Singularity's safety; in fact, he itemizes how things can become quite unfriendly.

Barrat's take on the actual event -- when AI reaches human-level intelligence and then moves on to superintelligent levels -- centers on what he says is termed the "busy child." Eliezer Yudkowsky, who was interviewed for the book along with Ray Kurzweil and Arthur C. Clarke, best described the busy child many years ago in his provocative article, "Staring into the Singularity." Of course, probably the very first person to describe a self-improving machine was Irving John Good in his 1965 article, "Speculations Concerning the First Ultraintelligent Machine."

The most famous paragraph of Good's paper is the following, in which he attempted for the first time to define what we now call Superintelligent AI, or SAI.

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously."

Barrat points out that the transition from human-level AI to Superintelligent AI could happen quickly, maybe even in days or milliseconds. Perhaps an emerging "Busy Child" would even pretend to fail the Turing Test so that it could compute its escape strategy before humans even knew of its capabilities. In Barrat's book, the message is clear: we should give Ray Kurzweil all due respect for making us optimistic about the Singularity, but we should proceed with extreme caution.

In the book, and in an interview he gave afterward, Barrat says that it was Arthur C. Clarke who prompted him to seriously consider the downside of having superintelligent machines share a planet with us.

With at least three major players overtly or covertly funding AI development -- IBM, Google and DARPA -- we should be especially worried about the funding that DARPA provides to developers, because DARPA, being part of the U.S. military, will inevitably seek to weaponize AI. After all, as Barrat points out, the "D" in DARPA does stand for "Defense." The author also states:

"Despite Google's repeated demurrals through its spokespeople, who doubts that the company is developing artificial general intelligence? In addition to Ray Kurzweil, Google recently hired former DARPA director, Regina Dugan."

So with hundreds of governments and corporations researching and funding Strong AI, it would be foolish NOT to assume that IBM, Google and DARPA are leading the pack. Thus, folks, you can also be sure that these multi-billion-dollar entities have assigned at least one reader to this very forum to see what all us "wing-nuts" are up to.

It is certain that, while people like Eliezer Yudkowsky -- and I would include Ray Kurzweil in this group -- are attempting to build friendly AI, there are going to be the usual dark forces and meatheads who will attempt to kill and maim with it. And all this sounds fine and dandy until one considers that SAI will be much more lethal than mere hydrogen bombs.

Unfortunately, if the U.S. military-industrial complex ever succeeds in building human-level AI, or Strong AI (as Kurzweil mostly calls it in The Singularity Is Near), there is little chance they will be able to control it. AND there is absolutely NO chance they will be able to control it if the "Busy Child" makes its way to superintelligence. If this happens, the SAI will be able to "get out of the box" -- meaning attach itself to some or all of the computer networks in the world, and more.

How do we know this? We know it because the experiment has already been tried with at least one human genius. Games have been invented to see whether a genius-level human can convince normal-IQ humans -- by words alone, over a text channel -- to let him out of a sealed "box." If a mere human-level genius can devise ways of escaping, imagine what a superintelligent entity could do.

Barrat cites the "Stuxnet" virus as another example of what we can expect from SAI.

"The consensus of experts and self-congratulatory remarks made by intelligence officials in the United States and Israel left little doubt that the two countries jointly created Stuxnet, and that Iran's nuclear development program was its target."

The point is this: if we want to learn how a superintelligent system may very well act, we should, as Barrat writes, "almost thank malware developers for the full dress rehearsal of disaster that they're leveling at the world. Though it's certainly not their intention, they are teaching us to prepare for advanced AI."


James Jaeger is an award-winning filmmaker with over 25 years' experience producing, writing and directing feature motion pictures and documentaries.


The views expressed in this article are the sole responsibility of the author and do not necessarily reflect those of this website or its editors.
