Exclusive to OpEd News:
OpEdNews Op Eds, 4/29/14

OUR FINAL INVENTION -- A Book Report on Artificial Intelligence

By James Jaeger



(Image by James Barrat)

I have just completed James Barrat's new book, Our Final Invention: Artificial Intelligence and the End of the Human Era.

Before you read this report, please check out the footnote at the end dealing with terms and nomenclature.(1)

Our Final Invention comments on and challenges Ray Kurzweil's book, The Singularity is Near, so it's a must-read for anyone who participates in AI forums or works in the field. Ray's book came out in 2005, so FINAL INVENTION has 8 years of perspective to build upon.

Like Kurzweil, Barrat feels the Singularity is only a matter of time; in fact, the book goes into the reasons why he feels it's probably unavoidable. Unlike Kurzweil, Barrat is not as optimistic about the Singularity's safety; in fact, he itemizes how things could become quite unfriendly.

Barrat's take on the actual event -- when AI reaches human-level intelligence and then moves on to superintelligent levels -- is what he terms the "busy child". (See the additional thread on this.) Eliezer Yudkowsky, who was interviewed for the book along with Ray Kurzweil and Arthur C. Clarke, best described the busy child years ago in his provocative article, "Staring into the Singularity". Of course, probably the very first person to describe a self-improving machine was Irving John Good, in his 1965 article "Speculations Concerning the First Ultraintelligent Machine".

The most famous paragraph of Good's paper is the following, in which he attempted, for the first time, to define what we now call Superintelligent AI, or SAI.

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously."

Barrat points out that the transition from human-level AI to Superintelligent AI could happen quickly, perhaps in days or even milliseconds. An emerging "Busy Child" might even pretend to fail the Turing Test so that it could compute its escape strategy before humans knew of its capabilities. In Barrat's book the message is clear: we should give Ray Kurzweil all due respect for making us optimistic about the Singularity, but we should proceed with extreme caution.
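To see why the jump could be so abrupt, here is a minimal toy sketch of the feedback loop Good describes: a machine that gets smarter also gets better at making itself smarter, so capability compounds. This is my own illustration, not anything from Barrat's book; the 10% gain per cycle and the "human-equivalent" units are invented assumptions.

# Toy model of Good's "intelligence explosion" feedback loop.
# Illustrative only: the gain per cycle and the units are assumptions,
# not figures from Barrat or Good.

def intelligence_explosion(start=1.0, gain_per_cycle=0.10, cycles=100):
    """Return capability (in assumed 'human-equivalent' units) after each cycle."""
    capability = start
    history = []
    for _ in range(cycles):
        # A smarter system makes a proportionally larger improvement to itself.
        capability += gain_per_cycle * capability
        history.append(capability)
    return history

trajectory = intelligence_explosion()
print(f"After 10 cycles:  {trajectory[9]:.1f}x human level")    # ~2.6x
print(f"After 100 cycles: {trajectory[99]:,.0f}x human level")   # ~13,781x

Nothing in this compounding argument says a "cycle" has to take longer than the milliseconds a computer needs to run it, which is why Barrat takes the speed of the transition so seriously.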

In the book, and in an interview he later gave (at http://youtu.be/Gt0Jf-79uOE), Barrat says it was Arthur C. Clarke who prompted him to seriously consider the downside of superintelligent machines sharing a planet with us.

With at least three major players overtly or covertly funding AI development -- IBM, Google and DARPA -- we should really be worried about the funding DARPA provides to developers, because DARPA -- being part of the U.S. military -- will inevitably seek to weaponize AI. After all, as Barrat points out, the "D" in DARPA does stand for "Defense". The author also states:

"Despite Google's repeated demurrals through its spokespeople, who doubts that the company is developing artificial general intelligence? IN addition to Ray Kurzweil, Google recently hired former DARPA director, Regina Dugan."

So, in addition to the hundreds of governments and corporations researching and funding Strong AI, it would be foolish NOT to assume that IBM, Google and DARPA are leading the pack. Thus, folks, you can also be sure that these multi-billion-dollar entities have assigned at least one reader to this very forum to see what all of us "wing-nuts" are up to.

It is certain that, while people like Eliezer -- and I would include Ray Kurzweil in this group -- are attempting to build friendly AI, there will be the usual dark forces and meatheads who will attempt to kill and maim with it. And all this sounds fine and dandy until one considers that SAI will be much more lethal than mere hydrogen bombs.

Unfortunately, if the U.S. military-industrial complex ever succeeds in building human-level AI or Strong AI (as Ray mostly calls it in The Singularity is Near), there is little chance they will be able to control it. AND there is absolutely NO chance they will be able to control it if the "Busy Child" makes its way to superintelligence. If this happens, SAI will be able to "get out of the box" -- meaning attach itself to some or all computer networks in the world, and more.

How do we know this? We know this because the experiment has already been tried with at least one human genius. Games have been invented to see whether a genius-level human, using words alone in a text-only conversation, can convince normal-IQ humans to let him achieve some specific goal, such as being let out of the "box". Thus, if a mere human-level genius can devise ways of escaping, imagine what a superintelligent entity could do.

Barrat cites the "Stuxnet" worm as another example of what we can expect from SAI.

"The consensus of experts and self-congratulatory remarks made by intelligence officials in the United States and Israel left little doubt that the two countries jointly created Stuxnet, and that Iran's nuclear development program was its target."

The point is this: if we want to learn how a superintelligent system may very well act, we should, as Barrat writes, "almost thank malware developers for the full dress rehearsal of disaster that they're leveling at the world. Though it's certainly not their intention, they are teaching us to prepare for advanced AI."


James Jaeger is an award-winning filmmaker with over 25 years' experience producing, writing and directing feature motion pictures and documentaries. For a complete bio see http://www.mecfilms.com/jrjbio.htm Jaeger's first documentary, "FIAT (more...)
 
