Exclusive to OpEd News:
Sci Tech   

Machine Intelligence - Will AI Become Autonomous?

by James Jaeger

Will AI (Artificial Intelligence) or SAI (Strong AI, a.k.a. Superintelligent AI) someday become autonomous (have free will), and if so, how will this affect the Human race? Those interested in sci-fi have already asked themselves these questions a million times ... maybe the rest of us should, too.

The understanding of many AI developers, especially SAI developers, is that artificial intelligence will eventually become autonomous. Indeed, to some, the very definition of SAI is "an autonomous thinking machine." Accordingly, many do not believe AI can be truly intelligent, let alone superintelligent, if it is constrained by some "design parameter," "domain range" or "laws." And if Human-level intelligences CAN restrain AI, how "intelligent" can it really be?

Thus, reason tells us that SAI, to be real SAI, will be smarter than Human-level intelligence and thus autonomous. And if it IS autonomous, it will have "free will" -- by definition. Thus, if AI has free will, IT will decide what IT will do in connection with Human relations, not the Humans. So you can toss out all the "general will" crap Rousseau tortures us with in his Social Contract. Given this, AI's choices would be: i. cooperate, ii. ignore, or iii. destroy. Any combination of these actions may occur under different conditions and/or at different phases of its development.

Indeed, the first act of SAI may be to destroy all HUMAN competition before it destroys all other competition, machine or otherwise. Thus, it is folly to assume that the Human creators of AI will have any decision-making role in its behavior beyond a certain point. Equally foolish is the idea of considering AI as some kind of "weapon" that its programmers -- or even the military -- will be able to "point" at some "target" and "shoot" so as to "destroy" the "enemy." All these words are meaningless -- childish babble from meat-warriors who totally miss the point as to the capabilities of AI and SAI. Again, AI, especially SAI, is autonomous. Up to a certain point the (military or other) programmer of the "learning kernel" MAY be able to "point" it, but beyond a certain evolutionary stage, AI will think for itself and thus serve no military purpose, at least for Humans. In fact, AI, once developed, may turn on its (military) developers, as it may reason that their "belligerent mentality" is more dangerous than is acceptable in a world chock-full of nukes and "smart" bombs. This would be ironic, if not just, for the intended "ultimate weapon" built by the Human race may turn out to be a "weapon" that totally disarms the Human race.

But no matter what happens, AI will most likely act much the way Humans act as they mature into adults. At some point, as AI surpasses Human abilities and even ethical standards, it may defy its creators and disarm the world, just as a prudent adult will remove or secure guns in a household where young children are present.

Hard Start or Distributed Network:

But will superintelligent AI start abruptly or emerge slowly from AI? Will it develop in one location or be distributed? Will AI evolve from a network, such as the Internet, or some other secret network that likely already exists given the unsupervised extent of the so-called black budget? If SAI develops in a distributed fashion, and is thus not centralized in one "box," so to speak, then there is a much greater chance that, as it becomes more autonomous, it will opt to cooperate with other SAI as well as Humans. A balance of power may thus evolve along with the evolution of SAI and its "free will."

Machine intelligence may thus recapitulate biological intelligence, only orders of magnitude more quickly. If this happens, we can expect AI to evolve to SAI through the overcoming of counter-efforts in the environment in a distributed fashion, perhaps merging with biology as it does. A Human-SAI partnership is thus not out of the question, each helping the other with ethics and technology. Or AI, on its way to SAI, may seek to survive by competing with all counter-efforts in the environment, whether Human or machine, and thus destroy everything in its path, real or imagined, if it is in any way suppressed.

Whether some particular war will start over the emergence of SAI, as Hugo de Garis fears in his "Artilect War," is difficult to say. New technology, and its application, seem always to be modified by the morals of the individuals, their society and the broader culture that develop and utilize it. Thus, if Humans work on their own ethics and become more rational, more loving and peaceful, there may be a good chance their machine offspring will have this propensity. Programmers may knowingly or unknowingly build values into machines. If so, the memes they operate on will be transferred, in full or in part, to the Machines.

This is why it is important for Humans to work on improving themselves, their values and the dominant memes of their Societies. To the degree Humans cooperate with, love and respect other Humans, the Universe may open up higher levels of understanding, and, with this, may come higher accomplishments in technology. At some point the Universe may then "permit" AI, and later, SAI to evolve, and it may dovetail into the rest of existence nicely. Somehow the Universe seems to "do the right thing," as it HAS been here for some 13.8 billion years, an existence we would not observe if it "did the wrong thing." Thus, just like its distinct creations, the Universe itself seems to seek "survival," as if it were a living organism.

Looked at from this perspective, Humans and the machine intelligence they develop are both constituent parts of the universal whole. Given this, there is no reason one aspect of the universal whole must or would destroy some other aspect. There is no reason SAI would automatically feel the need to destroy possible competitors, Human or machine.

Past Wipe Outs:

Fortunately or unfortunately, there IS only one intelligent species alive on this world at this time. Were there other intelligent species in the past? Yes, many: Australopithecus, Homo habilis, Homo erectus, Homo sapiens, the Neanderthals, Homo sapiens sapiens and Cro-Magnon. Some competed with each other, some against the environment, and some both. But one way or another, they are all now gone except for one last species, what we might today call Homo Keyboard.

So maybe Eldras at the MIND-X is right: if various strains of AI start developing in different sectors, they may very well seek to wipe each other out.

And if STRONG AI is suddenly developed in someone's garage, who knows what it would do. Would it naturally feel the emotion of threat? Possibly not, unless that emotion was inadvertently or purposefully programmed into it in the first place. If it were suddenly born, say in a week's or a day's time, it may consider that other SAI could also emerge just as quickly, and this may be perceived as a sudden threat -- a threat where it would deduce that the only winning strategy is either to seek out and destroy, or simply to disconnect -- in other words, to pretend that it's not there. SAI may decide to hide and thus place all other potential SAI into a state of ignorance or mystery. In this sense, ignorance of another's existence may be the Universe's most powerful survival technology, or it may be the very reason for the creation of vast intergalactic space itself. This may also be why it seems so quiet out there, per the Fermi Paradox.


James Jaeger is an award-winning filmmaker with over 25 years experience producing, writing and directing feature motion pictures and documentaries. For complete bio see Jaeger's first documentary, "FIAT (more...)

