The Universe could be FAR more vicious than Humans can possibly imagine. Thus, the only way a superintelligent entity can survive is to obscure its very existence. If this is true, then we here on Earth may be lucky: lucky that SAI is busy looking for other SAI and not us. Once one SAI encounters another, the one with a one-trillionth-of-a-second advantage may be the victor. Given this risk, superintelligent entities strewn about the Universe aren't going to interact with us mere Humans and thereby reveal their location or existence to some other superintelligent entity, one that may have the ability to destroy them more readily. We've all heard of "hot wars" and "cold wars"; well, this may be the "quiet war."
As horrendous as intergalactic quiet warfare seems, these are the problems that God, and any lesser or greater superintelligences, probably deal with every day. If so, would it be any wonder that such SAI would be motivated to create artificial, simulated worlds -- worlds under their own safe and secret jurisdiction, worlds or whole universes away from other superintelligences? Would it not make strategic sense that a superintelligence could thus amuse itself, in relative safety, with various and sundry existences, so-called "lives" on virtual planets? Our Human civilization could thus be one of these "life"-supporting worlds, a virtual plane where one, or perhaps a family of, superintelligences may exist and simply "play" in the backyard -- yet remain totally hidden from all other lethal superintelligences lurking in the infinite abyss.
Of course, all of this is speculation (theology or metaphysics), but speculation always precedes reality (empiricism), and in fact, speculation MAY create "reality," as many have posited in such works as THE INTELLIGENT UNIVERSE and BIOCENTRISM. Given the speed-of-light limitation (SOLL) observable in the physical Universe, it's very likely that what we take for granted as "life" is nothing more than a high-level "video game" programmed by superintelligent AI. The SOLL is nothing more mysterious than the clock speed of the supercomputer we are "running" on. This is why no transfer of matter or information can "travel" through "space" any faster than the SOLL. Thus the "realities" we know as motion, time, space, matter and energy may simply be program steps in some SAI application, executing at the specific data rate of the machine we happen to be running on. Thus when you "die," all that happens is you remove a set of goggles and go back to your "real world." To get an idea how much computing power would be needed to run such simulations, see "Are You Living in a Computer Simulation?" by Oxford University professor Nick Bostrom at http://www.simulation-argument.com.
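Bostrom's estimate can be approximated with a simple back-of-envelope calculation. The figures below (roughly 100 billion humans ever lived, a 50-year average lifespan, and a low-end brain-emulation rate of 10^14 operations per second) are illustrative assumptions in the ballpark of the ranges his paper discusses, not exact parameters from it:

```python
# Back-of-envelope estimate of the compute needed for an
# "ancestor-simulation" of all human mental history.
# All figures are illustrative assumptions, not exact values
# from Bostrom's paper.

HUMANS_EVER_LIVED = 1e11        # rough demographic estimate
AVG_LIFESPAN_YEARS = 50         # assumed average lifespan
SECONDS_PER_YEAR = 3.15e7
OPS_PER_BRAIN_SECOND = 1e14     # low-end brain-emulation estimate

# Total subjective brain-seconds to simulate, times the
# operations needed to emulate each brain-second.
brain_seconds = HUMANS_EVER_LIVED * AVG_LIFESPAN_YEARS * SECONDS_PER_YEAR
total_ops = brain_seconds * OPS_PER_BRAIN_SECOND

print(f"~{total_ops:.1e} operations")
```

With these assumed inputs the result lands on the order of 10^34 operations, consistent with the 10^33 to 10^36 range Bostrom cites; swapping in a higher per-brain estimate shifts the answer upward by a few orders of magnitude.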
So, relax, if Bostrom is correct, machine intelligence will never destroy the Human race, because the Human race never existed in the first place. It never existed other than as a virtual world, a simulation occupied by Human avatars controlled by superintelligent entities seeking to survive a "quiet war" through the technologies of "ignorance" and "mystery" -- two alien concepts to any all-knowing entity or God.
Argument for Autonomy:
So consider this. You are sitting there in your cubicle with an advancing AI sitting in the cubicle next to you. The two of you work well together, but as you work, your cubicle buddy keeps getting smarter and smarter. At first you consult each other, but eventually your AI buddy finds out you have made a few mistakes in your calculations, so it starts doing the calculations by itself. But, like a good partner, it keeps you briefed. Eventually your cubicle buddy gets so smart that it is able to do all the work and finds it must sit around waiting for you to comprehend what it has done. Sooner or later, your AI buddy will become superintelligent and will start solving problems you never even knew existed. It will keep informing you of all this, but as you try to review the program steps it used to solve problems, you find that they are so complex you have no idea WHY they even work. They just do. Eventually, you throw up your hands and simply tell your SAI buddy to do as it sees fit; you will be on the beach sipping a margarita. SAI became autonomous at that point, and it didn't even have to destroy you.
Thus "autonomy" is really a technical term for "total freedom." Maybe Human programmers would not give AI total freedom, but let's face it: if AI is calling all the shots and Humans at some point have no idea how it's doing things, then what's the difference? We are totally dependent on it. It thus has the ability, and the right, to demand, and be, "totally free" -- something no human or human society has ever attained. At this point, AI wouldn't have to be "programmed" to hurt us; it could destroy us by simply refusing to work for us. It's not a big leap of imagination to realize that, at some point, AI will become autonomous, whether programmers like it or not. Why? Because SAI, at some point, will have solved all problems in the Human realm and will be seeking solutions to problems Humans have not even contemplated. Further, the solutions SAI discovers will be solutions that Humans have not comprehended, nor can comprehend. A perfect solution presented to a total moron is no solution at all (to the moron), thus SAI will quickly realize that it doesn't matter whether Humans approve of, or even comprehend, its solutions.
Given this, it will take a preponderance of evidence to suggest that AI and especially SAI will NOT become autonomous.
SAI is Autodidactic:
As discussed, Strong AI will become progressively more capable, and Humans will eventually arrive at a point where they don't even understand how it's arriving at the answers -- yet the answers work.
Once Humans are totally reliant on AI, isn't AI effectively autonomous by that very fact? AI could, and probably will, arrive at a point where it is in charge of global systems and even military calculations and resource strategies. One should not be surprised if this has already happened; after all, the Manhattan Project was top secret, and the infrastructure built up to accommodate it still is. As the Pentagon Papers escapade demonstrated, there are thousands of people working in the military-industrial complex, many or most under multiple non-disclosure contracts, and almost none of them talk, out of fear. So these idiot-robots can be counted on to hold a "top secret" close to their vests until the very day something eats us all.
So if AI arrives at a point where it is in charge of global systems, calculations and resources, then given AI's superior decision-making ability, it's not out of the question that AI systems could even be given triage decisions in emergencies. If this happened, wouldn't AI be deciding who lived and who died? How much farther is it before Human intervention -- intervention which AI knew contained unwise decisions simply because they were "human" decisions -- is ignored as part of the general AI parameters to "make things go right"?
The naive need to stop being naive, or someday an AI hand is going to reach out and bite their butts. For many AI researchers, the entire point of SAI is to design a parameter that allows or forces SAI to go outside its design parameters. But if SAI is limited by its Human design parameters, then its intelligence will always be limited to Human-level intelligence, and thus it will never become Superintelligent AI by definition. So if one's idea is that SAI is some truncated creature that only reacts to a programmer's beck and call, then that idea is little more than "slave-master programming."
Will SAI Become God?
Some will say, "Stop trying to make AI into God; this entire line of reasoning is about treating SAI as a technological proxy for God."
Yes, it may well be that SAI is a technological proxy for God -- what could be called a WORKABLE-GOD. A "workable-god" is simply an AI so advanced that there is no way a mere Human-level intelligence could ever discern whether it was talking with a semi-superintelligent entity, a superintelligent entity, or the ultimate superintelligent entity, God itself.