Barrat points out that the Symantec Corporation started out as an AI company and is now the biggest player in the Internet immune-system business. Symantec discovers about 280 million new pieces of malware (viruses, worms, spyware, rootkits, Trojans) every year -- most of it created by software that writes software.
But wait, isn't this exactly what the "busy child" does -- write its own software? Is it much of a leap, then, to suspect that AI could act like a virus, at least before it becomes REALLY dangerous -- dangerous in ways humans are not even capable of imagining?
Barrat makes it clear that the power grid is the most critical system to protect because it is "tightly coupled" with all other systems, including U.S. defense, which gets 99% of its electricity from civilian sources and carries 90% of its communications over private networks.
The Stuxnet virus was designed to attack and destroy exactly this kind of network and infrastructure -- specifically SCADA systems, short for Supervisory Control and Data Acquisition. In other words, Stuxnet was designed to destroy HARDWARE, not just SOFTWARE and DATA. Specifically, it was designed to destroy industrial machines connected to the Siemens S7-300 programmable logic controller, a component of such SCADA systems. These controllers do things like run gas centrifuges for nuclear-enrichment facilities, such as the centrifuges running at Natanz, Iran. But now that Stuxnet has been used on the Iranians, copies of it have inadvertently escaped all over the Internet. Any capable hacker -- from Anonymous to pissed-off high school kids -- can now obtain this US/Israel-manufactured virus and adapt its code for purposes of their own.
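To see why an attack on a SCADA system destroys hardware and not just data, here is a minimal, purely illustrative sketch. All names and numbers are hypothetical -- this is not the Siemens API and not Stuxnet's actual code -- but it shows the principle: a compromised controller drives a machine past its mechanical limits while feeding the operator's screen falsified "normal" readings.

```python
# Hypothetical sketch: why compromising a PLC's supervisory layer can
# destroy HARDWARE, not just software and data. All names and numbers
# are invented for illustration.

SAFE_RPM = 63_000    # assumed safe operating speed for a gas centrifuge
BURST_RPM = 80_000   # assumed speed at which the rotor fails mechanically

class CentrifugeController:
    """Stand-in for a PLC speed-control loop (not a real Siemens interface)."""
    def __init__(self):
        self.actual_rpm = SAFE_RPM
        self.compromised = False

    def set_speed(self, rpm: int) -> None:
        # Directly commands the physical machine.
        self.actual_rpm = rpm

    def telemetry(self) -> int:
        # A compromised controller replays the expected value, so the
        # operator's SCADA screen shows nothing wrong.
        return SAFE_RPM if self.compromised else self.actual_rpm

plc = CentrifugeController()
plc.compromised = True
plc.set_speed(BURST_RPM)   # the hardware is now being driven to failure
print(plc.telemetry())     # prints 63000 -- operators see a healthy machine
```

The design trick is the falsified telemetry: by the time anyone notices anything, the machine has already torn itself apart.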
So here's another example of governments doing more harm than good. These unintended consequences should serve as a warning of what can and will happen with AI systems if they are developed by defense agencies that have no intention of making them "friendly."
Given the kinds of catastrophes we are exposing the human race to with the development of SAI -- especially militarized AI -- it should be obvious that the best defense against "normal accidents" would be to decentralize as much of civilization's infrastructure as possible.
This means that, first and foremost, the electrical power grid should be decentralized, starting with the current "smart grid." This harebrained idea should be terminated because such an arrangement means that vast portions of the grid are accessible over the Internet. The main idea of the "smart grid" -- and things like "smart meters" -- is to make it easier for (lazy) power companies to bill, and spy on, their customers. Given that this also makes it easier for NSA-Israel terrorist viruses like Stuxnet to rampage through civilization, "smart" grids are pretty dumb.
Not only did kindred meathead spirits at the NSA and in Israel (probably the Mossad) develop Stuxnet, they developed two other pieces of malware called Duqu and Flame. These are reconnaissance viruses that can record keystrokes, steal data, and remotely operate the cameras and microphones on YOUR personal PC, as well as on any other networked computer. Thanks to Edward Snowden's whistle-blowing, we now know that tech like this, if not this exact tech, is being used to spy on U.S. citizens -- citizens who are unwittingly financing such malware through their taxes.
It is obvious that policymakers feel no compunction about spending public money on these nefarious and dangerous applications, and they do so without informing citizens or holding any national discussion. The reckless development and deployment of viruses like Stuxnet should be terminated. Whether or not that comes to pass, the basic question Humanity has to deal with is this:
TO BUILD AI OR NOT TO BUILD IT
A variant of Shakespeare's famous TO BE OR NOT TO BE, this question casts the decision to build AI as the human race's version of an individual contemplating suicide, and asks whether doing so would serve the greater good.
Does the Universe require a species to deliver its nexus, no matter its own fate? Just as a parent is supposed to be totally willing to sacrifice for its child, is the human race supposed to be willing to sacrifice itself to give birth to SAI? Even if it means the extinction of Humanity, must Humanity cheerfully accept this fate for the greater good -- the greater good of the Universe? What do you think, Democrats?
On the other hand, SAI may not destroy Humanity; it could usher in a golden era like no other. It might even partner with Humanity, as Ray Kurzweil suggests (my feeling as well). In that case it could reward us with our long-sought utopian civilization. This may be entirely possible with safely engineered technology and a proper balance of ethics.
But the answer to the "basic question" posed above could turn out to be that the potential risks outweigh the possible rewards. If it looks like the military mentality will definitely develop or commandeer AI -- and then weaponize it -- there is a good possibility that it will get out of control and destroy all of human civilization, possibly more. If this is the case beyond a reasonable doubt, AI research may have to be terminated completely -- just as we are attempting to ban nuclear tests, biological warfare, and chemical warfare.
If the "profit movers" or the "thugs with guns" in the state refuse to cooperate, AI may be opposed by mass revolts, the overthrow of numerous governments, and/or the burning of corporate assets to the ground. This could happen no matter what the human cost, even if it meant billions of people fought and died in one of Hugo De Garis' "gigadeath wars". After all -- billions might reason -- nothing less than all of (human) civilization will hang in the balance with the decision as to whether to build Strong AI or Superintelligent AI.
So folks, Mr. Barrat's book gives us some serious thinking to do. In addition to such continuous, serious study, I would proffer some obvious first steps. The first would be for the world to kick its addiction to CENTRALITY and place more emphasis on DISTRIBUTED REDUNDANCY.
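To make that point concrete, here is a back-of-the-envelope sketch -- with made-up numbers, purely illustrative -- of why DISTRIBUTED REDUNDANCY beats CENTRALITY: a single centralized grid is one point of failure, while several independent regional grids must all be compromised at once for a total blackout.

```python
# Back-of-the-envelope sketch (all numbers invented): one centralized
# grid is a single point of failure, while k independent regional grids
# must ALL fail at the same time for a total blackout.

p = 0.10  # assumed chance a Stuxnet-style attack takes down one grid

for k in (1, 5, 10):             # number of independent regional grids
    total_blackout = p ** k      # all k must fail simultaneously
    print(f"{k:2d} grid(s): P(total blackout) = {total_blackout:.10f}")

# Output:
#  1 grid(s): P(total blackout) = 0.1000000000
#  5 grid(s): P(total blackout) = 0.0000100000
# 10 grid(s): P(total blackout) = 0.0000000001
```

The arithmetic assumes the regional grids fail independently -- the whole point of decentralization is to make that assumption true.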