General News | August 25, 2020

Tomgram: Michael Klare, Artificial (Un)intelligence and the U.S. Military

By Tom Engelhardt

This article originally appeared at TomDispatch.

Just when you thought it couldn't get any worse, the U.S. military, as TomDispatch regular Michael Klare informs us today, has had a brilliant idea -- robot generals (!) -- into which to sink yet more billions of our tax dollars, even as disinvestment in our infrastructure, schools, health care, and the like only continues in the age of Trump and in the midst of a grim pandemic. Of course, given American generalship in the "forever wars" of the twenty-first century, who doesn't feel that almost anyone or anything could have done better? Still, to turn the potential destruction of the planet itself (via nuclear arms) over to computers? I mean, what could possibly go wrong?

Well, actually, let Klare fill you in on just what could prove to be a Terminator moment for humanity. Tom

Robot Generals
Will They Make Better Decisions Than Humans -- Or Worse?
By Michael T. Klare

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as "autonomous weapons systems," include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level -- by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff -- including battle planning, intelligence-gathering, logistics, communications, and decision-making -- is, according to the Pentagon's latest plans, to be turned over to complex arrangements of sensors, computers, and software. All these will then be integrated into a "system of systems," now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course, an entirely new one. It has, in truth, been a long time coming. During the Cold War, following the introduction of intercontinental ballistic missiles (ICBMs) with extremely short flight times, both military strategists and science-fiction writers began to imagine mechanical systems that would control such nuclear weaponry in the event of human incapacity.

In Stanley Kubrick's satiric 1964 movie Dr. Strangelove, for example, the fictional Soviet premier Dimitri Kissov reveals that the Soviet Union has installed a "doomsday machine" capable of obliterating all human life that would detonate automatically should the country come under attack by American nuclear forces. Efforts by crazed anti-Soviet U.S. Air Force officers to provoke a war with Moscow then succeed in triggering that machine and so bring about human annihilation. In reality, fearing that they might experience a surprise attack of just this sort, the Soviets later did install a semi-automatic retaliatory system they dubbed "Perimeter," designed to launch Soviet ICBMs in the event that sensors detected nuclear explosions and all communications from Moscow had been silenced. Some analysts believe that an upgraded version of Perimeter is still in operation, leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film WarGames, starring Matthew Broderick as a teenage hacker, portrayed a supercomputer called the War Operations Plan Response, or WOPR (pronounced "whopper"), installed at North American Aerospace Defense Command (NORAD) headquarters in Colorado. When the Broderick character hacks into it and starts playing what he believes is a game called "World War III," the computer concludes an actual Soviet attack is underway and launches a nuclear retaliatory response. Although fictitious, the movie accurately depicts many aspects of the U.S. nuclear command-control-and-communications (NC3) system, which was then and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today's standards, being capable solely of determining that a nuclear attack was under way and ordering a catastrophic response. Now, as a result of vast improvements in artificial intelligence (AI) and machine learning, machines can collect and assess massive amounts of sensor data, swiftly detect key trends and patterns, and potentially issue orders to combat units as to where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels is becoming essential, U.S. strategists argue, because an exponential growth in sensor information combined with the increasing speed of warfare is making it nearly impossible for humans to keep track of crucial battlefield developments. If future scenarios prove accurate, battles that once unfolded over days or weeks could transpire in the space of hours, or even minutes, while battlefield information will be pouring in as multitudinous data points, overwhelming staff officers. Only advanced computers, it is claimed, could process so much information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form of combat, but especially to the most terrifying of them all, nuclear war. When ICBMs were the principal means of such combat, decision-makers had up to 30 minutes between the time a missile was launched and the moment of detonation in which to determine whether a potential attack was real or merely a false satellite reading (as did sometimes occur during the Cold War). Now, that may not sound like much time, but with the recent introduction of hypersonic missiles, such assessment times could shrink to as little as five minutes. Under such circumstances, it's a lot to expect even the most alert decision-makers to reach an informed judgment on the nature of a potential attack. Hence the appeal (to some) of automated decision-making systems.

"Attack-time compression has placed America's senior leadership in a situation where the existing NC3 system may not act rapidly enough," military analysts Adam Lowther and Curtis McGiffin argued at War on the Rocks, a security-oriented website. "Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position."

This notion, that an artificial intelligence-powered device -- in essence, a more intelligent version of the doomsday machine or the WOPR -- should be empowered to assess enemy behavior and then, on the basis of "predetermined response options," decide humanity's fate, has naturally produced some unease in the community of military analysts (as it should for the rest of us as well). Nevertheless, American strategists continue to argue that battlefield assessment and decision-making -- for both conventional and nuclear warfare -- should increasingly be delegated to machines.

"AI-powered intelligence systems may provide the ability to integrate and sort through large troves of data from different sources and geographic locations to identify patterns and highlight useful information," the Congressional Research Service noted in a November 2019 summary of Pentagon thinking. "As the complexity of AI systems matures," it added, "AI algorithms may also be capable of providing commanders with a menu of viable courses of action based on real-time analysis of the battlespace, in turn enabling faster adaptation to complex events."


Tom Engelhardt, who runs the Nation Institute's TomDispatch ("a regular antidote to the mainstream media"), is the co-founder of the American Empire Project and, most recently, the author of Mission Unaccomplished: Tomdispatch Interviews with American Iconoclasts and Dissenters.

