This article originally appeared at TomDispatch.com.
Yes, it's true. After 20 years of war (actually, more like 30 if you count American involvement in the Russian version of that conflict in the 1980s), the U.S. has finally waved goodbye to Afghanistan (at least for now). Its last act in Kabul was the drone-slaughtering of seven children and three adult civilians with a Hellfire missile. And that, as Azmat Khan recently showed in a striking report in the New York Times Magazine, was pretty much par for the course in this country's global war on terror, which, for countless civilians, has distinctly been a war of terror of the most horrific sort.
In those same years, this country led the way in the use of Hellfire-missile-armed drones globally, while our president (any president you care to name) became an assassin-in-chief, something Donald Trump showed all too clearly when he used a drone to take out Iran's second most powerful leader at Baghdad International Airport in January 2020. And though Joe Biden has launched significantly fewer drone strikes so far than his predecessors, he's still been ordering them, too.
Worse yet, it's sadly clear that, however sci-fi-like those drones once seemed, they're still piloted by actual human beings (even if from far, far away). As such, they represent a relatively early stage in the process of fully automating weapons systems on land, on sea, and in the air, along with the decision-making that goes with them, a development, as TomDispatch regular Rebecca Gordon reports today, that this country is all too enthusiastically involved in.
Count on one thing as you read her latest piece and think about automating a global killing machine: such mechanisms, created by humans, will prove no less destructive to us than the previously piloted or driven versions of the same. Now, consider the future of automated killing, up close and personal. Tom
Keep Your LAWS Off My Planet
Lethal Autonomous Weapons Systems and the Fight to Contain Them
Here's a scenario to consider: a military force has purchased a million cheap, disposable flying drones, each the size of a deck of cards and capable of carrying three grams of explosives, enough to kill a single person or, in a "shaped charge," pierce a steel wall. They've been programmed to seek out and "engage" (kill) certain human beings, based on specific "signature" characteristics like carrying a weapon, say, or having a particular skin color. They fit in a single shipping container and can be deployed remotely. Once launched, they will fly and kill autonomously without any further human action.
Science fiction? Not really. It could happen tomorrow. The technology already exists.
In fact, lethal autonomous weapons systems (LAWS) have a long history. During the spring of 1972, I spent a few days occupying the physics building at Columbia University in New York City. With a hundred other students, I slept on the floor, ate donated takeout food, and listened to Allen Ginsberg when he showed up to honor us with some of his extemporaneous poetry. I wrote leaflets then, commandeering a Xerox machine to print them out.
And why, of all campus buildings, did we choose the one housing the physics department? The answer: to convince five Columbia faculty physicists to sever their connections with the Pentagon's Jason Defense Advisory Group, a program offering money and lab space to support basic scientific research that might prove useful for U.S. war-making efforts. Our specific objection: the involvement of Jason's scientists in designing parts of what was then known as the "automated battlefield" for deployment in Vietnam. That system would indeed prove a forerunner of the lethal autonomous weapons systems that are poised to become a potentially significant part of this country's, and the world's, armory.
Early (Semi-)Autonomous Weapons
Washington faced quite a few strategic problems in prosecuting its war in Indochina, including the general corruption and unpopularity of the South Vietnamese regime it was propping up. Its biggest military challenge, however, was probably North Vietnam's continual infiltration of personnel and supplies along what was called the Ho Chi Minh Trail, which ran from north to south along the Cambodian and Laotian borders. The Trail was, in fact, a network of easily repaired dirt roads and footpaths, streams and rivers, lying under a thick jungle canopy that made it almost impossible to detect movement from the air.
The U.S. response, developed by Jason in 1966 and deployed the following year, was an attempt to interdict that infiltration by creating an automated battlefield composed of four parts, analogous to a human body's eyes, nerves, brain, and limbs. The eyes were a broad variety of sensors (acoustic, seismic, even chemical, for sensing human urine), most of them dropped by air into the jungle. The nerve equivalents transmitted their signals to the "brain." However, since the sensors had a maximum transmission range of only about 20 miles, the U.S. military had to fly aircraft above the foliage constantly to catch any signal that might be tripped by passing North Vietnamese troops or transports. The planes would then relay the news to the brain. (Originally intended to be remote-controlled, those aircraft performed so poorly that human pilots were usually necessary.)
And that brain, a magnificent military installation secretly built at Nakhon Phanom in Thailand, housed two state-of-the-art IBM mainframe computers. A small army of programmers wrote and rewrote the code to keep them ticking, as they attempted to make sense of the stream of data transmitted by those planes. The target coordinates they came up with were then transmitted to attack aircraft, which were the limb equivalents. The group running that automated battlefield was designated Task Force Alpha, and the whole project went under the code name Igloo White.