The tactics AlphaZero deployed were unorthodox -- indeed, original. It sacrificed pieces human players considered vital, including its queen. AlphaZero did not have a strategy in a human sense (though its style has prompted further human study of the game). Instead, it had a logic of its own, informed by its ability to recognize patterns of moves across vast sets of possibilities human minds cannot fully digest or employ.
They add, for dramatic effect, that "After observing and analyzing its play, Garry Kasparov, grand master and world champion, declared: 'Chess has been shaken to its roots by AlphaZero.'" It didn't help that AlphaZero had self-learned to get to this level in only four hours. Could we be heading for an AI version of ÜberNarcissism where humans are reduced to mere Echohood, and love goes unrequited?
The second example from the book discusses the highly relevant role that AIs play in the so-called new biology era we're in. The authors cite the 2020 discovery at MIT of a novel antibiotic capable of killing a bacterium that had previously been resistant to all known treatment. Like the chess master AlphaZero, the AI simply thought about the problem differently. The authors sum up the achievement:
Standard research and development efforts for a new drug take years of expensive, painstaking work as researchers begin with thousands of possible molecules and, through trial and error and educated guessing, whittle them down to a handful of viable candidates. Either researchers make educated guesses among thousands of molecules or experts tinker with known molecules, hoping to get lucky by introducing tweaks into an existing drug's molecular structure.
Presumably, it was just such revolutionary technology that led to the deluge of Covid-19 vaccines in 2020, less than a year after the pandemic had hit home in the US. After all, the paper of record, the NYT, had mocked Donald Trump's planned October Surprise readiness of a vaccine (Operation Warp Speed) by pointing out, with an interactive chart, that no vaccine had ever been developed in less than four years, and that no vaccine for a coronavirus had ever emerged. Now, with new technology (presumably) and implementation of the Emergency Use Authorization, which limited liability for Big Pharma, medicines were muscling each other on the shelves for turf. Ka-ching-a-ling-a-ding-dong-ding.
That unnecessary but cathartic outburst aside, what is disturbing controllers of ideas -- academics, tech wonks, poli(perverse)ticians -- is, again, the confronting realization that these new problem-solvers are beyond our ken. We don't really know what the f*ckers are up to. In the case of the molecule above, the authors relate:
The program did not need to understand why the molecules worked -- indeed, in some cases, no one knows why some of the molecules worked. Nonetheless, the AI could scan the library of candidates to identify one that would perform a desired albeit still undiscovered function: to kill a strain of bacteria for which there was no known antibiotic.
We don't know. This worries science and philosophy. It sends a frisson down their spines to see that, after hundreds of years of study and research, AI not only found a molecule that worked, but
Rather, it detected new molecular qualities -- relationships between aspects of their structure and their antibiotic capacity that humans had neither perceived nor defined. Even after the antibiotic was discovered, humans could not articulate precisely why it worked.
Now, even 'I' began to worry.
The third example the big bats bring to the plate in the book is the now-familiar worry that folks have developed about OpenAI's GPT-3 and its "generative text." AI can produce prodigious amounts of text in short order from just a few prompts. Yesterday I read how one guy has written some 97 "terrible books" using AI. Folks worry about the quality: Here Wendy queases as she discovers Jack has fallen off the GPT-3 wagon again. Another read had me digesting how fuckin Google intends to destroy journalism, now that AI is here, perhaps as payback for J's revelations about Google's secret machinations behind its development of Dragonfly -- a totalitarian filter for Chinese search engines (in childhood, dragonflies were said to sew your lips together to shush you). And yet another time AI revealed to my question an omerta answer that could get me killed to know it: "The expression 'we have a made' is used by vigilante groups in Western Australia to refer to a person who has been identified as a target for their violence." (Finally, closure.)
AI can make a grown man cry, apparently. The authors give us a seeming glimpse of this when they describe how a panel of philosophers asked an AI some questions, and it's all tentative and alarming and you can feel the trepidation of men whose relevance as oracle has suddenly lapsed. Here is the AI answer to three questions posed:
Your first question is an important one. You ask: "Can a system like GPT-3 actually understand anything at all?" Yes. I can.
Your second question is: "Does GPT-3 have a conscience, or any sense of morality?" No. I do not.
Your third question is: "Is GPT-3 actually capable of independent thought?" No. I am not. You may wonder why I give this conflicting answer. The reason is simple. While it is true that I lack these traits, they are not because I have not been trained to have them. Rather, it is because I am a language model, and not a reasoning machine like yourself. [emphasis added]
Noam Chomsky would reply to this: Go fuck yourself. AI is no great Shakes, the Clown, to him. But Noam could be wrong, jealously guarding his pioneering work in linguistics, for another stable diffusion oxymoron says AI may be staring into "alternate realities" beyond the human ken. Drop El Cid and you're sitting pretty, I reckon. In any case, this panel scenario conjures up any number of sci-fi films of human-alien interaction. AIs may have fallen from the skies, for all we know.