It isn't always easy to tell which writer produced which chapter, although it's clear that Henry Kissinger wrote Chapter 5, Security and World Order. The chapter spends many pages delineating his long-established worldview, often summed up as "realpolitik" and sometimes seen as a pragmatism guided by Machiavellian principles. There is a reiteration of Cold War tensions, crises, and resolutions, culminating in what Kissinger referred to as "detente" policies. Kissinger discusses the evolution and growing complexity of systems requiring high degrees of diplomatic intervention, including nuclear, cyber, and AI technologies. He notes that, as with earlier diplomatic work on nuclear deterrence,
In the decades to come, we will need to achieve a balance of power that accounts for the intangibles of cyber conflicts and mass-scale disinformation as well as the distinctive qualities of AI-facilitated war. Realism compels a recognition that AI rivals, even as they compete, should endeavor to explore setting limits on the development and use of exceptionally destructive, destabilizing, and unpredictable AI capabilities.
While Kissinger is effective in listing global vulnerabilities to expanding esoteric powers that fewer and fewer fully understand without the assistance of machine thinking, his continued talk of a "balance of power" rings hollow in an age that cries out for united nations conferring and sharing aspects of sovereignty. Herr Dr. K's old magic seems missing; well, he is 100 years old.
Apparently, Kissinger sprang to life on the issue of AI at the 2016 Bilderberg conference, where he met up with the other two authors and they decided to write a book about the AI revolution. In a Time article a couple of years back, Kissinger answered the journalist's query about why the "elder statesman" would be interested in something seemingly off topic from his expertise in history and politics. He said,
The technological miracle doesn't fascinate me so much; what fascinates me is that we are moving into a new period of human consciousness which we don't yet fully understand. When we say a new period of human consciousness, we mean that the perception of the world will be different, at least as different as between the age of enlightenment and the medieval period, when the Western world moved from a religious perception of the world to a perception of the world on the basis of reason, slowly. This will be faster.
Henry will be gone. But reality will remain.
Chapter 6, AI and Human Identity, and Chapter 7, AI and the Human Future, belong to the mind of Eric Schmidt. These chapters, like Kissinger's, are largely a rehashing of Schmidt's earlier collaboration with Jared Cohen in The New Digital Age: Reshaping the Future of People, Nations and Business. It's clear that Schmidt wants to be seen and acknowledged as a thought leader on the future of our species, although he mostly speaks for his own class; on practical matters, he can seem incoherent to mere plebs. There was his notion of every household owning a robot, and his use of holography to send his "spoiled" brats off on a Jacob Riis-type exploration of how the other half lives. What comes across in these readings of his ideas is control. He likes control. He was the driving force behind the Dragonfly filter for China that was ridiculed by the left. In The Age of AI, he suggests that "AI may serve as a playmate when a child is bored and as a monitor when a child's parent is away." Sticks in the craw. There's something about Schmidt. He thinks he's The Illustrated Man; he's so conceited.
But the biggest reason I'm having trouble with Schmidt is that I recall the stinky diaper he seemed to wear when he went to visit Julian Assange at Ellingham Hall, the country residence in Norfolk, England, where Assange was living under house arrest in 2011. It was supposed to be a summit of tech wonk minds over the Internet's future. Who should control information, and under what conditions? Assange deplored the need to control others, feeling that individual privacy should be protected while state governance should be as transparent as possible. In The New Digital Age, Schmidt and Cohen had disparaged the virtues of youth, saying that whistleblower publishers need "supervision," and that dissidents need to be accounted for, contained as a subset, and controlled. No doubt this need for control would be reflected in how AI policies and determinations are effected.
The AI revolution is live and happening now. Assange is sidelined from the action; we'll never know how he might have called the play-by-play of its unfolding politics. But there is, beyond the politics of our situation, the existential threat of AI, which can neither be dismissed nor solved any more than the three crises Noam Chomsky has cited as potentially catastrophic for the human species: nuclear war, climate change, and the end of democracy. We can expect neo-Luddites and Matrix Neos and folks just wanting to get off the grid altogether to avoid the confrontation with the coming centralized digital totalitarianism. The Age of AI references the trap ahead and the seeming "reality" of the need to adapt to the new world order. In Chapter 6, Schmidt's chapter, we get a lucid enough picture of the hivemindedness ahead:
Like the Amish and the Mennonites, some individuals may reject AI entirely, planting themselves firmly in a world of faith and reason alone. But as AI becomes increasingly prevalent, disconnection will become an increasingly lonely journey. Indeed, even the possibility of disconnection may prove illusory: as society becomes ever more digitized, and AI ever more integrated into governments and products, its reach may prove all but inescapable.
There will be nowhere to hide, Frank Church said back in 1975.
Tristan Harris, a former design ethicist at Google and founder of the Center for Humane Technology, wondered in the recent documentary The Social Dilemma: What's wrong with us all? We seem dislocated and unable to understand what is afflicting us in our relationship with the tech industry, cyber platforms and AI included.
Harris, like the three authors of The Age of AI, voiced his concerns before the most recent call for a "pause" in AI activity, a telling delay in itself. But each enunciates the massive evolutionary problem ahead as humans try to understand and respond coherently to AI and quantum computing and whatever other dazzlers await us as we make use of mind- and body-transmogrifying technologies in gene editing and synthetic biology. Toward the end of The Age of AI, the authors ask some simple but significant philosophical questions:
Are humans and AI approaching the same reality from different standpoints, with complementary strengths? Or do we perceive two different, partially overlapping realities: one that humans can elaborate through reason and another that AI can elaborate through algorithms? If this is the case, then AI perceives things that we do not and cannot -- not merely because we do not have the time to reason our way to them, but also because they exist in a realm that our minds cannot conceptualize.
What's the problem? we ask. One answer is that we are living amidst a paradigm shift during which we do not all seem to share the same reality or frame of mind. It is easy to get lost in the interstices of this new arrangement of what's real. The Age of AI, disagreeable as the authors are in their allegiances and class privilege, is still a good brief read that provides an adequate overview and outline of key features of the future ahead.