Since 1950, the Turing Test has inspired a great deal of debate concerning the prospect of intelligent machines. In Turing's day, computers existed, but they were far too primitive to exhibit properties of intelligence. Nevertheless, Turing believed that by the end of the twentieth century, computers would advance to the point of exhibiting human-like intelligence.
Intelligence is a phenomenon that is of particular importance to humans. Arguably, intelligence is the quality that distinguishes humans from all other creatures and, whether rightly or wrongly, confers upon humans a unique sense of identity and superiority. René Descartes' famous statement, "Cogito ergo sum," captures the essence of this special quality. Sentience, or the cognitive faculty to experience self-presence, "I think, therefore I am," has equipped humans with a privileged sense of purpose in the cosmos.
Thus, one could argue that it is both fitting and somewhat ironic that scientists are now endeavoring to simulate that most essential of human traits, intelligence, and install it in machines. Certainly, the prospect of imbuing machines with intelligence remains, at this juncture, a problematic fantasy. Computers have become much more sophisticated in the decades since Turing published his seminal article, and they have become ever more deeply embedded in the contemporary cultural landscape. Yet, sophisticated as computers have become, none has yet had a snowball's chance of passing the Turing Test -- and that includes IBM's latest AI initiative, Watson (http://www.watson.ibm.com/index.shtml).
In science fiction, AI is often presented as a fait accompli. In 2001: A Space Odyssey and old Star Trek episodes, Turing-articulate computers simply exist; viewers need not concern themselves with how such extraordinary machines came to be. After all, given that it is the future and that computer technology bounds ahead the way it does, it is difficult to imagine a future absent some version of Turing-articulate computers. Thus, it can be tempting to believe, as Raymond Kurzweil contends, that AI is inexorable.
Among AI enthusiasts, Kurzweil stands out as one of the most optimistic of an enterprising group of problem-solvers. For Kurzweil, AI is not a fantasy; it is a reality. Part of Kurzweil's confidence derives from the fact that he has already created a variety of proto-AI technologies. Yet, more than that, Kurzweil argues that the very existence of human intelligence created the essential preconditions for the eventual usurpation of human intelligence by a superior form of machine intelligence.
For a variety of reasons, I am less convinced that a HAL-like version of AI will inexorably emerge. It is never possible to see the future clearly through the distorted lens of the present; the future will only be intelligible through the paradigms that humans invent to understand the as-yet-unknown future that they will help to create. Nevertheless, I agree with Kurzweil that information technologies will advance by intensifying the cyborg-like fusion between humans and IT. Indeed, people have already begun plugging computers into their ears. Advancing technologies will likely involve adorning ourselves with smaller, more powerful IT devices -- or perhaps even installing them in our bodies, as Professor Wafaa Bilal has done. And, when we arrive at the point of installing computers in our bodies, in what strict sense will human cognition be entirely distinguishable from machine-based intelligence? Intelligence is an experience that is rooted in human biology and sociality and that also happens to be enhanced through information technologies. That has been true ever since humans invented the printing press, and it will continue to be true as information technologies become even more sophisticated.
Getting back to the discussion of AI, Kurzweil has complained that many people fail to acknowledge that various types of pre-intelligent information technology in fact represent working versions of artificial intelligence. Frankly, I believe that the public's unwillingness to characterize extant technologies as manifestations of artificial intelligence is a good thing. Whenever we lower the bar of our expectations, the reality that we create tends to rise only to the level of those expectations. Thus, if we begin referring to existing not-so-smart technologies as artificial intelligence, then progress towards an actual form of AI (i.e., technologies that are capable of passing the Turing Test) will be derailed. In too many cases, half-baked smart technologies -- such as the current generation of voice recognition software -- have created more problems than they have solved: real human intelligence remains infinitely preferable to dumbed-down versions of AI.
That said, we need to know what intelligence is, and respect it in its broadest scope and potential, before we can hope to construct an artificial version that approximates human intelligence in a meaningful way. My feeling is that, if we are determined to create artificial intelligence, then we should do precisely that and nothing less. It is certainly possible to create information technologies, such as Watson, that masquerade as AI, but if we treat such chimeras as AI, then what have we really accomplished? AI will not exist until knowledge-seekers manage to resolve the Turing problematic. Technologies that fall short of the Turing threshold, while interesting and valuable in many ways, simply do not merit the honor of being called AI.
Intelligence is the most valuable resource that humans possess and it is a disservice to cheapen the concept in any way. If researchers are ever going to create a version of AI that is more than a mockery of human intelligence, then they will have to begin by grasping not merely the mechanics of intelligence, but its aesthetics. Intelligence is a sublime experience that is more than the sum of its parts. No machine that fails to grasp that essential fact will ever be able to fool a human interlocutor, nor should anyone presume to describe such a deficient mechanism as intelligent.