IF, NOT, AND
Future pundits warn us that the Artificial Intelligence revolution could get out of our hands, as robots develop minds of their own and evade human control. Meanwhile, I'm sure you've noticed that the user interface on every web site you've loaded in the last few years is buggy, quirky, and makes stupid mistakes, mistakes that are peculiar to software, mistakes no human would ever make. Software has bitten off more than it can chew.
The idea that software is going to outsmart us with general intelligence about the real world is a fantasy. Of course, I won't rule out that this may come to pass in some future world. But it's not an extrapolation of any technology currently in existence.
Two Kinds of AI
Computers are made of billions of switches that can be (virtually) wired together in flexible ways. Each switch can only be on or off, and there are only three things that their connections can do:
IF: If switch A is on, it will turn on switch B.
NOT: If switch A is on, it will turn switch B off, and if switch A is off, it will turn switch B on.
AND: If switches A and B are both on, they will turn on switch C.
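To see how little is going on at the bottom, here is a minimal sketch in Python (my own illustration, not how real hardware is built) that models each switch as a boolean, True for on and False for off, and shows that richer behaviors like OR are just wirings of these three:

```python
# Each "switch" is modeled as a Python boolean: True = on, False = off.
# These three functions are illustrative stand-ins for physical connections.

def IF(a):
    # If switch A is on, switch B turns on (B simply follows A).
    return a

def NOT(a):
    # If switch A is on, B turns off; if A is off, B turns on.
    return not a

def AND(a, b):
    # If switches A and B are both on, switch C turns on.
    return a and b

# Everything else is wiring. For example, OR falls out of NOT and AND
# (De Morgan's law): A or B is the same as NOT(NOT A AND NOT B).
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

# Check the composed OR against all four input combinations.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", OR(a, b))
```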
Computer programming, including AI, is no more than an arrangement of many such switches to form logical structures. There are two approaches to AI:
- (A) The programmer can think through the entire logical process, account for all contingencies, and create a decision tree "by hand" that incorporates all the knowledge and logic necessary to do a job in a wide variety of real-world circumstances.
- (B) The programmer creates a generalized learning tool that watches human behavior in billions of different circumstances. The result is a black box that does the right thing: a decision tree based on criteria that no one has mapped out, so no one fully understands it. (A short code sketch contrasting the two approaches follows this list.)
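Here is a toy contrast of the two approaches. It is a sketch, not a production system: the umbrella task, the thresholds, and the tiny data set are all invented for illustration, and approach (B) assumes the scikit-learn library is available.

```python
# Task (invented for illustration): decide whether to carry an umbrella
# from (cloud_cover, humidity), each a value between 0 and 1.

from sklearn.tree import DecisionTreeClassifier

# Approach (A): the programmer writes the decision tree by hand,
# choosing every criterion and threshold personally.
def umbrella_by_hand(cloud_cover, humidity):
    if cloud_cover > 0.7:
        return True
    if cloud_cover > 0.4 and humidity > 0.8:
        return True
    return False

# Approach (B): the machine learns a tree from examples of human behavior.
# Each row is (cloud_cover, humidity); each label records whether a person
# actually carried an umbrella under those conditions.
observations = [(0.9, 0.6), (0.8, 0.9), (0.5, 0.9),
                (0.3, 0.4), (0.2, 0.9), (0.6, 0.3)]
carried      = [True, True, True, False, False, False]

learned_tree = DecisionTreeClassifier().fit(observations, carried)

# Both produce a decision tree, but only (A)'s criteria were chosen by a
# person; (B)'s were inferred from the data and may not match anyone's
# stated reasoning.
print(umbrella_by_hand(0.85, 0.5))             # True
print(learned_tree.predict([(0.85, 0.5)])[0])  # likely True; depends on the fit
```

The point of the contrast: both boxes answer the same question, but you can read (A)'s logic straight off the source code, while (B)'s logic exists only implicitly in the fitted model.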
Both approaches are being deployed. For example, Google's original self-driving car was programmed manually (A), but the system advanced when learning ability was added on (B). The Tesla Autopilot system is built by computer learning (B) from the ground up.
One of the most advanced systems using approach (A) for general computer intelligence is Wolfram Alpha. (Stephen Wolfram is a certified computer supergenius, and he has an elite team working with him; they have been developing Alpha over the course of two decades.) You can try it out by typing any question into the query box at wolframalpha.com.
Google's search engine is programmed by approach (B). It is constantly learning by monitoring searches by billions of users, following their clicks to learn what they were trying to find. Try typing the same questions into the Google search box.
Computer learning is way ahead of manual programming
This little experiment was designed to show you that approach (B) is miles ahead of approach (A). Face recognition, voice recognition, natural language understanding, medical diagnosis...all are based on computers learning from humans. Anything impressive that computers can do today, they learned by emulating humans.
This is why I don't think computers are poised to surpass human intelligence and creativity and judgment. They will continue to do some of what we do faster and more reliably, but they're not learning how to do things that humans can't do. Current AI technology is completely dependent on learning from humans.