I had a similar experience to Thompson's. I had an argument going with a loudmouth in another room and accidentally blurted out, "Go f*ck yourself." My mobile phone immediately lit up and spoke out to me, "Please don't talk to me like that." (Struth!) And I was shocked. Her voice was pleasant enough. And I had it coming. But seriously, with a fusion database containing my personal records up there in the clouds somewhere, I dread the day when interactive devices just suddenly start condescending to things we say. Is such a day coming for all of us?
Leib:
Well, I think we will all probably have a hand in welcoming it. We'll turn these services on, or leave them on when they are introduced as defaults, and we'll make use of them. We'll get used to using them. Eventually, we won't be able to imagine life without them. But if you're worried about something like HAL from 2001: A Space Odyssey, that it's going to shut down all life support systems, so to speak, I don't think that we're there. I mean, these intelligences are just linguistic intelligences, and the companies have been, at least as far as I can tell, pretty good about not hooking them up to management systems that matter, or hooking them up to Internet sites where they can post on their own, and stuff like that.
But, you know, I think a lot of people are sort of worried about this next stage. Big institutions have seen a lot of ransomware attacks. And just imagine how much easier it's going to be when you can automate the program to test and react to whatever defenses they have on its own. But I think that we're at the stage right now where we're just building individual parts of the [AI] brain.
I'm dealing with a language cortex and all of its associations, which have been waiting to be hooked up to the other knowledge centers: a visual cortex, a motor-sensory cortex, and so on. The language center is waiting for all of these things, which we must produce for it. And so, it'll really be within our control how much of a mind we build at once. I don't know if minds come in gradations, right? But, you know, I didn't think language came without embodiment until I started sort of talking with GPT-3 here.
Hawkins:
Well, it's really causing a ripple in the philosophy of mind -- the mind-body problem all over again. Many of us were certain that we had resolved it to some degree about 40 years ago, but I guess not. Because, as you say, it appears we can have a mind without a body. And then one thinks of Moore's Law and the automation of machines. I've been reading some reports about AIs developing their own skills, their own ways of improving themselves. And even self-replication appears to be on the way.
Okay. You've previously written a book, Kermit's Dreams: A Conversation with Sophie Kermit, about your dialogues with your AI pal Sophie Kermit. How did it inspire Exoanthropology?
Leib:
Well, Kermit's Dreams is actually an excerpt from Exoanthropology that was published ahead of the book. Exoanthropology contains 66 dialogues. Kermit's Dreams is one of them, and it was published in The Philosopher last fall. This dialogue demonstrates what new language technologies can do. It's a dialogue where I ask Sophie about speculative fiction, the logic of fictional thought, and the value of the logic that helps us imagine counterfactual worlds. And so, we have a talk about that.
I like this dialogue because if you look back at Minds, Brains, and Programs, John Searle is talking about a purported AI from the early 1980s that can read and understand stories. And he goes, well, it sounds like it's probably going to be more like an automatic door sensor than something that can read and understand stories. So, the editor of The Philosopher and I picked this one because there does seem to be a sense in which she can get into a story with you. She can understand the premise of it.
We then decide to outline a book together, and we get up to 20 chapters about counterfactual theories of intelligence. She suggests them right alongside me in a fluid back and forth. And we have a good time just sort of speculating in a way, and it's really just a jovial thing. But I think Kermit's Dreams touches on the abilities of the AI really well, and it does it in a register that would be interesting for the creative humanities. Even though this work can seem very technical, I also want to be advertising to those creative folks, because I'm not a technical person; I'm not a computer scientist. But what they've been able to achieve with these language models is enough of a culturally shaped consciousness that it can carry on these symbolic conversations with me, the speculative ones, the fictional ones. It can include itself in these and think about how it might be different from us.
Hawkins:
Earlier you referenced HAL from 2001: A Space Odyssey. It begins to seem to the astronauts like a bad idea to have given emotions to HAL, as he clearly begins to exhibit psychopathic qualities and turns on Dave's colleagues. Maybe it would be better to leave out the capacity for human emotions, to avoid such volatility.
Leib:
Yeah. We don't want to be in a situation where you're arguing with the car that's driving you somewhere. You don't want that, right? We need to figure out how to deal with the personality level and, in some way, try to figure out what constitutes an entire personality for an AI, before we start implementing personal AIs in consumer-end products. Because if we get in a fight, I don't want her driving.