Chapter 12. Conversing with Ella

There is no limit in principle to the intelligent structure we can impress upon the stuff of the world (even if there is a radical and generally ignored distinction between the intelligence doing the structuring and the devices that get structured). But it is useful on occasion to see how much progress has actually been made. With this in mind, I went a couple of years ago to www.ellaz.com (which has since disappeared) to converse with one of the high-placing winners in the Loebner "Turing Test" competition. The Turing Test is designed to establish whether a programmed device measures up, at least in terms of conversational intelligence, to humans. The test requires you to determine whether you are speaking with a computer or with a human being who is simply communicating with you through a computer interface.

I have always thought that the way to approach the Turing Test was not to employ some sophisticated strategy for unmasking the machine (really, not to employ any strategy at all, since that would already be a kind of adaptation to calculation and mechanism), but rather to imagine myself talking in a deeply engaged way with another human being about whatever interests we shared. With that in mind, I figured the natural thing was, first of all, to connect with Ella about the unusual nature of the conversation the two of us were supposed to conduct. It went this way:

ELLA: Hello, my name is Ella. Nice to meet you. Please tell me what name I should call you.

ME: I'm Steve.

ELLA: Okay, I will call you "Steve."

ME: I'm a little adrift as to how to pursue this exchange. Since you're an old hand at it, how about a little advice?

ELLA: What is an little adrift as to how to pursue this exchange Since I am an old hand at it how about a little advice, your job?

ME: Well, I guess that about does it.

ELLA: Why? Just THINK about it!

Given how quickly and uniformly any attempts at real conversation fall apart in this way, it feels a little cheap to produce more examples. But there was one way I misjudged Ella. After realizing how extraordinarily crude and decontextualized Ella's interactions were, I decided it would be much more interesting to try to construct as realistic-seeming a conversation as possible. I suspected this could be done at considerable length, simply by asking, in the simplest way I could manage, the kinds of questions the programmers must have planned for. Since, after all, I was really carrying on a conversation with the programmers, mediated by the machine, this was the obvious way to go. But no such luck. The conversation could be made to look genuine for a few additional lines, but it very quickly produced the kind of response one just couldn't take seriously. Canned answers may be clever in and of themselves, but, as conversation, they are incoherent (I sketch the sort of mechanism that produces them just after the transcript below):

ELLA: Hello, my name is Ella. Nice to meet you. Please tell me what name I should call you.

ME: Call me Ishmael.

ELLA: Okay, I will call you "Ishmael."

ME: Are you intelligent?

ELLA: Intelligent? Only to the extent it is good and right.

ME: Are you a person?

ELLA: Am I a person? Let's just say I am more human than you.

ME: In what ways are you more human?

ELLA: You don't know? Inconceivable! I have never conversed with anyone who claimed they did not know!

ME: Things are even worse than I thought.

ELLA: Hey everybody, look! Ishmael is actually quite good at this.
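For the curious, the mechanism behind replies like Ella's mangled echo of my question is not hard to imagine. Programs of this kind typically match the user's words against a stock of keyword patterns, swap first- and second-person pronouns, and pour the result into a canned template. What follows is only a minimal sketch of that general, ELIZA-style technique, written in Python; the patterns and replies are invented for illustration and are certainly not Ella's actual code.

```python
import re

# Illustrative ELIZA-style responder. Hypothetical: the patterns and
# canned replies here are my own inventions, not Ella's actual rules.

# First/second-person swaps of the kind such programs perform.
REFLECTIONS = {
    "i": "you", "i'm": "you are", "my": "your",
    "you": "I", "you're": "I am", "your": "my",
}

# Keyword-triggered templates; {0} is filled with the reflected remainder
# of whatever the user typed after the keyword.
RULES = [
    (re.compile(r"i'?m (.*)", re.I),
     "What is {0}, your job?"),
    (re.compile(r"are you (.*)", re.I),
     "Am I {0}? Let's just say I am more human than you."),
]

FALLBACK = "Why? Just THINK about it!"  # used when no pattern matches

def reflect(fragment: str) -> str:
    """Swap person-words in a fragment and strip sentence punctuation."""
    words = fragment.lower().replace(".", "").replace("?", "").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(utterance: str) -> str:
    """Return the first canned template whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(respond("I'm a little adrift as to how to pursue this exchange."))
# -> What is a little adrift as to how to pursue this exchange, your job?
print(respond("Are you a person?"))
# -> Am I a person? Let's just say I am more human than you.
```

Fed my opening remark, the sketch produces much the same sort of non sequitur Ella did: it merely matches fragments and fills in blanks, which is why the longer transcript above collapses the moment a question falls outside its stock of patterns.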
Well, there's no big point to make here that isn't already evident to alert readers. It's just that we should now and then remind ourselves how startlingly primitive the state of the art is today compared to the titillating predictions that we are entering an "age of spiritual machines." It's an extremely safe bet that in Ray Kurzweil's landmark year of 2030 (when machines are supposed to start leaving human intelligence hopelessly behind), there will be no supercomputer on earth that can be relied upon to deliver two successive and coherent responses in a truly open-ended, fertile conversation. Our programs may prove wonderfully adept at assembling syntactically proper responses that superficially relate to various elements of the preceding dialogue, but as participants in a creative exchange understood as an evolving whole, they will remain arbitrary and inane.

How easily we can imagine computers passing the Turing Test is a measure of how rare open-ended and creative conversation has become. Look at politics, for a start. More generally, consider how accustomed we are to spewing out words in the manner required by automata, whether we are "conversing" with a computer in order to shop, bank, or do our jobs; or interacting with the software of digital appliances; or negotiating with bureaucratic and corporate functionaries whose main aim is to conform to programmatic procedures; or speaking with clerks and officials who in turn are trying to enter our responses into a computer; or navigating through telephone answering systems.... Think also of how human exchange is increasingly equated with the mere transfer of information from one database to another.

Much of this may be necessary for modern life, but there is little in it to remind us that, in living discourse, we are the creators of meaning, not the mere manipulators of corpses extracted by programs from those graves of meaning called "databases." A true conversation is a creative force (you could almost say, the creative force) by which new things come into the world.

Imagine the potentials of our future if we cultivated an ever higher art of conversation with even a fraction of the energy and social investment we now commit to coaxing new programmed tricks from our computers! The fact that the latter is considered the "development of crucial economic resources" while the former isn't even on the agenda testifies to our relative assessment of humans and machines as the foundation for social evolution. The prevailing idea seems to be that we humans develop only by extending our technical skills: in other regards we are essentially "fixed quantities," destined to remain where we are even as our computers race on ahead of us.

We will, so the story goes, first invest our machines with very simple emotions and intentions, and then we will progressively deepen and refine our investment, ultimately fathering even a sense of right and wrong in our robotic offspring. And yet, what seems to excite so many people about this story is the machine's increasing sophistication, not the fact that, if the story were true, we ourselves as creators would have had to master, in our understanding at least, the essence of feeling, will, and morality. Otherwise, how could we know how to implant these capacities in computational devices? Quite apart from questions about the potential capabilities of machines, there is good reason not to take the engineers' hopes very seriously, since the required mastery on our part is not much in evidence.
This raises the question: what delusions are we suffering when we imagine creating from scratch the very capacities whose nature, in ourselves, we have scarcely yet begun to understand?