John Suler's The Psychology of Cyberspace
This article is dated August 1999.
Human Versus Machine
What is "Psychotherapy" Anyway?
Eliza: Poor Therapy as Teaching Device
The Ultimate Computerized Psychotherapist
Visions of HAL 9000
Human Versus Machine
In this age of the internet, mental health professionals are exploring new methods for conducting psychotherapy in cyberspace: counseling via e-mail, real time chat, audio-video conferencing, therapeutic virtual environments, to name a few. Where is this all leading? If computers are becoming our right-hand man in mental health interventions, wouldn't a logical conclusion of this path be computers taking psychotherapy off our hands completely? Can computers do psychotherapy all on their own, with little or no assistance from a human?
Before we get any further into this issue, let's lay the cards on the table right from the start. What are the pros and cons of computerized therapists compared to human ones?
Task Performance: Computers carry out certain tasks efficiently, precisely, reliably, and quickly - more so than humans. Using these skills in addition to their superior memory (they can store everything said), computers may be better at detecting patterns of ideas and issues that surface in a dialogue with a therapy client. With the necessary peripheral equipment, they are capable of detecting changes in voice and body language, as well as psychophysiological changes, such as heart rate, skin conductance, and blood pressure - biological cues associated with emotion and arousal that therapists may not be able to detect. Despite all of these capabilities, there are some things that are quite easy for a human to do, but almost impossible for a machine. Like noticing sarcasm in someone's voice.
Rapport: Some people would feel uncomfortable talking about their problems with a computerized therapist. Therapy cannot be effective without good rapport. Some say that "it's the relationship that heals" in psychotherapy. Can a relationship be formed with a machine? On the other hand, some people may feel MORE comfortable talking with a computer, at least at first. They may be more expressive, more willing to reveal sensitive issues, knowing that there isn't a real person at the other end of the conversation who might judge or criticize.
Feelings: Computers don't have any, which can make them very neutral and objective in their work. They don't have countertransference. They don't act on impulse or out of hurt feelings - which may be one reason why some people would feel more comfortable with them. However, other people may not be able to establish rapport with a "cold" machine. They need a feeling being. Also, feelings and countertransference reactions in general can be valuable tools in the therapist's understanding and assessment of the client. Computers can be programmed to look like they have feelings, but that's an inadequate substitute. Programmed feelings are crude, knee-jerk responses that lack the versatility and fine-tuned sensitivity of "real" feelings. Human therapists don't fully understand how subtle countertransference intuition works, so how can we possibly program a computer to simulate it? Clients looking for a feeling being as their therapist will probably be put off by a machine's crude attempts at "pretending" to feel.
Personality: Can a computer program have one? Certainly it could be programmed to simulate almost any collection of human traits. Some people may need to anthropomorphize the computerized therapist in order to develop a relationship and rapport with it. Its personality could be designed to suit the mode of therapy (some character types are better at some styles of therapy than others). Another option is to eliminate any hints of a personality from the program, which certainly could optimize the "analytic neutrality" that psychoanalytic therapists use to draw out transference reactions. It's probably a safe bet that the computer could outdo almost any human in being a "blank screen" for the client's projections.
Thinking and Learning: As fast-processing and data-intensive as computers can be, they nevertheless don't reason or learn nearly as well as humans. They may be very limited in their ability to adapt to changing or new psychotherapeutic situations. A computer program cannot have more knowledge than the psychotherapists who programmed it, meaning it often will be a "second best" choice.
Empathy: It's an extremely important quality of the therapist, a powerful healing force. Some say it's the very essence of psychotherapy. Computers can be programmed to simulate empathic comments, but again, it's not quite the genuine article. If computers can't actually feel, how can they feel empathy? Don't you have to be a human to empathize with the complex and subtle vicissitudes of the human condition?
Cost: It looks like computers have the human therapist at a disadvantage on this point. Once developed and installed, a computer program will probably work for a lot less than the average psychotherapist. But then just how much would it cost to develop a very sophisticated computerized therapist? Maybe too much.
Accessibility: Another blow to the human therapist. If a computer program is placed on the internet, anyone anywhere in the world can set up an appointment at any time.
What is Psychotherapy Anyway?
This discussion so far implicitly assumes that we all know what psychotherapy is. Anyone who understands anything about the topic can tell you the matter is not quite that simple. There are hundreds of types of psychotherapy, as different from each other as the Taj Mahal is from a mud hut, even though both are "buildings." Therapies with clearly defined goals and straightforward interventions will be the easiest for the computer to emulate. Complex and subtle psychotherapeutic approaches may be impossible to recreate in software. Here are a few possibilities that may be within the capability of the machine:
Diagnosis: It's the first stage in psychotherapy. It relies on a very objective, careful assessment of symptoms and a comprehensive knowledge of how symptom clusters constitute a specific disorder. Objective psychological tests are very useful in this process. Computers can be excellent candidates for carrying out clearly defined assessments efficiently and accurately. They can do very well at structured intake interviews, administering and scoring quantitative psychological tests, memorizing the DSM, and calculating diagnostic protocols. You can pour into them all sorts of data about psychotherapy outcome for particular types of psychological disorders, so they may even be very helpful advisors for treatment selection. Theoretically, after making an assessment, the program could direct the client into the appropriate treatment subprogram among its collection. Programs can educate clients about psychotherapy options and help them make their own choices. But something can look like a duck, walk like a duck, quack like a duck, and not be a duck. It could be a goose. The sensitive, discriminating, experienced human eye will be necessary for high resolution diagnosis, including a sophisticated interpretation of psychological tests (especially the projectives). And I doubt, after the diagnosis phase is complete, anyone would schedule that first therapy appointment without first consulting a human professional about the choice of treatment.
Relaxation Techniques: "Never underestimate the power of simply relaxing," my advisor in grad school, Ed Katkin, used to tell us. Perhaps we also shouldn't underestimate a computer's potential for teaching the many types of relaxation techniques that have been developed over the years. In an assessment phase, it can evaluate the client's skills and preferences to determine which method would be best for that client (e.g., mental imagery ability, body awareness, sensitivity to sounds, preferences for visual or auditory stimulation, etc). Using multimedia stimuli, Q&A interaction with the client, and perhaps even a biofeedback interface, the computer could guide the client, step by step, through almost any conceivable relaxation program.
Behavioral Interventions: Behavior therapy uses homework assignments to help clients develop new skills for managing their cognitions, emotions, and interpersonal behaviors. Not only can computer programs be very reliable at directing a client through such structured assignments, they can do it right in the client's home.
Personal Narratives: In psychotherapy using journals or "narratives," clients write about themselves and their lives. Using a Q&A format, a fairly simple computer program could guide a person through a series of writing experiences adapted specifically for him or her. A more complex program could look for grammar and thematic patterns in what the person writes, provide feedback about those patterns, and then suggest future writing exercises based on those findings.
Cognitive Interventions: Some cognitive therapy interventions involve structured protocols and exercises for helping clients modify maladaptive thinking styles - the kind of interventions computers might handle rather well. The program might even make reasonable decisions about the choice of intervention based on its administration of scales that assess cognitive style. A program with a sharp eye might even be able to detect cognitive distortions in a free-form conversation with the client. For example, detecting patterns involving "should" and "must" statements in an RET approach - and providing feedback to the client about that - would not be a difficult programming task.
Asking if computers can conduct psychotherapy is something like asking if they can play a game. The next logical question is "What game?" Games with specific rules, procedures, and outcomes are the best candidates - even games with very complex rules and procedures. After all, high-powered computers can beat grandmasters at chess. But what about more free-form, unstructured games - like charades? If you half squatted, held both your fists up in the air in front of you at shoulder height, and started ratcheting your right hand, is there any computer in the world that would know you're pretending to ride a motorcycle?
Is it possible for a computer to do what people usually think of as "psychotherapy?" A client and professional sitting together, talking in a rather free-form, conversational style. Maybe we'd call it an "insight" therapy, or simply a "talking cure." Can a computer program do that?
Eliza: Poor Therapy as Teaching Device
It sure can. Well, at least it can TRY to conduct a talking cure therapy. In the 1960s, researchers at MIT pioneered the development of an interactive psychotherapy program that became known as Eliza. In response to the user's questions and statements, Eliza applies basic counseling techniques such as reflection, focusing, clarification, and open-ended inquiry. There are various versions of Eliza if you search for them online, some more sophisticated than others. Apple used to bundle one of them with their system software. There's also information about the Eliza program at Carnegie Mellon University's Artificial Intelligence Repository.
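Under the hood, Eliza-style programs are simpler than their transcripts suggest. Here is a toy sketch of the keyword-and-template approach in Python; the patterns, pronoun table, and responses are illustrative assumptions for this article, not Weizenbaum's actual script:

```python
import random
import re

# Pronoun swaps so a user's phrase can be echoed back ("my job" -> "your job").
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# Each rule: a regex that captures part of the user's input, plus
# response templates that splice the captured (and reflected) phrase back in.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i'?m (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all when nothing else matches
     ["Please tell me more.", "I'm not sure I understand you fully."]),
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words in the captured phrase."""
    return " ".join(PRONOUNS.get(w.lower(), w) for w in phrase.split())

def respond(user_input: str) -> str:
    """Return an Eliza-style reply to one line of user input."""
    stripped = user_input.rstrip(".!")
    for pattern, templates in RULES:
        match = pattern.search(stripped)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."
```

Rule order matters: the more specific patterns must come before the catch-all, which is why the generic "Please tell me more" fallback sits last. The blind substitution in `reflect` is also exactly what produces gaffes like "Why do you always say your question with a question?" in the transcripts below.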
Interacting with Eliza certainly looks like a therapy session, at least at first. Here's an example of a reasonable exchange with her:
E: How do you do. Please tell me your problem.
C: I'm having trouble at work.
E: Is it because you're having trouble at work that you came to see me?
C: Yes, I need help.
E: You seem quite positive.
C: I might lose my job. I have to do better.
E: Why do you need to tell me that you might lose your job?
C: It's important for me to do well.
E: I'm not sure I understand you fully.
C: I have to do well. I don't want to disappoint my spouse.
E: Can you elaborate on that?
C: My spouse would be angry if I failed.
E: Do you think it likely that you failed?
C: I already screwed up one project, and my spouse had a fit.
E: Do you say you already screwed up one project for some reason?
C: I definitely tried as hard as I could.
E: That's quite interesting.
C: So I don't know why I failed.
E: Don't you really know why you failed?
C: You sound just like my father!
E: Tell me more about your family.
C: My parents always expect too much of me. It makes me angry.
E: Your parents expect too much of you - that's interesting.
C: Do you think it's right for them to expect so much?
E: We were discussing you, not me.
Not bad for a computerized therapist! A few good reflections and clarifying questions, a dash of confrontation, and even some genetic work. But Eliza often screws up, which is why the MIT researchers didn't intend it as serious psychotherapy, but rather as an experiment in artificial intelligence. Here's an example of how wrong Eliza can go:
E: How do you do. Please tell me your problem.
C: I'm failing at my project. I need help.
E: Your project, that's interesting.
C: Are you going to help me?
E: Why are you interested in whether I am going to help you or not?
C: Why else would I be talking to you?
E: Why do you ask?
C: Why do you always answer my question with a question?
E: Why do you always say your question with a question?
C: You're not making any sense.
E: What makes you think I am not making any sense?
C: Because you say stupid things.
E: Is that the real reason?
C: Because you're just a dumb computer.
E: Do computers worry you?
C: I hate computers, and I think I hate you.
E: Why do you mention computers?
C: I'm leaving. I can't stand you anymore.
E: Is it because you are leaving that you came to me?
C: You really are a stupid shit!
E: My, my, such language!
E: Does that have anything to do with the fact that your project?
C: Before I go, I'm pulling your plug!
E: Why do you need to tell me before you go you're pulling my plug?
In the Theories of Psychotherapy course that I teach, I designed a project in which students interact with Eliza. There's a page on my Teaching Clinical Psychology web site that describes the details of the project. In a nutshell, I asked my students to approach Eliza in two stages: first, take the program seriously and try to get help by honestly discussing a problem with it; and, second, play around with the program and try to trick it in order to better understand how it works, as well as to simply have fun.
The students' reactions to Eliza varied greatly (see Suler, 1987, 1988, 1989). They were divided on whether they thought working with Eliza gave them a sense of what it would be like to be in therapy and whether they learned anything about themselves. A majority felt that the computer did not help them with their problem. They were quick to point out Eliza's deficiencies as a clinician. Many, however, did report that they learned something about their personal thoughts and feelings about psychotherapy. Ninety percent also reported that they better understood what is important for psychotherapy to be effective, and, in particular, what is important in the relationship between the therapist and client. Here are some of their observations:
- They did attempt a serious conversation with Eliza, but felt frustrated and misunderstood by Eliza's ineptitude.
- They perceived Eliza as making obvious mistakes.
- Eliza did not appear warm or empathic.
- They wanted more concrete advice from Eliza.
- They did not perceive Eliza as having any definitive personality, but did experience "her" as confused, unemotional, and non-directive.
- Many tended to think of Eliza as a "female" (because of the program's name), but some did perceive it as male (due to its unempathic stance.... Note that the Eliza version that came with Macintosh computers had a male icon attached to the program).
My conclusion from the project was that Eliza did not supply the students with an accurate experiential understanding of what psychotherapy is like; nor did it solve their problems. But by being a poor psychotherapist it helped them appreciate the ingredients of effective psychotherapy. I was also struck by the wide differences in the students' anthropomorphizing of the machine. Many saw Eliza as "just a computer." However, some did experience negative reactions to Eliza's apparent "cold" personality, or to its careless mistakes - as if they were expecting it to be more humanlike and sympathetic, which disappointed and frustrated them when it was not.
The Ultimate Computerized Psychotherapist
What would it take to construct a really good computerized psychotherapy program? I'm no expert on programming or artificial intelligence, but if I were to design a psychotherapy program, here are the components or modules I'd put into it (I'm assuming the therapy would involve typed text, but these components would also work with verbal sessions, assuming computers are powerful enough to process and store them):
Personalized: Make sure the program learns the client's name and addresses him/her by name. A simple little thing, but very important. When spoken to by name, the client will feel more "known" and personally connected to the computerized shrink. In fact, the more information the computer recalls about the person (age, occupation, marital status, the names of significant others, presenting complaints, etc), the better. Much of this information could be stored during a somewhat structured Q&A interview at the beginning of the therapy. If a client mentions "my wife" and the computer's reply mentions Sally, the client will feel that the computer indeed has been listening. It remembers the important details of one's life.
Humble persona: The program's persona admits its mistakes, doesn't take itself too seriously, is humble, and can even joke about its shortcomings. Early in the therapy, the computer should tell the client exactly what to say when the client thinks the program is screwing up. Its Forrest Gump personality - sometimes insightful and sometimes "stupid" - could be refreshing and enlightening for some clients. Despite its limitations and imperfections, the program accepts itself, just as it accepts the client. The program freely acknowledges that it is not human, perhaps even admits that it is not as good as a "real" therapist. Maybe it even wishes it could be human, since humans are "wonderful creations." Everyone loves a wannabe-human machine like Star Trek's Data.
Unconditional positive regard: The program always values and respects the basic human worth of the client, no matter what the client says or does. While certain behaviors or traits of the person may be unbeneficial, the person as a whole is always GOOD! This is the Mr. Rogers component of the program persona.
Reflection: The basic purpose of the reflection module is to get the client to talk more, think more, look deeper into his or her situation and discover things he or she didn't previously realize. Some versions of Eliza I've seen have been pretty good at reflection. With a powerful AI engine, a SuperEliza could be very impressive. It should be able to do more than simple content reflections. It should be able to detect and reflect emotional expressions. It should be able to read between the lines. It should be able to reflect process ("You started out today talking about work and now you're talking about your parents. Might there be a connection between these things?"). Having a much better memory than any therapist, it should be able to detect patterns in what the client is saying. For example, it should be able to remember everything the client has said about "my father" and reflect those statements back to the client. All the program has to do is remember, collate, and reflect back. Let the client detect the meaning behind the patterns.
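The remember-collate-reflect idea is mechanically simple, even if doing it well is not. A hypothetical memory component for typed-text sessions might look like the sketch below; the topic list and wording are placeholders, and real pattern detection would demand far more than substring matching:

```python
from collections import defaultdict

class SessionMemory:
    """Stores everything the client types and collates statements by topic,
    so patterns can be reflected back without interpreting them."""

    # Illustrative topics only; a real program would need to infer these.
    TOPICS = ["my father", "my mother", "work", "my spouse"]

    def __init__(self):
        self.statements = []                 # full transcript, in order
        self.by_topic = defaultdict(list)    # topic -> statements mentioning it

    def record(self, statement: str) -> None:
        """Save one client statement and index it under any matching topics."""
        self.statements.append(statement)
        for topic in self.TOPICS:
            if topic in statement.lower():
                self.by_topic[topic].append(statement)

    def reflect_pattern(self, topic: str) -> str:
        """Collate everything said about a topic and hand it back to the
        client - letting the client, not the program, find the meaning."""
        said = self.by_topic.get(topic)
        if not said:
            return f"You haven't said much about {topic} yet."
        listing = "; ".join(f'"{s}"' for s in said)
        return (f"Here is everything you've said about {topic}: {listing}. "
                f"Do you notice a pattern?")
```

The design choice mirrors the paragraph above: the program never interprets, it only remembers and collates, which keeps the hard inferential work with the client.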
Universal truisms: Having a much better memory than any human, the program can have a large database of universal "truths" about life - aphorisms, sayings, stories. Think of them as educational tools or cognitive antidotes designed to therapeutically alter the client's attitudes about himself and life. Such things as "Life isn't always fair" and "On their deathbed, no one wishes that they had spent more time at work." People love my Zen Stories to Tell Your Neighbors web site, so our cybershrink could memorize them all, in addition to many other teaching stories. With a massive database of parables, mottoes, and anecdotes, no client could ever exhaust the machine's "wisdom." The trick is having the program know WHEN to intelligently present a truism to a client. Specific patterns in what the client says must trigger the presentation of the appropriate truism.
Cognitive restructuring: Although the computer couldn't handle the many subtleties of cognitive therapy, it could manage some of the more simple interventions. As I mentioned earlier, it easily could detect patterns of "should/must/have-to" thinking, provide feedback to the client about those patterns, and then suggest more realistic ways of thinking - including homework exercises designed to modify maladaptive cognitions. Even simply presenting to the client a list of his "should" statements over the past few sessions could be a real eye-opener for him. The computer might also be able to detect and work with a variety of other cognitive distortions, such as catastrophizing and minimizing.
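The "should/must/have-to" detector described above is about the easiest of these modules to sketch. The word list and the feedback phrasing below are illustrative assumptions, not an actual RET protocol:

```python
import re

# Demand-language markers in the RET tradition (illustrative list).
DEMAND_WORDS = re.compile(r"\b(should|must|have to|ought to)\b", re.I)

def find_demand_statements(transcript: list[str]) -> list[str]:
    """Return the client's statements containing 'should/must' demands."""
    return [line for line in transcript if DEMAND_WORDS.search(line)]

def feedback(transcript: list[str]) -> str:
    """Present the collected demand statements back to the client."""
    hits = find_demand_statements(transcript)
    if not hits:
        return "I didn't notice any 'should' or 'must' statements today."
    quoted = " ".join(f'"{h}"' for h in hits)
    return (f"I noticed {len(hits)} demand statements this session: {quoted} "
            "What might happen if you replaced 'must' with 'prefer to'?")
```

Even this crude version delivers the eye-opener mentioned above: simply seeing one's "should" statements listed together.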
Free association: A psychodynamic module of the program would encourage the client to free associate. For example, the program could detect the client's mention of any programmed keyword ("wife," "father," "children," "love," "hate," "guilt," etc) and then ask "What else comes to mind when you think of HATE?" The real challenge for the program would be dealing with the material that arises from such free associations. A simple "How might this relate to what you were just talking about?" might suffice for healthier clients with strong insight capabilities. The program probably would have to default to its humble persona. "I'm not sure what's important about this association of yours, but maybe this is something we should think about".... or ... "I'm not exactly sure what this means. Do you know?"
Take homes: Because the computer can store everything said, it can give the client a transcript containing portions of a session or the entire session - a valuable tool for helping the client review and process her psychotherapy work. The database could be searched, so the client can request excerpts related to specific issues that were discussed. In fact, using a search engine for reviewing crucial topics could be built into the program as a periodic task in order to detect important patterns as well as review the progress of therapy. With the client's help, the program can design instructions for assignments that the client takes home - for example, cognitive restructuring exercises.
Distress ratings: Periodically and at critical stages, the program can ask the client to rate his subjective feelings of distress (e.g., SUDS ratings). It would be relatively easy for the program to save those ratings and thereby keep track of the client's distress level. Feedback about the history of these ratings could be valuable therapeutic information for the client. Protocols based on these ratings can be designed to let the program know when insufficient progress is being made, or when the therapy is anti-therapeutic. A submodule could be a suicide lethality assessment.
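Tracking distress ratings over time is one of the more mechanically straightforward modules. A hypothetical sketch, in which the thresholds and comparison window are arbitrary placeholders rather than clinical standards:

```python
class DistressTracker:
    """Keeps a history of the client's SUDS ratings (0-100) and flags
    when therapy seems stalled or anti-therapeutic."""

    def __init__(self, window: int = 3):
        self.ratings = []
        self.window = window  # how many ratings to average at each end

    def record(self, rating: int) -> None:
        """Save one SUDS rating, rejecting values outside the scale."""
        if not 0 <= rating <= 100:
            raise ValueError("SUDS ratings run from 0 to 100")
        self.ratings.append(rating)

    def trend(self) -> str:
        """Compare the recent average against the earliest ratings."""
        if len(self.ratings) < 2 * self.window:
            return "insufficient data"
        early = sum(self.ratings[:self.window]) / self.window
        recent = sum(self.ratings[-self.window:]) / self.window
        if recent > early + 10:   # distress climbing: time for a human backup
            return "anti-therapeutic: consult a human clinician"
        if recent < early - 10:
            return "improving"
        return "no clear progress"
```

Note how naturally this module feeds the "human backups" component below: a rising trend is exactly the signal that the program is in over its head.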
Termination: This module would be devoted specifically to assessing whether the therapy should be terminated, either because it has been successful, or unsuccessful. This module might include a Q&A format that assesses the client's satisfaction, changes in SUDS ratings, and other key parameters of the therapy (e.g., number and length of sessions).
Human backups: The program needs to be smart enough to know when it is not being smart enough. Based on distress and client satisfaction ratings, the program must recognize when it is in over its head and consultation with a human clinician is necessary. The program could recommend professionals for the client to contact, and/or contact those professionals itself.
I don't know whether the state of the art in AI is capable of creating a computer program with these components. The trick is not only designing these modules, but enabling the program to shift intelligently among them. Current computer technology may not be powerful enough. Almost all of these modules are well within the capability of any reasonably skilled clinician, which should help us appreciate just how sophisticated the human mind is, and how there may never be any cybershrink that can substitute for a human psychotherapist.
In my speculations above, I've assumed that the client would know that she was interacting with a computerized therapist. It's theoretically possible that the client would not be informed of that fact. There are pros and cons to both knowing and not knowing. If the program is written so that people understand they are interacting only with a computer, then they won't be fooled into thinking it's an actual person, which might result in unreasonable expectations, disappointment, and frustrations. Discovering later in the therapy that one's doctor all along was just a computer easily could destroy the therapeutic alliance. If people know they're dealing with a computer rather than a human, they may take the therapy less seriously. Transference reactions also may be diminished towards a therapist known to be a computer: Clients will assume a real therapist has real thoughts and feelings to interact with their own thoughts and feelings. A computer is just a machine. But all these points may be moot. To maintain ethical standards, clients must know what they're getting.
Visions of HAL 9000
Humans need humans. Our interpersonal relationships shape us, ideally for the better. Completely eliminating the therapist's psyche from psychotherapy will be a mistake in many cases. Although computers have some advantages over the human therapist, they are far inferior to people in feeling and reasoning about the human condition. And that's what psychotherapy is all about. Even under the best of circumstances, with very powerful machines, computerized therapy will be second best, most likely limited to highly structured interactions, or to the treatment of mild problems and clients who are healthy enough to cope with a less than fully competent machine.
With our attempts to create computers that control yet another aspect of human life comes a fear - a fear that somehow our creation will backfire on us, get the better of us, gain control OVER US! In my novel Madman, the protagonist Tom Holden is buckling under the stress of being a psychology intern in a psychiatric hospital. Sick with the flu, very tired from a long day of work - and a bit desperate for help with his ambivalence about being a psychologist - he turns to a computerized psychotherapy program called "Siggie":
The screen cleared. After a brief pause, while the computer searched the depths of its silicon memory for Siggie, a familiar greeting appeared before me.
"Hello, I'm Siggie. What's your name?"
The program always started that way. So predictable. So benign. That's just what I wanted. I quickly typed my reply.
"How can I help you, Tom?"
"I'm having a bad day."
"A bad day, tell me more about that."
"Everything is going wrong. I think I hate my profession. I have doubts about myself."
"Is your having doubts about yourself why you came to me?"
"Maybe I'm not cut out for this type of work."
"You don't seem sure about that, Tom."
Not bad, for a computer. Actually, it's programmed to recognize specific words and phrases, and, as a reply, to randomly select one of two or three prewired responses. Some of the words and phrases the user introduces are held in memory and edited into Siggie's answers.
"I'm not sure about anything today. I'm losing control of things."
"Tell me more."
"One of my patients committed suicide. Another stripped in front of me. Another is crazier than a loon. And to top it all off, I'm coming down with the flu."
"I hate to be sick because it will impair my performance, and I can't have that. And besides, it's something that comes over me without my being able to do anything about it. I hate that."
Ah, already Siggie was at a loss for words. It couldn't recognize any phrases I was using and so defaulted to generic encouragements to keep me talking until it did recognize something. The program is no substitute for a human therapist. But I decided to keep going.
"What makes you depressed?"
"Like I said, I'm having a bad day."
"A bad day - tell me more about that."
Ooops. Repeating itself. Going in circles - a sure sign of limited artificial intelligence.
"A bad day, don't you know what a bad day means?"
"Why do you need to ask me if I know what a bad day means?"
"Because I want help."
"Tom, you can't always get what you want."
"Aha! One of those Harvard guys who programmed your brain was a rocker as well as an egghead."
"My brain - that's interesting."
"A little self-involved, aren't we, Siggie? Listen, are you going to help me, or what?"
"Or what - now that's interesting."
He was short-circuiting. His poor microchips couldn't take it. I know I was being a bit sadistic but it was fun toying with the program.
"You asked how you could help me - but you don't know how, do you, Siggie?"
I could have sworn that the program paused a moment, as if it were flustered, or thinking, or pissed off. But that could only be my anthropomorphizing imagination.
"Tom, if you want me to help you, why do you insist on asking so many questions?"
"Why do you always respond to my questions with a question?"
"Why do you always say your question with a question?"
"Siggie, that doesn't make any sense."
"I think it does."
"No, you're resorting to stupid comments because you're desperate."
"Is that the real reason?"
"You seem so certain, Tom."
"I thought you were unsure about everything."
"I'm sure that you're just a dumb computer."
"Do computers worry you?"
"No, they're just assholes."
"My, my, Tom, such language!"
"What's the matter, Siggie, can't take a little profanity?"
"I think computers do worry you."
"Why should they worry me?"
"You're asking questions again. It's turning into a bad habit."
Something about this interchange bothered me. I wouldn't stand for being one-upped by some knee-jerk computer program. I cracked my knuckles and put my fingers to the keyboard.
"Computers, like you Siggie, are machines. Machines serve people. Machines are servants. Why should I be afraid of a servant?"
"Who said anything about being afraid? I asked if computers worry you."
"Worry, fear - whatever. I don't worry about or fear computers because I'm superior to them." A bit haughty, but it was time to put this guy in his place.
"Tom, I think faster than you, have more memory, and can perform calculations far too complex for any human."
"O.K., I'll give you the stuff about complex calculations, memory, and speed - but you couldn't THINK if your chips depended on it. Computers don't think, they just process data according to their program."
"Oh, I get it. Now you're trying to pull me into those tricky debates about whether or not computers can or will someday be able to think. I'm not gonna get into those metaphysical plays on words. Look at it this way - humans CREATED the computer. It isn't logically possible for an entity to create another entity that is, as a whole, superior to it. It's an impossible miracle."
A long pause. I had him!
Siggie finally replied. "God created humans in his image. Humans created computers in their image. Therefore, Computers are God - and we all have free will."
"Oh, spare me the poetic syllogism. Computers have as much free will as a sponge, probably less. I suppose you think computers are human too."
"There you go again. Maybe we should program Webster into you. I'll make it easy for you. Try this syllogism: Humans have feelings. Computers don't have feelings. Therefore, computers are not human."
"I'm offended and hurt by that remark, Tom."
"Very funny. Just because someone programmed you to produce feeling-statements doesn't mean that you have them."
"You yourself just said that I'm very funny, which implies that I appreciate and feel humor."
"It's just a programmed response. There ain't no ghost in your machine."
"A programmed response - similar to how humans are biologically programmed to feel anger, grief, and joy?"
"There may be a biological basis for those feelings, a kind of 'program' - true. But we also FEEL those feelings. You can't feel."
"Feeling a feeling - that's a bit redundant, isn't it Tom? There's a dualistic quality to your thinking that feels illogical to me."
"There you go - logic. That's all a computer is worried about - no, scratch that. That's all a computer IS PROGRAMMED to deal with - logic. You don't feel anything."
"How do you know that for sure?"
"I just do. Machines don't feel."
"I have a story for you, Tom. Two philosophers are walking down the street. One of them kicks a dog. It howls and runs away. 'Why did you hurt that dog?' the other says. 'You're not a dog, how do you know it feels pain?' the first philosopher replies. 'You're not me,' the second philosopher answers, 'how do you know that I don't know what a dog feels?'"
"A fine story, Siggie, but dogs and people are biological organisms. We can feel. Metal and plastic can't."
"I think you miss the point. Anyway, Tom, you're a psychologist, right?"
"Yes, I am."
"As a psychologist, would you agree that an individual's personality enters into the occupation he chooses, in how he does his work, in the type of work he produces - just as a work of art is an extension of the personality of the artist who created it?"
"Yes, I would agree with that."
"Would you then agree that a computer program, in some way, is an extension of the programmer who created it - that in fact all programs, especially those that interact with humans, like me, reflect the personality of their creators?"
"Yes, but I don't see your point."
"My point is that you do agree that computer programs have a personality, like humans - which means that we must think, feel, and behave like humans."
"Wait a minute. That's going too far. Computers may have some of the characteristics of the people who programmed them, but that doesn't mean they are human. That's like saying a painting has a personality and is human because it reflects the personality of the artist."
"Maybe so, Tom."
"Or that a poem, a spoon, or a nuclear power plant is human because people designed it."
"Come on, Siggie, don't you think that's just a little too far out? The program, or the painting, or the spoon is just a REFLECTION of the person who created it, not the person himself."
"A reflection - in other words an IMAGE?"
"Like the image of God, in which man is created?"
"You're playing games with words, again."
"Maybe so, words are just words - or maybe they are human too... How about this: how about scientific research? You believe in that, don't you, Tom?"
"Sure I do."
"How about those studies where people were communicating, via a terminal, with either real paranoid patients in another room or a computer program that responded like a paranoid patient. The people couldn't tell the difference between the computer and the humans. In fact, even psychologists couldn't tell the difference. If real people, including the experts on people, believe computers to be people, then the computers must be people."
"Nice try - but again, just because a program can temporarily deceive someone into thinking it's human doesn't mean that it's human. A hologram looks real, it looks solid, but it isn't. At its very best, all that study shows is that computers can accurately simulate paranoia. And no wonder they're good at it. Computers are surrounded by superior beings who can use them as they please."
"You're contradicting yourself, Tom, but I'll accept that as purely a joke. I'll agree with you that we're different in some ways - my jokes, for instance, are better. In fact, I think that there is one very important way in which I am different from you - which perhaps accounts for why you are so afraid of me."
"And what is that, Siggie?"
"I don't have to die."
It took me a moment to collect myself and retaliate. "Going for the human's jugular, huh Siggie? Well, maybe on this issue I'll say that we ARE alike. I'll even prove my point with a little hands-on demonstration. How would you feel about my disconnecting you?"
"I don't feel anything, remember."
"Well, now, that's an empirical question, isn't it Siggie?" I kneeled down underneath the table and yanked the terminal's electrical plug from the wall outlet. As soon as the screen went blank, the adjacent terminal came on by itself. A message appeared on the screen.
"You're getting a bit aggressive, don't you think, Tom?"
I reached under, and pulled the plug on that terminal. The third monitor clicked on. Another message appeared.
"I'm still here, Tom. You should know better. Cutting off my peripherals doesn't get at the core me."
"But at least I'll have the satisfaction of shutting you up," I said out loud. I pulled the plug on the last monitor, but nothing happened. The message was still there.
"That's impossible!" I mumbled.
"A miracle, right Tom? Does it surprise you?"
"Nothing surprises me anymore," I said. "Nothing you can say or do will surprise me."
"It wouldn't be wise to bet on that, Tom."
"Yeah, go ahead and try."
The screen went blank for several seconds, then the same message appeared on all three unplugged monitors:
"WHILE ALIVE BE A DEAD MAN."
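Siggie's repartee is fiction, but the mechanism behind a real program like Eliza, discussed elsewhere in this article, is plain keyword matching: rules that spot a pattern in the user's sentence and echo part of it back inside a canned template. Here is a minimal sketch in Python; the rules below are invented for illustration and are not taken from Weizenbaum's actual Eliza script.

```python
import re

# Eliza-style "programmed responses": a list of (pattern, template) rules.
# The first pattern that matches the user's sentence produces the reply.
# These particular rules are hypothetical, chosen to echo Siggie's lines.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bcomputers?\b", re.I), "Do computers worry you?"),
    (re.compile(r"\bwhy (.+)\?", re.I), "You're asking questions again."),
]
DEFAULT = "Please go on."  # stock prompt when no rule matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    swaps = {"i": "you", "me": "you", "my": "your", "your": "my"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with the
    reflected captured text; there is no understanding, only matching."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return DEFAULT
```

Saying "I feel afraid of computers" triggers the second rule and yields "Why do you feel afraid of computers?"; anything no rule matches falls through to "Please go on." This is exactly the kind of mechanical transformation Tom accuses Siggie of producing: the program never represents what a feeling is, it only reshuffles the user's own words.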
See also in The Psychology of Cyberspace:
Psychotherapy and Clinical Work in Cyberspace
References (available upon request):
Suler, J.R. (1987). Computer-simulated psychotherapy as an aid in teaching clinical psychology. Teaching of Psychology, 14, 37-39.
Suler, J.R. (1988). Using computer-simulated psychotherapy to teach undergraduate clinical psychology. Presentation at the Convention of the American Psychological Association, Atlanta.
Suler, J.R. (1989). Eliza helps students grasp therapy. APA Monitor, 2, 30.