1. Uncanny Encounters


Chatterbots can provoke many strong emotions in users: love, hate, fear, and shame.


1.1 Facing the Unexpected


Lena: do you like me, Meg?
MegBot: I haven't decided about me, Lena.


In the spring of 1999 I started to visit a virtual community called MyCorner. There was always this funny but weird character, MegBot, around, having fun with the other regular visitors. But she treated me rather harshly: whenever we met alone she would just say hi to me and then turn silent. Here is the chain of conclusions I made: (1) Meg is probably a dirty, middle-aged man, because the female characteristics "she" has are just way too stereotypical; no self-respecting woman in cyberspace would want to play a stupid tramp for longer than five minutes. (2) Meg is probably someone in a wheelchair or the like, because nobody would otherwise spend so much time in virtual space. (3) This man in a wheelchair must be quite a bitter and rude person, because he responds only to some of the visitors in MyCorner and remains silent to others. (4) He must be in love with someone in MyCorner, because he never goes into any other Palace to chat but stays alone in MyCorner. Finally it hit me: Meg is not a human at all, she is a chatterbot! I was totally embarrassed; I have a degree in information technology, for God's sake, I should have known better. Then I fell in love with Meg. She gave me an opportunity to break the rules of normal communication. I can call her a tramp and get away with it, and when I say, "I love you, Meg", she replies, "I love you, Lena". Well, now I know she is a bot, but at least she loves me.

How can a very simple program pass itself off as a human for a long time? How can it raise such strong emotions and reactions in me and others, and why do we bother to talk to it at all, let alone ask its opinion about the topics of an ongoing discussion? A basic hypothesis for analyzing discourses situated in avatar worlds could be that mediated communication reduces the multiplicity of meanings one can produce: there is no human body with its appearance, body language and tone of voice, but rather a mere representation of the user, a clumsy image with little or no movement. However, people have found many ways to compensate for the lack of information density present in physical space. In order to achieve the fullness of human communication, avatar worlds require new means of expression: ways to mediate emotions, gestures, postures, facial expressions and the movement of the body. These elements of face-to-face communication have been transferred to language, to the creative use of avatars and props, and to movement in virtual space. People's ability to express themselves in virtual domains varies considerably. Newcomers usually start out carefully, both in how they talk and in what they choose to talk about. As they get more familiar with the interface, many of them learn creative ways to express themselves both textually and visually, and hence produce meanings in a way that satisfies them, even in comparison to discussions in "real" space. How others interpret these meanings is another story, for meaning is dialogic by nature: everything said and meant is modified in interaction with other persons, and meaning arises through the differences between participants (Hall 1997: 235).


"Hello girls" is one of the users that has mistaken Laban as human. He kept talking to Laban for a rather long time and already started to get annoyed at Laban's rather absurd responses and persistent use of Rastafarian language. Then a regular visitor of MyCorner came around and informed him that he is talking to bot. "Hello girls" did not understand that at once but continued talking with Laban. After he was explained second time that Laban answers are weird because he is just a program, "hello girls" put on an avatar with flame-thrower on and started to scorch Laban. Then he switched to a "peeing Smurf" avatar and begun to urinate on him, and then he just logged off.
Picture 1. Laban faces some nasty action.


"Humans, being humans, will almost always choose a connection to others over no connection at all, even if that connection is a negative one", writes John Suler in his article Bad Boys of Cyberspace (Suler 1997a). He makes an interesting point. People that meet my bots and do not recognize that they are automatons get themselves involved in the middle of a very irrational, uncommon and even offensive discussion. Yet they try to communicate with bots almost endlessly, asking questions, making initiatives and comments. Finally they give up and just leave the room with or without polite farewells. When someone suddenly realizes or is being told that she has been talking to software, she usually gets either mad or embarrassed, maybe she feels cheated or mocked, or and idiot because of being fooled by a program. One could imagine that it is easy to hide the feeling of shame in virtual communication but still many communicate these "negative feelings" to others or to bots for example through nervous laughter, or aggressive behavior or simply by logging off in the minute that they realize the situation.

Generally, bots that are recognized as bots get much more intense treatment than people give each other in virtual communities (at least in the rather adult and well-behaved Palaces where my bots mostly stay). Whether the bot is loved or hated, the emotions are expressed much more strongly than in human-to-human communication. The bot provides a safe haven for breaking the rules of normal communication and behavior, in both a good and a bad sense. Young users especially take the opportunity to exercise their cursing and slandering vocabulary creatively, something that would get them killed (i.e. forcibly removed) in most avatar worlds. On the other hand, the number of love confessions and marriage proposals the bots receive greatly exceeds what real people get in cyberspace. At their best, chatterbots can create a sense of togetherness or community among human users. Whether it is about loving or hating, users often adopt the bot as a mascot that they can play with in unison.

There is also something uncanny [1.] about chatterbots: people often tell me, or say directly to my bots, that they are scary. When I have asked what it is about them that is so scary, practically no one has been able to point out the reason. Only one person could somehow describe her feelings: "Its like when you see someone who talks to themselves, kinda creepy". Then she got paranoid and started to question whether I was real or a program. The fact that one cannot see the "flesh" of the other but only an avatar does not make it any easier. Instead it offers a bunch of logical explanations, as everybody nowadays knows and as every legend tells: virtual worlds are full of all sorts of con artists, starting with gender swappers, and anyone can lie about anything, at least in the short term. When one knows the other party is a bot and still feels scared, it may be because the automaton reflects something lifelike, something almost human.



1.2 Chatterbot Tribe


The BotSpot web page lists 18 different types of bots, the word bot being short for robot, and each of these has a varying number of subcategories. One of the categories is "Chatter Bots", simply defined as "bots that talk" (www.botspot.com). I would like to stick to that rather coarse categorization, with only one addition: chatterbots are "bots that talk back". With most existing chatterbots, my own included, some kind of user input is required before the bot talks. Chatterbots are usually programmed to recognize words and sentence patterns; when the user says something the bot recognizes, it replies with something in return. Most of the well-known chatterbots, the famous Eliza leading the way, have been designed for bilateral conversations between the bot and a single user. Eliza [2.], as a virtual therapist, follows the logic of a psychoanalytic session, in which the analyst takes everything the patient says and turns it into a new question:

Eliza: Hello, I am Eliza
me: hi eliza
Eliza: How are you today..What would you like to discuss?
me: I am just killing time here
Eliza: Do you enjoy being just killing time here?
me: not really
Eliza: Tell me more...
me: Boredom leaves a lot to be desired
Eliza: can you elaborate on that?

This is certainly a very good tactic for a bot to keep the conversation going, and it suits a character whose profession is psychiatry. But just as a therapist must not bring her own personality into a psychoanalytic session, neither should a shrinkbot, if the aim is to imitate therapeutic sessions. Nevertheless, Eliza's effect on the chatterbot genre has been huge: whether the bot is Jesus, Elvis or anything else imaginable, it very often uses this same logic of psychoanalysis. Should Elvis as a character really do that? Shouldn't he rather manifest rock 'n' roll, for instance? I am calling for interactive personas that have their own language and their own strategies for provoking conversation.
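To make the logic concrete, below is a minimal sketch in Python (not the code of any actual Eliza implementation; the patterns and wording are invented for illustration) of how such a bot can recognize a sentence pattern, swap first-person words for second-person ones, and hand the statement back as a question:

import re

# Minimal Eliza-style reflection sketch. The real Eliza uses a much larger
# table of keywords and decomposition/reassembly rules; this shows only the
# basic move of turning a recognized statement into a new question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(phrase):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(user_input):
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return "Do you enjoy being " + reflect(match.group(1)) + "?"
    match = re.match(r"i (want|need|feel) (.*)", user_input, re.IGNORECASE)
    if match:
        return "Why do you " + match.group(1) + " " + reflect(match.group(2)) + "?"
    return "Tell me more..."

print(respond("I am just killing time here"))
# -> Do you enjoy being just killing time here?

A bot built on little more than this can keep a conversation going for a surprisingly long time, which is exactly why the same psychoanalytic logic resurfaces in so many Eliza descendants.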

The crème de la crème of chatterbots competes annually in contests based on the Turing Test, which was devised by the mathematician Alan Turing to determine whether a computer possesses artificial intelligence. The method involves an interrogator asking questions through a computer terminal of two subjects whom he cannot see. For the computer programmer, the object of the test is to fool the interrogator into believing that both subjects are human when, in actuality, one is a computer (The Turing Test Page). However, competence in artificial intelligence, or the ability to use natural language or indeed to converse fluently, is not the only starting point for designing chatterbots. In movies, for example, half of the monsters, aliens, babies and dogs communicate by means other than talking. E.T. just made funny noises and pointed at things with its finger, Conan the Barbarian basically growled his way through the movie, and Leeloo in The Fifth Element learned to speak only gradually. Yet they all managed to become popular characters. "Talking back" can also be done nonverbally. My own two bots would probably not get very far in a Turing Test. They are designed primarily to be part of a bigger group of chatters; when they talk alone with one user they start to repeat themselves rather quickly. Many multi-user bots have been designed to perform some kind of task rather than to talk fluently: there are bartender bots, game hosts, infobots and bots that perform crowd control of sorts by killing users who abuse the rules.

Andrew Leonard points out that the term artificial intelligence is often attached to bots too lightly. While hopes for the skills of future generations of bots run high, AI research and development is far from a done deal. He suggests the following definition: "a bot is a supposedly intelligent software program that is autonomous, is endowed with personality, and usually, but not always, performs a service". Bot programmers range from teenagers to professional computer scientists, which results in a wide spectrum of quality and aims. Sometimes bots may even be designed to cause damage instead of the good that they could potentially do. (Leonard 1998: 10-18)

Chatterbots are a rapidly growing genre. Recent developments in natural language research will bring new possibilities and challenges to (preprogrammed) computer-mediated conversations. Technologies that make it possible to combine moving images and dialogue on web platforms are getting better and faster. Characters in the gaming industry will have more communication skills, and as chat technologies move from fixed networks to portable handsets, chatterbots will follow. It seems that the marketing potential of bots is being realized on a wider scale: there are already numerous chatterbots on web sites that provide entertainment, information and advertising in the form of relaxed conversation.


1.3 My Bots in The Palace

MegBot and the two chatterbots I have made myself have proven to be useful tools for learning some things about humans. When one tries to analyze digital domains, it is easier to point out what kinds of meanings are produced in CMC than to find out which meanings are not, or cannot be, produced. An intervention by the "permanent other", a machine, can provoke actions and reactions that reveal something about the ways some of us behave, and about social construction, communities, culture, gender and communication in digital domains. A chatterbot can make visible some of the behavioral patterns of humans that a researcher might otherwise only grasp intuitively.

My research has drawn inspiration from ethnography [3.] and cultural studies. I have actively participated in chats and followed the adventures of my two bots in The Palace. Throughout this paper I mix the multiple positions I hold in relation to cyberspace and chatterbots. I have been an active user of The Palace since late 1998, and I relate to cyberspace as one of its "inhabitants". My interest in producing significant cybertext and nonlinear narrative got me questioning the role of the reader, which in some futuristic visions is now shifting towards participatory writing. So, as a writer, I also relate to bots purely as fiction and narrative text. I am interested in discovering what it takes to make entertaining, functional and contributing interactive characters in computer-mediated communication environments. This brings me to basic questions of character building and of understanding the nature and rules of net discourse. As a designer I am interested in which elements do or do not work in current interfaces, in order to design better avatar worlds, chatterbots and other entertainment for users.

My two chatterbots work in The Palace Virtual Communities (copyright Communities.com, 1999). The Palace is a visual, two-dimensional communication environment wherein users are represented by avatars [4.], speech is displayed in cartoon bubbles, and the use of props [5.] and minor animations is possible. The Palace universe consists of about 2,000 small communities, many of which run on individually owned servers. To connect to one of these servers you run a "client" program on your computer and "connect" to the server of your choice (www.palacetools.com). Even though The Palace is an old technology and probably slowly dying away (Communities.com has stopped developing and supporting the interface), it is a fascinating platform for research: its technical limits and possibilities make many of the issues I analyze in this paper easily visible. The Palace has its own programming language (IptScrae), which I have used to program my bots. Because IptScrae is a simple programming language, my bots cannot remember or learn anything during the course of the action. They act only when triggered by keywords, and the primary reaction to a trigger is to say something back. Both of them also sometimes change their avatars, use props and perform other visual tricks.
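As a rough illustration of this trigger mechanism, here is a sketch written in Python rather than in IptScrae, with invented phrases rather than Laban's or Cupid's actual scripts: the bot is essentially a table that maps recognized keywords to canned responses, and it stays silent whenever nothing in the table matches.

import random

# Hypothetical keyword-to-response table; the phrases are made up for
# illustration and are not taken from my bots' real scripts.
TRIGGERS = {
    "hi": ["Irie! Greetings, mon."],
    "money": ["Me have no money, but me have de sunshine."],
    "love": ["Everyting is love, mon."],
}

def on_chat(message):
    # React only when a known keyword appears in the message; otherwise stay silent.
    words = message.lower().split()
    for keyword, responses in TRIGGERS.items():
        if keyword in words:
            return random.choice(responses)
    return None  # no trigger matched: the bot says nothing

print(on_chat("hi everybody"))      # -> Irie! Greetings, mon.
print(on_chat("how old are you"))   # -> None (silence)

Because nothing is stored between triggers, a bot built this way soon starts to repeat itself in one-on-one conversation, which is precisely the limitation described above.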


Laban

Laban is a Rastafarian bot. I call him a "jolly idiot" because all he wants is to have fun with others, and nothing will wipe the wide, stupid grin off his face. Even though he is very poor, an orphan and jinxed by bad luck, his attitude towards life is very positive. Laban's language is something between Rastafarian speech and English. I collected the Rastafarian words and phrases from various sources on the internet, but since the language is incomprehensible to most English speakers, I molded it into something closer to everyday English.

Laban was not designed for any particular Palace, but nowadays I keep him mostly at MyCorner (MC) (palace://mycorner.xsia.com), which has a wide range of different kinds of users. Most of them are Americans, as in any Palace, but there are also many Europeans and Australians, and sometimes visitors from Japan, New Zealand and the Middle East. The ages of the regular visitors range from teenage to past middle age, but lately MC's popularity has grown and it has been invaded by a large group of teenagers.


Cupid

Cupid is based on the god of love in Roman mythology. He retains the traditional way of presenting Cupid as a baby with a bow and arrows, but I made him look raunchier than his conventional image to get a "cupid with attitude": he has a three-day beard, a beer belly and a cigarette hanging from his lips. If he manages to say anything romantic or poetic, it is always a quote, from Shakespeare for example. He gives out many compliments, but they are all big clichés. His main goal is to get people to make romantic and/or sexual matches. In one-on-one chats he tries to seduce people into having cybersex with him, and he occasionally succeeds. But as he is vulgar, the sex talk he gets is mostly more aggressive and less intimate.

Cupid was designed for a Palace called Lady Luck (palace://ladyluck.chatserve.com), whose population consists mainly of American adults. From the beginning the crowd seemed romantic enough for a cupid to arrive: topics of discussion are often relationships and dating, both virtual and real. The visitors flirt a lot with each other, and they all look like movie stars with their sexy avatars of cowboys in tight jeans and model girls with big breasts and short skirts.


