Although face-to-face contact can’t be replaced, Facebook, chatrooms, blogs, and Twitter have made it easier to start conversations with other communities. This is a God-given opportunity for dialogue, but it is also an opportunity for time-wasters to soak up your precious time and energy. So how can you tell if your conversation is worthwhile?
I suggest you compare your conversation with “MGonz,” the legendary computer that can think. Or, to be more accurate, the computer program that can fool people into thinking that it can think. That’s no mean feat: a lot of thought and money has gone into the development of programs that simulate human conversation, or “chatbots.” These can be used to advertise goods in chatrooms or to improve automated customer service on websites.
So every year, programmers compete to fool the judges of the Loebner Prize or the Chatterbox Challenge. For a short time, a “conversation” with a chatbot can sound very human; but these programs cannot detect nuance or subtlety. Their range of responses is limited by their data, so they lack creativity. A chatbot cannot be an attentive listener because no one is paying attention; it cannot elaborate on its insights, because it does not have any. Eventually an attentive human interrogator will realize he or she is being fed a string of automated responses. Indeed, even the best chatbots can fool an expert for only about five minutes.
However, in 1989, MGonz was able to fool a human subject for over an hour. Had MGonz bridged the gap between human and machine? Not at all. MGonz worked on a simple principle: don’t respond to a person when you can insult them. Most of MGonz’s one-liners are too profane to repeat—but “Ah, type something interesting or shut up,” “What sort of idiot types something like that?” and “That’s it; I’m not talking to you anymore!” are fairly representative. The program simply pours a torrent of abuse on the unwitting human at the other end of the Internet connection, who desperately defends himself from the insults.
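To see just how little machinery this requires, here is a minimal sketch of the MGonz principle in Python. It is not the original 1989 program, only an illustration: it ignores whatever the user types and fires back one of the printable retorts quoted above.

```python
import random

# A sketch of the MGonz principle: never engage with what the user
# actually says; just reply with a stock one-liner.
# (An illustrative toy, not the original 1989 program.)
RETORTS = [
    "Ah, type something interesting or shut up.",
    "What sort of idiot types something like that?",
    "That's it; I'm not talking to you anymore!",
]

def mgonz_style_reply(user_input: str) -> str:
    # The input is never inspected -- that is the whole point.
    return random.choice(RETORTS)

if __name__ == "__main__":
    while True:
        line = input("> ")
        print(mgonz_style_reply(line))
```

No understanding, no memory, no listening; just abuse on a loop.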
Vulgar abuse does not require thoughtfulness; in fact, it does not even require conscious thought. MGonz and similar programs don’t fool judges during Turing tests, because they cannot respond to requests for elaboration. So why did MGonz fool an innocent user into thinking he was having a discussion with a human? Unfortunately, human beings often act and speak like mindless machines. One person throws out malicious insults while the other desperately tries to save their pride. Neither listens to the other, neither learns anything of value, and no communication at all occurs.
So here is advice for anyone engaged in any discussion, online or face-to-face: run an MGonz test. Check whether your conversation partner is ignoring all your points and merely hurling personal abuse. If he or she is simply dreaming up the next ad hominem, you might as well be talking to a chatbot. Such conversations are not merely a puerile waste of time. They are profoundly dehumanizing and damaging.
When next you are subjected to an “argument” that feels like it has strayed from a South Park script, point out that your conversation partner is acting like a chatbot, and then ask directly why he or she feels the need to confront you with abuse and mockery. Try to penetrate their motives. What is it about your faith that offends them so very much? What merits such ill-considered ridicule? The answers may generate a more productive discussion. But if that gets you nowhere, you may have to prayerfully and carefully consider shutting the conversation down.