Suppose you’re walking in a forest and see a fallen tree. In the bark, you see some carvings that look interesting. As you get closer, you consider what they might be. Were they created by ants? Then you recognize some words you know. As you read them, you imagine the person who carved the words into the wood and think about why they wrote them.
We learned the habit of assuming that any words we see have an author when we learned to read. This is usually pretty reasonable. But messing with readers’ assumptions about authorship is the basis of many kinds of make-believe. Sometimes it’s fraudulent, but often we play along willingly. Many of these writing tricks are ancient. In stories, animals will talk, or gods, or inanimate objects, and we go along with it.
Machine-made writing
Machine-printed words have been common for centuries, but we still attribute them to authors, somewhere, who decided what to write. We make assumptions about how publishers and their machines carried these words to us, and these are the basis of more make-believe. Ghostwriters help celebrities pretend they wrote books. Autopens write signatures allowing people to pretend they received a letter signed by the President. Computers churn out form letters by the millions that have no particular author, but we assume someone designed the form, and a letter might even have a fake signature by an executive.
People with direct access to early computers invented new kinds of machine-made writing. When Adventure become popular on Arpanet in 1977, people in computer labs everywhere started playing it and writing their own games, which were called text adventures after the original.
When I was a boy, my mother let me use her account on a minicomputer at a community college to play computer games. One of them was an early text adventure. Although I knew it was a computer game, it seemed like magic. There seemed to be an entire world in there. I had no way of knowing how deep it went or how sophisticated the program might be. It’s exciting when you don’t know how a game works, or even how the game genre works.
The magic wears off, though, as you learn your way around. The limitations of text games soon became apparent. Only a few commands worked. The characters I met couldn’t really hold a conversation. Lonely, abandoned worlds were common in text adventures, because convincing characters were difficult to program.
For decades, convincing computer chat has been fairly strong evidence that there is a person at the other end, even though you might not know who they really are.
AI text games
People are enchanted by AI chatbots. We know it’s all done by machines, but we get confused about how to understand what’s going in there. How deep does it go? We look for clues in what the bots write, trying to get glimpses of their inner mechanisms by what they get right or wrong. There are debates about what it means to understand, about world models and simulation and stochastic parrots.
We’re not going to settle anything by informally chatting with AI and having philosophical arguments. Even the researchers who built them haven’t learned many concrete details about how chatbots “think.” There are many open research questions.
But I don’t think we can comprehend what we read without implicitly imagining an author. We have to imagine something, or it’s just uninterpreted text.
The way I imagine it is that we chat with fictional characters now. The helpful AI assistant is the default character, but there are many other characters you could talk to, if you ask in the right way.
The characters we talk to are not the writers. If a chat had a narrator, it wouldn’t be the writer either. There is no single writer. You get whoever is available in the writing pool.
It’s only a mental model, but I particularly like this one because AI chats are turn-based games. When it’s not the chatbot’s turn to write, nothing is there to do any thinking. Yes, there are computers running in a datacenter somewhere, but they have forgotten you entirely and are busy seeing to other customers. The chat history is the only evidence of your conversation.
When you chat with an AI, you cooperate in writing a dialogue where at least one of the characters is fictional. (Possibly both. You can role-play too, if you like.) If you don’t like your writing partner, you could copy your chat history somewhere else, and another writer could continue it from where the previous one left off. (Nobody will miss you. An AI writer cannot remember you.)
When we read fiction, we sort-of believe that the characters in the story are people for a while, though we know it’s not so. Although many uses of AI chatbots are practical, I expect that inventing characters and talking with them is something many people will enjoy. Lonely people may be the first to try it, but eventually this won’t be any weirder than reading novels or playing video games.
Why think about it this way?
Besides being fun, I believe that keeping a clear distinction in mind between an AI writer and the characters it ghostwrites for will be helpful for coming up with interesting questions.
A researcher who studies chatbot biases could ask:
What biases does a chatbot’s default character have?
How easily could you summon a different character that’s more biased, by accident?
How easily could you summon a character that’s more biased on purpose?
The answers to these questions will have different consequences depending on how people choose to use chatbots. Will users mostly stick with the default character, or will they like to invent new characters?
Someone who studies how to train large language models could ask questions like:
How much does reinforcement training change the default character, versus other characters?
A researcher who knows how to study language models’ inner workings could ask:
How does a language model internally represent differences in writing style between the different characters it can imitate?
I’m really curious about that one. If there’s a way to crowdsource funding for it, I’m in.
Is it scary if people like to chat with fictional characters?
It’s too early to predict the broader impacts on society. But I hope that for someone who is otherwise mentally healthy, if you keep firmly in mind that you’re chatting with a fictional character, you can avoid some delusions. Hopefully, it’s no more harmful than reading a lot of romance novels or participating in role-playing games? Talking to a fictional character might have benefits, besides being enjoyable.
Lonely people may be vulnerable if they have a bad mental model of how this works, particularly when companies who make AI characters try to take advantage of them. I hope people who get attached to their characters will be able to “fire” their current AI writer and get another one. There are other writers, fictional characters are just text, and you have the text.