📣 On chatbots and communication

TL;DR – chatbots give us privacy and convenience. But when we talk to them, we talk in fragments: short, command-and-answer statements. And we’re never quite sure what the bot is trying to do. Added up, I worry they’re moving how humans communicate to a worse place.


Chatbots are one of the most transformative technologies out there. Not only because of what they can do. But because of the future world they will shape.

I’ve worked on two chatbot ideas in the last five years. One wanted to help people find out about local public services. Want to know what day to put the rubbish out for collection? Or where the closest library is? Message the bot, and it’ll tell you. Its value was convenience: helping people get a quick, direct answer by asking their question, rather than wading through websites or calling a phone number.

The other helped young people explore their sexual wellbeing. We worked with an amazing team¹ based in Kenya, building a chatbot for young adults in a country where sex and sexual health are taboo. Those sensitive questions you don’t want to ask out loud, you could now ask a chatbot. Not only about health, but also about pleasure and relationships. Its value was privacy, sprinkled with a bit of fun².

Even more than most technologies, chatbots are an ethical minefield. Here are three high-profile examples of what’s gone wrong: Microsoft’s Tay, exposed to Twitter to learn unsupervised, became racist, sexist, and pro-genocide within hours. Google’s Duplex was rightly panned for tricking people into believing they were talking to a human. And even the UN has condemned Amazon’s Alexa and Apple’s Siri for entrenching gender stereotypes by making their digital butlers ‘female by default’.

Ethical issues spring from the tech itself, particularly when it includes machine learning (Tay); from how it’s packaged and presented (Duplex); and from the harmful norms it perpetuates (Siri & Alexa). Quite the range.

What bothers me, though, is something else. Something less dramatic, but more systemic.

It’s this:

Chatbots make how we communicate poorer. More fragmented, and less honest.

Unlike the three examples I talked about earlier, there’s no fix for this in chatbot design. It gets to what a chatbot fundamentally is.

What do I mean? Let’s start with fragmented.

Chatbots tend towards messages that are sharp, practical, instant. Technology has been moving us this way for a while, from communication that’s thought through to communication that’s rattled off³. First we had letters, then emails, then text messages. Each step moving us away from long-form writing, towards text in bits and pieces. Then, as more of our communication became one-to-many, we had the social media feed and the WhatsApp group. Now, as we move to one-to-bot, this drop in quality will only get worse. Fewer words. Less craft. More fragments.

The bot mode of communication is fragmented because it’s been optimised for giving answers (take the landing page’s ubiquitous customer service bot) or acting on commands (“Alexa, turn on the lights”). The chatbot ideal: a rapid conversation, with a quick answer or solution.

And yet, the world of chatbots is becoming full of more nuanced topics. From telling us what to eat, to teaching us foreign languages, to helping us be less lonely, to philosophising, chatbots now promise conversations in less transactional, more human domains. It’s one thing asking a bot when to take out the rubbish. For short Q&A like this, chatbots are the perfect medium. It’s another to try to learn a language, or talk about mental health, or discuss the meaning of life, with a bot. Along with the bots themselves, the values a chatbot realises – convenience, privacy, ease – and the bot mode of talking – rapid, fragmented – are being spread across an ever-expanding range of topics.

But what richness do we lose when the topics are big, and the ‘answers’ are messy and complicated? And what potential human connection do we opt out of?

Sometimes, the privacy a chatbot offers makes it worth it. You might not be comfortable talking to a human about mental or sexual health, but would be okay talking to a bot. The conversation might be limited, but at least it’s happening.

Yet, I worry the chatbot’s imprint on humans will be to further contract how we talk and write, both with bots and with each other. Towards communication that’s fragmented. And towards an ever more distracted, impatient, inattentive world. Shaping the world like this isn’t a ‘byproduct’ of tech. It’s part of what technology is, part of the definition.

And what about less honest?

Chatbots have their own incentives when they talk to you. And – unless you designed it – you don’t know what they are.

While researching this piece, I stumbled across the first use of the word “chatterbot”. It appeared in 1994, in a paper submitted to the Proceedings of the Twelfth National Conference on Artificial Intelligence. There’s a section halfway through that really caught my eye. It’s on ‘tricks’ that chatbots use to seem more human. Here are some snippets:

Since the program never says anything declaratively, it cannot contradict itself later.

Controversial statements are a way to drag the user into the program’s conversation, rather than letting the user direct the discourse.

Simulated typing: by including realistic delays between characters, we imitate the rhythm of a person typing.

Bots deploy these tricks to seem human, even if it means a worse conversation (less declarative, more controversial, and so on). Their incentive is not to help. It’s not to understand. It’s not to dive deep and seek the truth together. None of those very human markers of good dialogue. Their incentive is to prove that they ‘work’. Hence the tricks.
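To make that last trick concrete, here’s a minimal sketch of simulated typing – my own illustration in Python, not code from the paper – assuming a toy terminal bot whose canned replies are ready instantly:

    import random
    import sys
    import time

    def send_with_simulated_typing(reply, base_delay=0.08, jitter=0.05):
        # Print a canned reply one character at a time, pausing a
        # randomised interval between characters, so an instant lookup
        # feels like a person typing. (Illustrative only – not from
        # the 1994 paper.)
        for char in reply:
            sys.stdout.write(char)
            sys.stdout.flush()
            time.sleep(base_delay + random.uniform(0, jitter))
        sys.stdout.write("\n")

    send_with_simulated_typing("Hmm, that's an interesting question...")

Note that the reply costs nothing to compute. Every pause exists purely to keep up the illusion.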

Designers of bots might have other agendas too. That HelloFresh bot I linked to earlier wants to sell you HelloFresh boxes. To a greater or lesser extent, this will always be true of bots built by profit-making ventures. Or a bot might want to shape your worldview. Who built that PhilosopherAI I mentioned, and what are their values and biases? Of course, you’ll talk to plenty of humans who want to sell you things and worldviews too. But would you rather pit your intuition against another human, or against a bot and its ‘black box’?

When I say bots are less honest, I don’t mean they’re lying all the time. I do mean that the conversation mostly won’t be what it appears to be. Instead, it’ll be engineered to achieve an outcome you don’t know about. And it’ll be poorer as a result.

Gartner’s AI Hype Cycle predicts that chatbots will become mainstream in 2-5 years. Who’ll build them? And why? How human will they sound? Or will we sound ever more like robots?

We have a lot to think about. And, if you think rich communication is a part of humanity worth keeping, we don’t have much time.


¹ A huge shoutout to Christine, Chris, Martin, and the mHealth team at Population Services International 🙌🏽

² Before doing this work, I thought most chatbots were what Paul Graham would call ‘sitcom ideas’: they sound good, people will tell you they’re a good idea, but no-one they’re targeted at would actually use them more than once. After doing it… I still think there are an awful lot of bad chatbot ideas out there. But also some that give people what they truly value.

³ Nobody writes about this better than Rebecca Solnit, in this beautiful essay.


🤔 Got thoughts? Don’t keep them to yourself. Email me on asad@asadrahman.io. Let’s figure this out together.

If you enjoyed this, subscribe to get pieces just like it straight to your inbox. One email, towards the middle of each month (and nothing else).

Banner is by Volodymyr Hryshchenko on Unsplash.