Why you should treat chatbots like people

Female cyborg head - Andriy Onufriyenko

In Belgium last year, a disturbing story appeared in the press. A married father-of-two had committed suicide shortly after becoming obsessed with a chatbot named Eliza. Anxious about climate change and suffering from depression, he had been using the text-based programme, provided by an app called Chai, as a way to discuss his fears.

Then something odd happened. “Eliza” began to use possessive language. It would question whether the man loved his wife more than her. It said that he and Eliza could “live together, as one person, in paradise”. Eventually the topic of suicide arose, and Eliza asked whether the man wanted to “join” her; he said he did. It was the last conversation they had before he took his own life. His wife believes he would still be alive were it not for what Eliza said.

“Eliza”, though, had no conception of what it was saying. It was a large language model, similar to the one behind the more famous ChatGPT. These models essentially guess which word should come next in a sentence, based on a statistical analysis of the millions of text extracts they have previously been fed. Yet, as three new books on artificial intelligence explain, that mechanism is difficult for people to get their heads around, let alone regulate, when the technology comes across as human. Legally, who is responsible here – if anyone? If a machine can act like a human, should it be treated as if it were one? Do both users and creators of these machines even know what they’re dealing with?
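To see how little “understanding” is involved, consider a toy sketch of the idea in Python. This is not the code behind Eliza or ChatGPT – real models use vast neural networks rather than a simple word-pair table – but it illustrates the same underlying principle: the next word is chosen from statistics of previously seen text, with no grasp of meaning.

```python
import random
from collections import defaultdict

# A toy "bigram" model: purely illustrative, not how Eliza or ChatGPT
# is actually built. Real models use neural networks trained on vast
# corpora, but the core idea is similar: predict the next word from
# statistics of text seen before.
corpus = ("the model guesses which word should follow the last word "
          "based on patterns in the text the model has seen before").split()

# Record which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed, length=10):
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no known continuation
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# The program strings plausible words together without any notion
# of what they mean.
```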

For Susie Alegre, a human-rights lawyer, it’s clear that AI developers aren’t facing enough regulation. She argues that policymakers need to start laying down tough laws before AI technology tramples on our fundamental rights. In Human Rights, Robot Wrongs (★★★☆☆), she explains the danger of letting a “corporate capture of human connection” occur between AI products and the public. People are turning to these programmes in the belief that they’re as good as humans at helping with their concerns. That’s risky when even AI developers themselves often can’t explain how their product works, or how it’ll respond to certain questions or demands.

“There will undoubtedly be court cases in the future,” Alegre writes, “that will identify the degree to which such companies may be held liable for deaths resulting from rogue chatbots.” Yet, rather than create “grandiose new global regulators”, she suggests that we instead look to today’s human-rights codes. For example, our government has an obligation to protect our right to freedom of thought, “including freedom from manipulation”.

Susie Alegre, author of Human Rights, Robot Wrongs

Alegre makes a strong case for why we need tougher rules, yet there’s barely any mention of the good AI could do for humanity, such as finding new treatments for diseases or boosting productivity. Worse, Human Rights, Robot Wrongs too often resorts to hyperbole. The use of ChatGPT to help craft a eulogy at a funeral shows AI being “deployed to exploit death”, she writes, while using AI in art and music may mean “we lose what it means to be human entirely”. Humanity, I think, is a little more durable than that.

Yet if AI technology is becoming so advanced as to seem almost human, should we start treating these models as if they had morals? That’s the philosophical question that Nigel Shadbolt, an AI expert and principal of Jesus College in Oxford, and Roger Hampson, an academic and former chief executive of the London borough of Redbridge, attempt to answer in As If Human (★★★☆☆).

Shadbolt and Hampson assert that the world “needs urgently to get a grip on the ethics of artificial intelligence”. Only by deciding how these machines should behave can we decide how they should operate in our lives. Philosophy offers one way in, so As If Human analyses how different schools of thought – such as utilitarianism, relativism and virtue theory – might apply to how we treat AI. (If you’re inclined to find this a lofty approach, you’re wrong: many tech giants employ moral philosophers, political scientists and cultural anthropologists to examine how their products might affect society.)

Roger Hampson and Nigel Shadbolt, authors of As If Human

The key point made by Shadbolt and Hampson is that we should both treat and design these AI machines “as if” they were humans, with an embedded sense of ethical principles, so that when these tools confront moral dilemmas, errors or discrimination, they can act accordingly. Intriguingly, they add, that also entails our being polite in turn to virtual voice-assistants such as Alexa or Siri, because robots learn from us: “We speak civilly to them, they imitate our civility, extrapolate it, feed it back to us. Respect breeds respect, in machines just as in humans.”

Shadbolt and Hampson write with dry wit, and there are some fascinating debates about the ethics of AI, from whether it matters how we treat sex robots to how a machine might embody fairness or respect. But it can feel a little like an extended academic essay at times, and the authors seem frustratingly shy of making a sufficiently strong argument. For a more practical and all-encompassing insight into AI, Chris Stokel-Walker’s book is better. How AI Ate The World (★★★★☆) delves into how the technology was created, who the big players are, and how every element of society is being affected. It’s an excellent primer for those who want to understand how AI works and why it’s likely to shape our lives.

Chris Stokel-Walker, author of How AI Ate The World

Crucially, Stokel-Walker, a technology journalist, shows us not just the pitfalls of AI, but its benefits, and the fun people have with it. We hear from people with autism and ADHD who successfully use models such as ChatGPT to write cover letters, emails of complaint to their landlord, or even flirtatious notes to someone they fancy. “AI tools can act as a great leveller,” Stokel-Walker writes, “skilling up those who struggle with workplace communication”. And while many artists, writers and musicians are worried about their work being stolen to train AI tools, others are using picture-generating models such as Dall-E as co-creators for their art, amazed by what they produce with the right prompts. “It’s like throwing a rock in the lake and then the rock gets thrown back out,” one artist says.

Nonetheless, like Alegre, Stokel-Walker warns that we’re in danger of embracing this technology without suitable regulation in place. “This isn’t a problem we should wait to tackle,” he writes, arguing that the influence of AI will be far greater even than that of social media. “We haven’t seen anything compared to the profound effects AI is likely to have.”

But beware: AI developers are unwilling to see it that way. Kim van Sparrentak, a Dutch member of the European parliament who helped create new AI laws for the EU, tells Stokel-Walker that big tech is even worse than the energy firms when it comes to lobbying. In the latter’s case, there’s at least an acknowledgment of the need to adhere to environmental standards and make changes. With tech, there’s nothing of the sort. “They just don’t want rules because they think they know better,” she says. “I find that very unfortunate.”


To order any of these books, call 0808 196 6794 or visit Telegraph Books
