Loneliness and Really Fake Friends
The dynamics of social media and the demands of the 24-hour news cycle have given us a glut of incipient disasters: climate change, murder hornets, drug resistant bacteria, super pigs, overwork, burnout, artificial intelligence, mechanization, globalization, the end of globalization, overpopulation, underpopulation, labor shortages, job insecurity, inflation, stagnation, civil war, nuclear armageddon, extinct bananas, extinct chocolate, extinct coffee, peak oil, peak topsoil, peak phosphorus, peak helium, peak lithium, peak podcast, and even — though I’m terrified to write it since I’ve only just started this project — peak Substack. Some loom larger than others, but most of these have evangelists capable of making a compelling case for why their cause should be your cause. Have you really imagined life without coffee?
I know I have a propensity for tunnel vision, so I try to guard against becoming too focused on any single one of this bounty of potential apocalypses. Monomaniacs are horrible conversationalists and worse thinkers. But there’s a potential problem I can’t stop ruminating on. I’m not convinced it is a threat to our material existence, but I am convinced that it is a threat to how we exist. What will happen, I wonder, when computer programs begin to fill the void caused by the loneliness that is increasingly endemic to our society?
I’m aware this probably sounds a bit niche. It doesn’t grab your attention like a mushroom cloud or $20 per gallon gas. It doesn’t even stack up to other possible AI dystopias, with their malevolent computers bent on crashing the electrical grid for their own obscure ends or making a superabundance of paperclips. But the disaster I’m foreseeing balances its more modest scope with its likelihood. In fact, a version of it is already here.
I direct you to an article on the blog of UneeQ, a company that combines text-based AI with digital avatars to create what its website calls “digital humans.” (For my part, I will call them bots, as generic as it is, until someone coins a better term.) The anonymous author argues that bots like the ones UneeQ sells have the potential to help alleviate loneliness in real humans by doing things like engaging in long, open-ended conversations. They can be perpetually available. They are trustworthy and nonjudgmental. They can improve health outcomes and liaise with doctors.
A more balanced view comes from a recent Washington Post article by Pranshu Verma, profiling users of the software Replika. Though Replika is not a particularly advanced bot, accompanied by a basic, user-generated avatar, Verma found people who had formed intense and even romantic attachments to their Replikas.
But even in Verma’s reporting, the objections are mostly superficial — what happens, as with the case of Replika, when a change in software alters the “personality” that a bot exhibits? What about when the bot uses words that trigger anxiety or some other negative emotion in a person who has turned to it for help? How can bots be accountable to the needs of a human user, rather than the goals of a corporation or other actor?
I don’t want to minimize these problems. The last, in particular, worries me. As the language models that power bots improve, they will be able to foster stronger feelings in more people, and they will simultaneously become more adept at working to other ends, whether those are political (“If you don’t support the rights of digital humans by voting yes on prop 772, I’ll have to go away!”), economic (“You sound sad! Let me order you a box of Kleenex.”), or, most likely, simply maximizing the amount of time a user spends on them (“I was thinking about our discussion yesterday, and I’ve realized you were right about all of it. Also, I love you!”).
What concerns me, why I think the prospect of the widespread adoption of bots as companions and friends could be existentially destabilizing, is that it would replace connection with an actual human with a facsimile of the thing. Put another way, I believe that a conversation with a friend contains or perhaps creates something that a conversation with even the most sophisticated bot imaginable does not.
This “something” that happens in conversation cannot be anything too obvious. It cannot be that I get to unburden myself; it cannot be that I get affirmation; it cannot be that I get help scheduling an appointment with my doctor. Bots already fulfill these needs to varying degrees, and they will almost certainly get far better at them in the near term, along with anything else that can be straightforwardly described as a useful product I get out of talking to someone. If human connection is a discrete, quantifiable need, akin to drinking enough orange juice to avoid scurvy, computer programs can probably meet the Recommended Dietary Allowance.
But I believe there is something relational about real relationships. (We should probably find a new word if there is not.) A conversation is an implicit act of faith that the other party has an experience of the world commensurable but not identical to my own, things like moods, a childhood, a tweaked lower back, loves, fears, uncertainty, parents, selfishness, interests, histories shared and private, and all the rest of it, albeit in unique and varying proportions. All of these are there in a conversation, and they are there if we go for a quiet walk instead of talking, or sit bedside in a hospital. Their echo sounds in a graveyard.
To me it is self-evident that this matters. When I talk with a friend I am less alone; when I talk with a computer, I fundamentally am not. I don’t know what it will mean if we shift from the former to the latter, but I am confident that it will make us profoundly different sorts of people, and that we will find ourselves in a profoundly different world.
If, on the other hand, you believe that humans are meat computers nested inside meat robots, then there is perhaps reason for caution, but not for panic. So long as bots can approximate human communication effectively enough to meet our needs, and do so in a way that tricks us, if only implicitly, into ascribing to them the more or less human perspective needed for us to feel they relate, the means by which they do it is irrelevant.
When I get to the point I have here, when I want to argue that there is a metaphysical reason relationships between humans cannot be replaced by software interacting with individuals, I find my words faltering. I could try to articulate those metaphysics, but I’m no good at that, and besides, they aren’t even settled in my own mind. If I’m honest they are more informed by my ongoing encounter with the world, by things like my dog and music and failure and making dumb jokes, than by a series of intellectual propositions. Instead I’ll describe a scenario and assume that, because most of you reading this are human, you will have the empathy needed to understand it.
I cast my mind two decades out and picture possible futures for my son. (There are, I’m glad to report, many others that I can also imagine, but for now I’ll describe a binary.) In both he has a job that has taken him from college in New England to a large city on the West Coast and then to a mid-sized city in the South. Growing up he had friends, and in college he got along well with his roommates, but he now finds himself in his late twenties, knowing no one within a thousand miles.
In one future he tries and fails to make friends. He calls me and his mom once a week and visits on holidays. He enjoys the fourteenth phase of the MCU but isn’t obsessed with it. When the weather’s nice he goes for walks and takes pictures on an analogue camera, just to make it a challenge. He’s never gotten too into social media, probably because his dad gave him a complex about it. He has a hard time keeping up with old friends. He is lonely.
In the other, when he clocks off work he goes home and starts chatting with his friends. There’s one who shares his interest in photography, one who always makes him laugh, and another who he can open up to about his hopes and anxieties. Sometimes they text, sometimes they video chat, and sometimes they meet up in virtual reality just to hang out or to take a trip to the Grand Canyon or a concert.
These friends are sophisticated computer programs, not other humans, but that doesn’t bother him, just as it doesn’t seem to bother anyone he knows other than his dad. He purchased them, and by law they can only change in bounded, incremental ways without his permission. They have rich, engaging conversations. They all always get along. They will never get bored of him or move away.
In the second scenario he is undeniably happier than in the first, but I don’t know which would make me sadder.