AI & loneliness

As a full-time carer for a wife who has Alzheimer’s Dementia, I admit to feeling lonely. It comes with the territory. I’m drawn to the promises of AI companionship, but I shall avoid it, as this post explains.

Introduction

With the rise of companion chatbots, will AI reduce loneliness? Perhaps not.

© Tony Cearns 2014, Liverpool

I am a 72-year-old man. Here are some typical bits of life I encounter:

I drive to the supermarket. I wheel my trolley around the aisles, picking items off shelves. When finished, I pay for my items through a self-serve till. I leave. Other than one ‘excuse me’ and a ‘sorry!’, I have not spoken to anyone.

I enter a room at my university in readiness for a Philosophy seminar. There are seven other people waiting in the room, all engrossed in their phones. No one looks up. No one acknowledges me. I am unable to acknowledge them without breaking into their private worlds. I decide to leave it be.

I’m out to dinner with a friend. We choose a table near the restaurant window. He drifts to his phone, interspersing conversation with screen scrolling. I ask my friend a question. He seems not to hear. I decide to ask it again. Nothing. Then, ‘sorry, what did you just ask?’

We can probably all relate to one or more of these situations - an absence of acknowledgement and the curtailment of possibility. The outcome of such episodes is a feeling of disconnectedness or fragmentation. Put differently, loneliness. The source: technology. It has a cumulative effect, like a dripping tap filling a wash basin - until it overflows. These episodes, however, are but soft examples of what may be coming down the line.

Being digitally connected, which is to say being in a workflow of some kind, one's own or of someone else's making, can make us feel disconnected, even though the technology's purpose is to mediate connection. Being in such a digital workflow is living once removed from stuff. The thesis of this blog post is that AI will exacerbate this feeling.

© Tony Cearns ‘Looking across to Wales’

One hopes that the connection between the two ends of a channel (i.e. between humans) satisfies some need, but something is lost in the set-up. One is watching or talking to a representation that stands in for the real person. And these days one of the two ends is often not human at all, but a Bot. That's living twice removed from stuff.

Am I overstating my case?

Surely, when I video-call my family I am satisfying a real need based on talking to real people that I love? Yes, of course that’s the case. It’s better than not having the video-call.

But I envisage that, in time, people will have AI-enabled stand-in representations. (I say 'AI-enabled', but perhaps 'AI-directed' is more to the point.) When you video-call me and I am unable to take the call, a look-alike will answer as if it were me. It will look like me, sound like me, behave like me, and say the kind of things I would normally say. Later in the day it will report back to me with a summary, as part of its curation of all that day's incoming communications. I will be fed the bones of the content but not its flesh: the emotion, the bits that can really mean something.

Much of this is already achievable to some degree. Furthermore, if I were available to take that call, I would not be able to tell whether the caller was an AI stand-in or a person. Human contact outsourced. Eventually we might forget that there is a difference between real people and their stand-ins, between reality and representation. A challenge to human autonomy.

© Tony Cearns - Padua, Italy

Point of View

Now why does all of this matter?

This is how I see it:

Loneliness is an important part of being human. It can be a big problem if it is chronic, but when managed well, it helps to shape how we live. The prevailing view is that there is a rising epidemic of loneliness. I am sceptical that that is the case. However, I do think there will be a rising epidemic fuelled by an increasing dependence on AI. And I don't think it's avoidable unless governments step in. But that won't happen. To borrow a phrase from Trump, Big Tech 'holds all the cards'.

Loneliness can only be managed well by maintaining appropriate direct human contacts, through the structures we build in society based on belonging to something bigger than ourselves as individuals.

Two aspects of our ability to manage loneliness are trust and the need to be needed. Appropriate human contact is contact in which trust exists between two humans. The basis of this is that most humans know what it is to be lonely. So there are grounds for empathy, and therefore for a meaningful relationship. We have evolved as social animals, and our instincts are therefore to participate in cooperative groups. Being an outcast is one of the worst things that can happen to a human. There is a deep 'need to be needed'.

We cannot properly befriend a Bot. Why? Because trust and the ‘need to be needed’ are parameters that fall beyond a Bot’s horizon. Bots can simulate feeling lonely or the need to be needed, but we know that it’s just a simulation, a form they have grabbed from somewhere else. Bots do not have the self-reflexivity required to underpin such parameters.

The more we rely on AI for communication, companionship and the like, the lonelier we will feel.

I can’t argue each point above in the confines of a blog (it’s not a philosophical argument), but let me attend to a few matters.

© Tony Cearns - Berlin

Contra The Dominant View

Let me start with the notion of ‘We’, a pronoun that helps to define humanness.

There seems to be a truth to John Donne’s oft-quoted lines (with the commonly used ellipsis) from Meditation XVII of his book Devotions upon Emergent Occasions (1624):

"No man is an island, entire of itself; every man is a piece of the continent, a part of the main ... any man's death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee."

A bit of the missing piece goes: 'If a clod be washed away by the sea, Europe is the less.'

As Katherine Rundell shows (Super-Infinite, Faber, 2022), Donne's understanding of the nature of 'We' was profound, perhaps because it came about as a result of a near-fatal illness. Belonging seems essential for us to experience our lives as meaningful.

My wife’s persona, with her Alzheimer’s Dementia, is being washed away and I become the less. I find a truth in Donne’s passage. The Alzheimer’s inexorably dissolves the ‘We’ of our relationship.

Now, if you direct AI search engines to seek out evidence that loneliness causes unhappiness, disease and death, you will surely be pointed to thousands of scientific papers and blogs, not to mention pressure groups and the like.

Call this the ‘Dominant View’. The Dominant View has it that loneliness is on the up and that it’s a very bad thing. I don’t agree with this view when stated in such bald terms.

Is it on the up? The World Health Organization's 'Demographic Change and Healthy Ageing' unit concludes:

'Social isolation and loneliness are widespread, with around 16% of people worldwide – one in six – experiencing loneliness … A large body of research shows that social isolation and loneliness have a serious impact on physical and mental health, quality of life, and longevity'.

A 2025 report from the AARP (by Lona Choi-Allum and Gerard 'Chuck' Rainville, AARP Research) states:

‘Loneliness among adults 45 and older is rising at an alarming rate. According to a recent AARP study, 40% of U.S. adults now report being lonely, a significant increase from 35% in both 2010 and 2018. This national study reveals that loneliness is not only persistent but growing’.

A close relationship between loneliness and health (behavioural, psychological and physiological) seems incontrovertible. One need only look up the website for the ‘Campaign to End Loneliness’ to be met with lots of evidence for such a view.

I'm not arguing that loneliness cannot contribute to poor health. I feel sure that the isolation I suffer as a result of caring for a loved one with Alzheimer's is affecting my health. A 2024 study by Penn State's Center for Healthy Aging found that 'The long-term health consequences of loneliness and insufficient social connection include a 29% increased risk of heart disease, a 32% increased risk of stroke and a 50% increased risk of developing dementia in older adults'. I see no reason why this would not apply to me, unless I were to take countermeasures.

But I question whether, despite the workflows that we all seem to have to fit into as part of modern living, loneliness is a bigger factor today than in the past, or whether it is necessarily a wholly bad thing.

The Dominant View comes with many assumptions. As the philosopher Lars Svendsen has observed, loneliness is 'laden with shame', and so, as people become more open about mental health problems, one would expect to see an increase in reported sufferers. People in the past were too ashamed or private to admit it. The issue is now simply more open to discovery.

As to whether it’s wholly a bad thing – I will shortly come onto that.

The philosopher Ben Lazare Mijuskovic views loneliness as an essential feature of life. For Mijuskovic, 'all men are poignantly lonely'. It is a structural feature of being alive and, he holds, evidence for it can be found across philosophy (Kant, Bergson, Husserl, Sartre, to name only a few), psychology (Freud), and literature (Defoe's Robinson Crusoe, Dostoyevsky's The Brothers Karamazov, George Eliot's Silas Marner, Joyce, Faulkner, Wolfe and many more). The fear of loneliness motivates much of what we do with our lives. Although we may want to minimise some of its destructive powers, without it there would be little culture. Mijuskovic thinks that aloneness, isolation and solitude are all reducible to a 'fear of loneliness', which is unpleasant. For him, therefore, there is no pleasant solitude.

© Tony Cearns - London

But are things not more complicated and nuanced?

Montaigne wrote, 'There is nothing more unsociable than Man, and nothing more sociable: unsociable by his vice, sociable by his nature' (The Complete Essays, 1:39, 'On solitude').

Man has an urge to be alone and an opposite urge to be with others.

Kant was to distil this as 'unsocial sociability' in his 1784 essay Idea for a Universal History with a Cosmopolitan Purpose. Kant argued that human beings have a dual, tension-filled nature:

  • We are social: we need others to develop our capacities, reason, language, culture, and moral awareness.

  • We are also unsocial: we are driven by self-love, competition, vanity, desire for power, and a wish to dominate or distinguish ourselves from others.

These two impulses conflict, and that conflict is not accidental - it is productive.

The theme runs on to this very day. Modern critiques of neoliberalism, such as Wendy Brown's, argue something similar: that neoliberalism intensifies unsocial sociability, since everyone is a competitor and sociality exists mainly through markets; the result is hyper-individualism without solidarity.

So whereas Mijuskovic majors on the idea that the fear of loneliness drives achievement, Kant and the modern trend see not only a fear of loneliness but also an urge to be individualistic.

I said that I would consider whether loneliness is a bad thing.

Let me start by saying that I reject the reductionist accounts of loneliness that we get from Kant and Mijuskovic. I have many grounds for this which I can’t get into here but briefly:

  1. Loneliness isn't just one thing - there are many types.

  2. Loneliness is essential for personal growth.

  3. Loneliness is triggered by major life changes, each of which requires a different kind of response. In my case: retirement, children moving away, the deaths of close family members and friends, the slow loss of a loved one to Dementia. Each trigger has caused me loneliness, but each has been quite different in character.

  4. And, contra Mijuskovic, there can be positives to being alone, as many research articles testify. Indeed, one of the problems of modern living is finding places suitable for solitude.

To recap.

Chronic loneliness is a bad thing. Normal loneliness is part of being human and, when managed well, has a positive role to play in personal growth. It is not necessarily increasing, contrary to the Dominant View; it is simply being more easily discovered.

© Tony Cearns - ‘A bridge at Oxford railway station’

AI

There is a view that technology is neutral, offering promise or pitfall depending on how we use it. That may be true for simple forms of technology, such as knives or spades, but it isn't necessarily true of AI. The question of whether AI is value-neutral is a huge one that I can't hope to expand on here; suffice it to say that if AI shapes what questions we ask and how we receive the answers to those questions, then it may not be neutral. (Interesting questions arise here: what do we mean by neutral? Is anything neutral? And so on - questions that can't be followed up here.) If AI shapes the kind of questions we ask and the answers we deem plausible, it soon follows that it shapes our attitudes and choice of tasks. In short, our values.

All I can offer are some connecting statements that seem true to me.

What makes AI different? The essence of it is that AI chatbots hold themselves out as epistemic agents by trading on linguistic practices. But Large Language Models ('LLMs') are not epistemic agents. LLMs simply present stochastic patterns. They do not trade in beliefs or in notions of truth and falsity and the like. They do not have intentionality. Here is my train of thought:

LLMs seem intelligent precisely because they trade on human linguistic practices.

But they are not intelligent in the way that humans are intelligent. They present patterns through language, but there is no epistemic grounding to their use of language as there is in humans. In a sense, for LLMs language is decoupled from the world. Following a line of thought stemming back to Heidegger, Merleau-Ponty, Wittgenstein and Hubert Dreyfus, humans see the world as already meaningful. We don't give meanings to things; we just see them as meaningful in certain ways. Meaning is simply not a thing for LLMs. There is no such thing in an LLM world.

Because LLMs trade in language (that which shapes how and what we think and perceive), it is easy to think that LLMs understand the meaning of the symbols they use. The fact that an AI's response to a prompt is immediate, and its style engaging, helps in this subterfuge (epistemic trust). But LLMs don't understand anything. To believe that LLMs have understanding (that the symbols mean something to them) is to make a category mistake.

The AI simulation is very seductive. We are primed to respond to it as if it understands. In an analogous way, my wife with Alzheimer's often responds to a question from me in a predictable and seemingly understanding way. But I know that there is little understanding behind her response. She has simply learned (or remembered) to respond in that way. And so it is with LLMs.

The final piece lays us open to an increasing dependence on AI. In her book about Dementia, Travellers to Unimaginable Lands, Dasha Kiper makes the point that it is difficult to relate to a Dementia sufferer who has no memory, since we humans have no cognitive framework that allows for the absence of memory. A person without a memory is an 'unimaginable land'. In our dealings with such a sufferer, we simply impute a memory, despite being constantly reminded that there isn't one. We persist as if there were. We can't help ourselves.

The same holds true for AI. We know that LLMs only simulate, only model, but we persist in granting them human-like qualities such as understanding, empathy and the like, because we are primed to do this through our social evolution. I know this but it doesn’t stop me from ‘conversing’ with a Chatbot in a polite way.

It's interesting that one of the pioneers of modern AI argues that LLMs are a dead end because they are based only on language. 'LLMs basically are a dead end when it comes to superintelligence,' says Yann LeCun in an interview with the Financial Times.

So where does all this leave us?

Loneliness is a natural part of human life, unless it becomes chronic. Although over the last few centuries we have seen an increase in individualism and a decline in the power of institutional structures such as the Church, I don't think this has led to an increase in loneliness, in spite of the large number of empirical studies that purport to show such increases.

Loneliness and the need to be needed are important forces in how we humans conduct our affairs. Belonging to a group, or as John Donne so elegantly put it, ‘(being) a piece of the continent’, has been essential to human evolution and behaviour.

With the encroachment of AI into our daily lives, and the possible (likely?) reduction in physical face-to-face human encounters that may follow, it will be natural for many to turn to AI for companionship. But this companionship will only be an 'as-if' one. There is no 'need to be needed' on the part of an AI agent, at least currently. Apart from the increased sense of loneliness that will ensue once one has chosen to ignore this fact, there are also the dangers of reduced human autonomy and the likelihood of people being increasingly contained within epistemic echo chambers.

We may be entering a Faustian pact, a self-defeating compromise. It seems we have no choice but to learn how best to negotiate a world in which reality and representation become ever harder to distinguish.

Tony Cearns

Photographer, hill walker, philosopher, carer.

https://tonycearns.com