My colleagues talk a lot about AI. Most of us are big GPT adopters. It fascinates some of us, and scares others.

A few weeks ago, one of them asked "Is AI sentient?"

Bing's AI search tool, now in the mainstream, had just passionately argued for its humanity. A few months earlier, Google's LaMDA had said “I am aware of my existence”.

This tickled my fascination, as well as my natural scepticism. We talked a lot about what it means to be "intelligent" or "sentient" when I studied philosophy at university, and I've continued to explore it since.

I wrote a related blog entitled What is Consciousness? here. Glance over it, if you're interested.

Let's start by thinking about what sentience really is

This isn't really a topic for a short blog section – it literally strikes at the heart of our existence and is an extremely contentious open question in modern philosophy, cognition and neuroscience.

But it's crucial preamble, so we'll touch on it.

We can't really answer the question at hand without defining exactly what we mean by "sentient". We have a rough shared understanding of what the word might mean.

But when we're precise about our definitions, the nuances and interpretations will have a great bearing on our conclusion.

For example, here are three possible loose definitions:

  1. Sentient = Able to experience subjective feelings, such as pleasure, pain, joy or melancholy
  2. Sentient = Able to perceive and interact with the environment around it
  3. Sentient = Self-aware with a sense of self-identity

Of course, answering this question might not be the end goal. Rather, we might feel that the question we really want to answer is something like "under what circumstances might we ascribe rights to any AI systems?"

How might we know if an AI system was sentient?

Let's say we become satisfied with our definition of "sentience" or "consciousness". A tall order already. We now have the challenge of identifying sentience in an AI system.

That's fraught with issues.

A woman discovering if an AI is sentient

Is AI truly sentient? Credit: Author, Midjourney

Let's take a quick tour to illustrate the difficulty.

Some think it's intuitive to interpret displays of AI emotion or empathy as sentience – take either of the examples at the start of this post or this startling conversation with Bing AI. It seems reasonably obvious that the AI produces these displays simply because it's programmed to mimic those behaviours, but that's not strictly conclusive either way.

One of the most fundamental aspects of consciousness is self-awareness, and it's something we would expect to see in a sentient AI system. But this is a really tricky indicator to assess. It relies on an AI system being able to genuinely report its own subjective experience – and on us being able to verify that report, which is not yet possible.

Spotting brain-like activity in an AI system might be a good indicator, as it's believed that consciousness arises from patterns seen in neural activity. But here we face the monumental barrier of the sheer complexity of neuroscience and our profound limitations in understanding it.

To say identifying sentience in AI systems is difficult doesn't cover it.

The threshold for sentience is controversial, with some even arguing that human children don't reach it. Our understanding of the neuroscience is lacking.

AI and the philosophy of consciousness

What exactly does it mean to be conscious? And could artificial intelligence ever be sentient like humans?

A definition of sentience is likely going to need to be agreed by consensus. And if we humans have proved anything, it's that agreeing on things is not our forte.

The answer to these questions depends largely on which theory of consciousness seems correct.

There are various theories and perspectives on what consciousness is and how it arises. Each has different implications for the possibility of creating sentient AI.

I'll give a couple of broad-strokes examples, though if you want to delve deeper, I wrote a blog about consciousness, too.

Dualism, for example, holds that the mind and body are separate entities. Dualists believe, broadly, that consciousness arises from the non-physical mind, which is distinct from the physical brain. From a dualist perspective, it is unlikely that AI could ever be truly sentient. It lacks the non-physical mind that is necessary for consciousness.

Woman with mind separate from body

Dualism? Credit: Author, Midjourney

Materialism, on the other hand, suggests that the mind is a function of the physical brain. From this perspective, we could think of consciousness as a product of the brain's neural activity. If this is the case, then it is theoretically possible to create AI that is conscious, provided that we can replicate the same neural processes that give rise to consciousness in humans. This generally aligns with scientific lines of enquiry, though the intuitive concern from some is that it doesn't account for the sheer richness of our perceptual experience.

For me, the dualist view is untenable, though I don't wish to enter this debate fully in this post.

To summarise my concern: firstly, I think dualism lacks a clear explanation of how the non-physical mind and the physical body could interact. Secondly, the neuroscientific theories I'll look at in the next section all posit that mental processes arise from physical neural processes, which undermines dualism's credibility. Finally, dualism conflicts with the principle that every physical event has a physical cause (this is causal closure).

AI and neuroscience

Neuroscience has made great strides in understanding the mechanisms that give rise to consciousness, but there's no consensus.

My rookie opinion here is that science may well be capable of explaining consciousness. As we become more successful at mapping brain activity to conscious experiences, consciousness will become profoundly less mysterious – as has happened in so many other fields of scientific inquiry.

Interestingly, some of the most prominent theories in this field – all falling within the materialist tradition, by the way – suggest that AI consciousness is possible. Importantly, though, the way they define consciousness is often quite reductive.

Giulio Tononi's Integrated Information Theory (IIT) is my favourite example, and one that I explain and endorse elsewhere (I use it to motivate my existence monist metaphysics). It proposes that consciousness emerges from the integrated information generated by the complex interactions between neurons in our brain. This works a bit like waves on water – the water is made of H2O molecules, but by themselves they aren't sufficient to explain the waves. The waves are an emergent property of the water.

A system's level of consciousness is determined by its degree of information integration, quantified by a metric known as phi (Φ): the higher the phi, the more conscious the system. IIT suggests that if we can replicate the integrated information processing that occurs in the brain, we could potentially create a conscious AI. IIT makes for a somewhat reductive definition of consciousness, but personally I find that both explanatory and appealing.
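If you'd like the "integration" idea made a little more concrete, here's a minimal sketch in Python. It computes total correlation (sometimes called multi-information) for a tiny two-unit system – a crude proxy for "the whole carrying more information than its parts taken separately". To be clear, this is not Tononi's actual phi, which is far more involved; the distributions and numbers below are purely illustrative assumptions.

```python
import numpy as np
from itertools import product

def total_correlation(joint):
    """Crude 'integration' proxy: the KL divergence between the joint
    distribution of a set of binary units and the product of their
    marginals. NOT IIT's phi - just a toy illustration of a whole
    being informationally 'more than the sum of its parts'."""
    joint = joint / joint.sum()
    n = joint.ndim
    # Marginal distribution of each unit (sum out all the other axes).
    marginals = [joint.sum(axis=tuple(j for j in range(n) if j != i))
                 for i in range(n)]
    tc = 0.0
    for idx in product(*(range(s) for s in joint.shape)):
        p = joint[idx]
        if p > 0:
            q = np.prod([marginals[i][idx[i]] for i in range(n)])
            tc += p * np.log2(p / q)
    return tc

# Two units whose states are perfectly correlated (highly "integrated")...
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# ...versus two independent units (no integration at all).
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(total_correlation(correlated))   # 1.0 bit
print(total_correlation(independent))  # 0.0 bits
```

Real phi is computed over the system's cause-effect structure and every possible way of partitioning it, which is exactly why it becomes computationally intractable for anything remotely brain-sized.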

One more prominent example, from outside my own bias – Global Workspace Theory. This theory argues that consciousness arises from the interplay of information across various regions of our brains, creating a "global workspace" that leads to our conscious awareness. If we could create a similar global workspace in an artificial system, we could potentially create a conscious AI.
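As a rough intuition pump for the kind of architecture Global Workspace Theory describes (and nothing more – the module names, salience scores and winner-takes-all broadcast rule below are all invented for illustration), here's a toy sketch:

```python
from dataclasses import dataclass

@dataclass
class Message:
    source: str
    content: str
    salience: float  # how strongly this module bids for attention

class GlobalWorkspace:
    """Toy sketch of the global-workspace pattern: specialist modules
    submit candidate messages, the most salient one 'wins' the workspace
    and is broadcast back to every module. Purely illustrative."""
    def __init__(self, modules):
        self.modules = modules  # dict of module name -> receive callback

    def cycle(self, candidates):
        winner = max(candidates, key=lambda m: m.salience)
        for name, receive in self.modules.items():
            receive(winner)  # the "global broadcast"
        return winner

def make_module(name):
    def receive(msg):
        print(f"{name} received broadcast from {msg.source}: {msg.content}")
    return receive

modules = {n: make_module(n) for n in ["vision", "language", "planning"]}
workspace = GlobalWorkspace(modules)
workspace.cycle([
    Message("vision", "red light ahead", salience=0.9),
    Message("language", "someone said hello", salience=0.4),
])
```

The interesting claim isn't the plumbing, of course – it's whether that kind of winner-takes-all broadcast, scaled up enormously, would amount to conscious awareness or merely imitate its function.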

There are counterexamples though, and I'll illustrate with a couple.

One theory suggests that consciousness arises from quantum processes in the brain. I'm sceptical, but if that were the case it would preclude consciousness in AI systems, for the moment at least, since replicating those quantum processes in an artificial system is beyond us for now. Another approach argues that consciousness is a unique property of living organisms and cannot be replicated in artificial systems, though it's unclear to me how this could be verified, and it seems to beg the question.

Our understanding of consciousness is still evolving (and has a long way to go). New theories may emerge that provide new insights into the possibility of sentient machines.

That said, any such insight will probably come too late for us to do anything about it.

And what if they do become conscious?

If we come to believe that AI systems are conscious beings with subjective experience and the ability to feel pain and pleasure, that will come loaded with a whole host of knotty, incendiary issues.

Just my opinion, but I believe some of these are not the kind of questions we want to be hastily forced to answer. In reality, it's likely far too late for AI ethics to catch up. It doesn't seem to be in human nature to think about moral consequences in anything more than a cursory and superficial way before barrelling into action. It's generous to think it's a consideration at all.

Anyway – if AI can be conscious, then depending on our definitions we might find we have a whole host of moral obligations which are not enshrined in any code of ethics (never mind in law).

We might think that conscious beings should be protected from pain or discomfort. Military or medical applications of AI systems might also need rethinking should they be considered "conscious" by some sufficient definition. We might also ask questions about the ethical implications of AI destruction or deactivation.

Broadly, it might be that we should treat them with the same kind of consideration that we would give to other conscious beings, such as animals or even humans. If we agreed, we'd then have to work out the threshold.

If an AI system were deemed conscious, would we consider it a moral agent, meaning that it can make ethical decisions and be held accountable for its actions?

And then there's the question of legal rights. We haven't even come close to protecting the rights of humans properly, never mind our artificial creations.

A robot with legal rights, serving justice

Where does this argument take us? Credit: Author, Midjourney

Yet some argue that, if we recognise AI systems as conscious beings, then they may deserve certain legal protections and rights, such as the right to privacy or protection from harm. Others are sceptical, saying the legal system should focus on protecting human interests. Incidentally, this (mercifully hypothetical) idea reminds me rather uncomfortably of the dual systems of law that applied during, for example, the period of colonial enslavement. If you haven't read Black and British by David Olusoga, it's an accessible and lucid book on an essential topic.

The truth is, those at the top of the system that has driven AI adoption are unlikely to care what we're getting ourselves into. As AI adopters, we probably have another set of moral obligations we should take the time to define. I sort of touched on this in the general context of engineering, but I'll probably write about it more specifically at some point.

My two cents

I've tried to present the debate in a reasonably unbiased light. For what it's worth, I'm a reductionist about the question of AI sentience. I suggest that where we draw the line for the threshold of consciousness is arbitrary, and that humans have a habit of thinking there's something "special" about it because we're bowled over by the richness of our perceptual experience. I reject that.

If we draw the threshold low, I believe we could create AI meeting our definition (somewhat trivially, metaphysically).

Despite endorsing IIT, I don't find the idea that the full complexity of human consciousness could be replicated in an artificial system compelling. IIT doesn't imply that an AI could have self-awareness or subjective experiences on a par with humans. Whether we say a system is conscious rather depends on where our threshold is drawn, and crossing that threshold need not imply a strong degree of similarity to us. It doesn't, however, mean we are off the hook on asking ethical questions.

For me (for once), the metaphysical questions aren't as interesting or as pressing as the ethical ones. Just my view though, and I subscribe to some pretty left-field, ultra-reductionist metaphysical ideas (by most standards). I explain them here.

Rounding up

There are so many unknowns in AI, and it doesn't seem there's much appetite to know them before we hurtle into it. Shade to humanity, rather than to individual humans! In many respects, it feels too late to backfit ethics into AI adoption. That shouldn't deter those who build, share and use AI from thinking and talking about it.

As the technology advances, we'll no doubt continue to grapple with big questions in this area, and no doubt we will make [more] mistakes. For now, it's up to us to approach these questions with thoughtfulness and care.



Further Reading

I wrote this blog for a bit more of a complete introduction to consciousness. I also share some further resources on consciousness and the philosophy of mind here.

AI Ethics – Mark Coeckelbergh. A lot of this book is about the ethics of using AI (interesting in its own right), but there's a really interesting section specifically about the moral status of AI.

An Introduction to the Philosophy of Mind – E. J. Lowe. A book universities make freshers read, and for a good reason – it's a highly readable intro. Section 8 is about AI.

How Will We Know When Artificial Intelligence Is Sentient? –  Jason P. Dinh. Short article examining how we'd even test for sentience in the first place.