I first met René Descartes’ famous aphorism, “I think, therefore I am”, when I was a young teen. Naturally I wasted no time in deciding that the Great Man’s thought was trite and glib, a maxim whose proper home was surely the tee-shirt. Later, at university, I took philosophy classes and was surprised to discover that there were people who thought they’d refuted Descartes. How on earth was that possible?

Monsieur Descartes spent his youth as a soldier, seeing action in many battles. Later, when he became an academic, he used to stay in bed ’til noon thinking deep thoughts. Doubt plagued him: how can anyone be sure of anything? As you look around, the things you see, hear and smell could merely be a dream or a staged virtual environment.

Descartes decided that the only thing he could really be certain about was that he was thinking at all. But if he was thinking, then surely he would have to exist in order to think. The very act of thinking proved that he himself existed: cogito ergo sum in his famous Latin formulation.

This seems both compelling and curious. How can an act of thinking logically lead to a conclusion about existence? And yet, how could the argument possibly be wrong? I found the philosophical objections unconvincing; they were playing games with words. When faced with a problem like this, what real scientists do is build models.

What do insects, expert systems and guided missiles have in common? They are all agents conforming to a simple artificial-intelligence architecture: take input from the environment and from the agent’s own ‘body’, apply some processing, and produce appropriate actions – as shown in the diagram below.

A First-Order Intentional System

This cycle of sensing, processing and acting repeats throughout the lifetime of the system. The mechanism which transforms inputs into the relevant outputs can be implemented by neural nets, expert-system rules or hand-written code, depending on the system under consideration.

We call this kind of system a First-Order Intentional System (FOIS) because, while it clearly has beliefs and desires, it has no beliefs or desires about those beliefs and desires: a FOIS can’t execute Descartes’ manoeuvre. For Freud, a FOIS is pure id; for Jung, it has only an unconscious mind.
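
To make the architecture concrete, here is a minimal sketch of a FOIS in Python. Everything in it (the class name, the toy lookup-table policy, the moth example) is my own illustration, standing in for whatever neural net, rule base or hand-written code a real agent would use.

```python
# A minimal sketch of a First-Order Intentional System (FOIS):
# sense -> process -> act, repeated over the lifetime of the agent.
# The class name, the toy lookup-table 'policy' and the moth example
# are all illustrative inventions.

class FirstOrderAgent:
    def __init__(self, policy):
        # 'policy' maps (percept, body_state) pairs to actions. In a real
        # system this would be a neural net, expert-system rules or
        # hand-written code; here it is just a dictionary.
        self.policy = policy

    def step(self, percept, body_state):
        # Transform inputs into an action. Beliefs and desires exist only
        # implicitly, baked into the policy; they are never represented.
        return self.policy.get((percept, body_state), "do-nothing")


if __name__ == "__main__":
    moth = FirstOrderAgent({("light-ahead", "flying"): "fly-towards-light"})
    for percept, body in [("light-ahead", "flying"), ("darkness", "resting")]:
        print(percept, body, "->", moth.step(percept, body))
```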

Now consider the architecture of a Higher-Order Intentional System (HOIS) shown below.

A Higher-Order Intentional System

As you can see, we have added a second system which can model the first. The HOIS creates a theory about its own behaviour: a theory which explicitly models itself as an actor in the world. In psychological terms, our agent has acquired an ego. What is the function of the ‘ego’ system? It is there to support language and social behaviour. To give an account of yourself to others, you need an accurate self-model of what you believe and what you desire. And now we have an entity capable of rising to Descartes’ challenge.
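
As a sketch rather than a definitive implementation, a HOIS might look like the first-order core above with an explicit self-model bolted on. The names (HigherOrderAgent, self_model, give_account) and the chocolate example are invented for illustration.

```python
# A toy sketch of a Higher-Order Intentional System (HOIS): a first-order
# sense-process-act core plus an 'ego' layer holding an explicit model of
# the agent itself as an actor in the world. All names are illustrative.

class HigherOrderAgent:
    def __init__(self, policy):
        self.policy = policy      # the first-order 'id' machinery
        self.self_model = []      # the ego's explicit beliefs about itself

    def step(self, percept, body_state):
        action = self.policy.get((percept, body_state), "do-nothing")
        # The ego records what the agent saw, felt and did, phrased as
        # statements about an actor it calls "I".
        self.self_model.append(("I", "saw", percept))
        self.self_model.append(("I", "felt", body_state))
        self.self_model.append(("I", "did", action))
        return action

    def give_account(self):
        # Language and social behaviour: report the self-model to others.
        return ["{} {} {}".format(*belief) for belief in self.self_model]


if __name__ == "__main__":
    me = HigherOrderAgent({("chocolate-in-view", "hungry"): "eat-chocolate"})
    me.step("chocolate-in-view", "hungry")
    print(me.give_account())
    # ['I saw chocolate-in-view', 'I felt hungry', 'I did eat-chocolate']
```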

When I decide I want a chocolate bar I am merely thinking. But when I say to myself “I am thinking”, I am not just thinking; I am thinking about thinking. This is the ‘ego’ modelling its own dynamics (the causal loop on the left of the diagram). But the ego’s modelling of its own symbolic evolution could not be going on without some machinery to underpin it. So the ego concludes that some mechanism (which it equates with ‘I’) must already exist.
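
One way to sketch that introspective step, with an invented trace format: hand the ego a record of its own recent states and let it draw the inference.

```python
# A sketch of the introspective step: the ego examines a trace of its own
# recent states and infers that something must be producing them.
# The trace format and wording are invented for illustration.

def introspect(ego_trace):
    if not ego_trace:
        return None    # nothing to model, so no inference can be drawn
    # Thinking about thinking: the trace is itself evidence of a thinker.
    return "some mechanism ('I') is generating these states"

trace = ["I want a chocolate bar", "I am thinking about wanting one"]
print(introspect(trace))
```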

So it seems pretty conclusive: an agent capable of modelling the evolution of its own beliefs can conclude that it exists … without having to rely upon any external environmental cues. That’s a non-trivial conclusion, but there is something being smuggled in here.

Suppose that we were to erase the contents of the ‘id’. We delete all perceptual data, all memories, all ‘instinctual’ canned knowledge, so that the ‘id’ is now an empty vessel. If we start up the ‘ego’, it has no inputs, nothing to work on, no basis from which to build a self-model. Can it even deduce that ‘it’ is thinking at all? I don’t think so.
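
The same thought experiment in sketch form, with an invented build_self_model helper: the ego builds its self-model from whatever the id supplies, so an erased id leaves nothing to conclude from.

```python
# The erased-'id' thought experiment in sketch form: the ego's self-model
# is built from whatever the id supplies. Names are invented for illustration.

def build_self_model(id_contents):
    # Each id item (percept, memory, instinct) becomes a belief about "I".
    return [("I", "have", item) for item in id_contents]

print(build_self_model(["memory-of-chocolate", "hunger"]))  # a populated self-model
print(build_self_model([]))  # [] -- nothing to work on, nothing to deduce
```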

Descartes’ argument is therefore something like this:

Because I exist (and have memories, a personality, a sense of self), I am able to think. And because I can catch myself thinking, I can deduce that I must exist.

Longer, less snappy, but more circular.

What would happen if you took a human being and deleted his or her memories and perceptual data? You may have heard of drug-enhanced extreme sensory deprivation: far from providing an ideal setting for Descartes’ argument, it dissolves sanity itself into madness.

Afterword

In science, we use models to formalize our arguments. What does “I think, therefore I am” look like in the HOIS model?

I think of the ‘ego’ as a semantic net: a network of concepts linked by relationships. Some parts of the net are purely externally focused, such as [CATS] – {sleep-on} – [LAPS], while a great many other concepts are linked to a root “I” concept. These represent your own sense of self, your own identity, your own self-theory.
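
Such a net is easy to write down as a set of (concept, relation, concept) triples. In the sketch below, the [CATS] – {sleep-on} – [LAPS] triple comes straight from the text, while the triples hanging off the root “I” node are invented examples of a self-theory.

```python
# The 'ego' as a semantic net: concepts linked by named relations, stored
# here as (concept, relation, concept) triples. The CATS/LAPS triple is
# from the text; the "I" triples are invented examples of a self-theory.

ego_net = {
    ("CATS", "sleep-on", "LAPS"),            # externally focused knowledge
    ("I", "like", "CHOCOLATE"),              # part of the self-theory
    ("I", "am wary of", "HEIGHTS"),          # part of the self-theory
}

def self_theory(net):
    # Everything in the net that hangs off the root "I" concept.
    return {triple for triple in net if triple[0] == "I"}

print(self_theory(ego_net))
```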

The concept of “I think” is itself a particular concept in the ego:

[I] – {am implicated in a process of} – [THINKING]

which captures the dynamics of self-modelling itself: the introspective sense of being aware of yourself.

The concept of “I am” is more complex – what does it mean to the ‘ego’ for something to exist? Any concept in the ‘ego’ system exists at least there, in the imagination if you like, which is a kind of existence. So there must be a concept in the ‘ego’ saying something like:

[ANY CONCEPT] – {exists} – [AT LEAST IN THIS BIT OF REALITY].

Put the two concepts together and you can get, via a logically sound algorithm, from “I think” to “I am”. The rest comes down to what you think “exist” really means.
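
In the toy triple notation, that algorithm might look something like the sketch below: take the “I think” triple, encode the blanket existence rule, and apply the rule to every concept the net mentions, “I” included. The encoding is my own; the essay gives the two concepts only in prose.

```python
# From "I think" to "I am" in the toy triple notation: every concept that
# appears in the ego exists at least there, and "I" appears in the ego by
# way of the "I think" triple. The encoding of the rule is illustrative.

ego_net = {
    ("I", "am implicated in a process of", "THINKING"),
}

def concepts(net):
    # Every concept mentioned anywhere in the net.
    return {c for (subj, _rel, obj) in net for c in (subj, obj)}

def apply_existence_rule(net):
    # [ANY CONCEPT] - {exists} - [AT LEAST IN THIS BIT OF REALITY]
    return {(c, "exists", "AT LEAST IN THIS BIT OF REALITY") for c in concepts(net)}

derived = apply_existence_rule(ego_net)
print(("I", "exists", "AT LEAST IN THIS BIT OF REALITY") in derived)   # True: "I am"
```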

Far from being trivial, Descartes’ insight has taken us to the inner core of consciousness itself.

Note

The title image is “cogito ergo sum” by the painter Paulo Zerbato.