A Story We Tell

Is Consciousness a Story We Tell Ourselves?

Internal language and consciousness after chain of thought reasoning.

Wittgenstein House in Vienna

Where and when?

A Symposium at the AISB Convention 2026
aisb.org.uk/aisb-convention-2026
1-2 July 2026
University of Sussex, Brighton, UK

Hosted by

The Society for the Study of Artificial Intelligence and Simulation of Behaviour

Organizers

Conor Houghton
conorhoughton.github.io / conor.houghton@bristol.ac.uk
Seth Bullock
homepage


Key dates

There will be a session for 15-minute contributed talks. If you would like to take part, submit an abstract of fewer than 400 words by 31 March to:

All decisions will be made by 15 April.

Plan

  • Session 1: panel discussion (1 hour). Is consciousness a story we tell ourselves?
  • Session 2: contributed talks (1 hour). A set of four or five short talks.
  • Session 3: panel discussion (1 hour). Is consciousness an experimental question?

Description

The aim of this symposium is to consider:

  • the relationship between language and consciousness,
  • language and inference in humans and machines,
  • the influence that the role of language plays has on its structure,

and to suggest what key experiments could help address the relationship between language, consciousness and intelligence.

Consciousness may arise from, or be structured by, our ability to access and reason about internal representations through the internal use of language. This half-day symposium will address the question: "Is consciousness a story we tell ourselves?" by examining the relationship between consciousness, inner speech in humans and chain-of-thought reasoning in machines.

Many theories of consciousness assume some role for internal representations. However, language has often been treated as secondary. This is surprising: although language is most obviously a tool for communication, it also provides the structured, sequential space in which humans build multi-step inferences.

Questions of consciousness have a new urgency because of the recent machine learning revolution. Chain-of-thought reasoning in large language models provides a striking parallel: models become more capable when they are allowed to "talk to themselves", producing extended internal sequences that guide inference.

This field lacks an agreed experimental programme but is rich in experimental questions. To what extent can we disrupt inner speech without fragmenting conscious experience? Can non-human animals or artificial systems support forms of consciousness? Which of the crucial questions are capable of empirical resolution?