The previous post showed how concepts can be represented geometrically - as regions in multi-dimensional vector spaces where similarity is distance. This geometric formalism is powerful: it provides a mathematical structure for meaning that connects cognitive representation to computational representation. But conceptual spaces describe how individuals represent a domain. For shared reasoning about systems, we need something that can be specified, agreed upon, and implemented.
This post introduces the formal apparatus: state spaces. Where conceptual spaces are geometric models of individual meaning, and events describe how things change, state spaces are the static scaffolding that makes reasoning about change possible. Understanding state spaces is essential for understanding what planning presupposes - and why service design often proceeds without the foundations that would make planning possible.
Why State Spaces Resonated: A React Developer's Perspective
Before defining state spaces formally, it is worth explaining why state space thinking had such immediate resonance for me. It was not just the AI course. It was my background as a React developer.
React's core insight is that UI is a function of state. You define what states your application can be in, and you describe what the interface should look like for each state. React handles the rest - when state changes, the UI updates to reflect the new state. This is exactly state space thinking, applied to user interfaces. The state is a data structure (often a JavaScript object) that captures everything relevant about the application at a moment in time. The component tree is a declarative description of how state maps to visual output.
Before React, the dominant paradigm was imperative DOM manipulation - writing code to find elements, change their text, add classes, remove listeners. You were describing operations, sequences of mutations. The current state of the UI was implicit, distributed across the DOM, hard to reason about. React inverted this. Instead of describing operations, you describe states. Instead of mutating the UI directly, you update state and let the framework derive the new UI. The state becomes the single source of truth.
When I first encountered React, I kept reaching for imperative patterns. The answer was always the same: you do not change the element; you change the state, and the element changes as a consequence. Once I internalised this, I could reason about complex UIs by reasoning about state transitions: what states are possible, what triggers transitions between them, what should the UI look like in each state. These questions have clear answers; the answers compose; the resulting systems are predictable.
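The "UI is a function of state" idea can be sketched without React itself. The following is a minimal TypeScript illustration - the `AppState` shape and `render` function are hypothetical, and a plain string stands in for the component tree - but it shows the inversion: you never touch the output directly; you change the state and re-derive the output.

```typescript
// The application's state: one object capturing everything relevant
// about the application at a moment in time.
type AppState = { loggedIn: boolean; unreadCount: number };

// The UI as a pure function of state: same state in, same output out.
// (A string stands in here for React's component tree.)
function render(state: AppState): string {
  if (!state.loggedIn) return "<a>Log in</a>";
  return `<span>Inbox (${state.unreadCount})</span>`;
}

// To change the UI, you do not mutate the output - you update the
// state and derive the new UI from it.
let state: AppState = { loggedIn: false, unreadCount: 0 };
console.log(render(state)); // "<a>Log in</a>"
state = { loggedIn: true, unreadCount: 3 };
console.log(render(state)); // "<span>Inbox (3)</span>"
```

Because `render` is pure, reasoning about the UI reduces to reasoning about which `AppState` values are possible - which is exactly the state space question.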
So when I encountered state spaces in the AI course - and then in Gärdenfors - I was not learning a new way of thinking. I was recognising a formalisation of something I already practiced. The question that gripped me was: why don't we think this way about services? Service design has journey maps and blueprints, but these are often loose, narrative, impressionistic. They do not specify state rigorously. They do not define transitions formally. They do not give you a way to derive what the service should look like from a clear model of what state it is in. What if they could? That question drives this series.
States as Descriptions
A state is a description of a system at a moment in time - not the system itself, but the description. A patient in a rehabilitation programme has an actual situation: their health, their circumstances, their capacities, their relationships. A state is a model of that situation: a selection of variables deemed relevant, with values assigned. As Objects, Entities, and Things discusses, what we treat as "the entity" whose state we are describing is itself a modelling choice.
The choice of what to include in the state - what variables, what level of detail - is a design decision. It is not given by the world; it is constructed by whoever is modelling the system. Consider a simple example: a light switch. The state might be "on" or "off" - a one-variable model with two possible values. But you could model the switch differently: include the brightness level, the colour temperature, the time since last toggled, the identity of who toggled it. Each choice produces a different state space and different programmatic opportunities. The right choice depends on purpose. If you are reasoning about whether the room is lit, "on/off" suffices. If you are reasoning about energy consumption, you need brightness and duration. If you are reasoning about security, you need who and when.
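The three modelling choices for the switch can be made concrete as types. This is a sketch, not a prescription - the type names and fields are illustrative - but it shows how each purpose selects a different state space for the same physical object.

```typescript
// A one-variable model: enough to ask "is the room lit?"
type SwitchBasic = "on" | "off";

// A richer model for energy reasoning: brightness and duration matter.
type SwitchEnergy = {
  power: "on" | "off";
  brightnessPct: number;       // 0-100
  minutesSinceToggle: number;
};

// A model for security reasoning: who toggled it, and when.
type SwitchAudit = { power: "on" | "off"; toggledBy: string; toggledAt: Date };

// Each model supports different questions. The basic model answers "lit?";
// the energy model supports a consumption estimate.
const lit = (s: SwitchBasic): boolean => s === "on";
const wattHours = (s: SwitchEnergy, ratedWatts: number): number =>
  s.power === "on"
    ? (ratedWatts * s.brightnessPct / 100) * (s.minutesSinceToggle / 60)
    : 0;
```

A question the basic model cannot even express - "how much energy has this used?" - becomes a one-line function once the state includes the relevant variables.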
State Spaces as Possibility Sets
A state space is the set of all possible states a system can be in.
For the simple light switch, the state space is {on, off} - two possible states. For a switch with dimmer, it might be a continuous range from 0% to 100% brightness. More complex systems have larger state spaces. A chess board has roughly 10^44 possible positions. A Go board has roughly 10^170. A patient in rehabilitation might have dozens of relevant variables - diagnosis, treatment history, employment status, housing, benefits, family situation - each with many possible values. The state space is the Cartesian product of all these possibilities.
This is why state spaces are typically implicit rather than enumerated. You do not list every possible state; you describe the structure - the variables and their possible values - and the state space is implied.
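The "describe the structure, imply the space" move is easy to show in code. The variables and values below are invented for illustration, but the arithmetic is the point: the size of the state space is the product of the value counts, which is why it explodes without ever being enumerated.

```typescript
// Describe the structure: variables and their possible values.
// (Variables and values here are hypothetical.)
const variables: Record<string, string[]> = {
  diagnosis: ["A", "B", "C"],
  employment: ["unemployed", "in training", "in work trial", "employed"],
  housing: ["stable", "unstable"],
};

// The state space is implied, never listed: its size is the
// Cartesian product of the value counts.
const stateSpaceSize = Object.values(variables)
  .reduce((product, values) => product * values.length, 1);

console.log(stateSpaceSize); // 3 * 4 * 2 = 24
```

Three small variables already yield 24 states; add a dozen more, as a realistic rehabilitation model would, and the product is far beyond anything you could write down state by state.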
Transitions: How States Change
A state space alone is static. To model dynamics, we need transitions - how the system moves from one state to another. A transition has a source state (where the system is), a trigger (what causes the transition - an action, an event, a condition becoming true), a target state (where the system ends up), preconditions (what must be true for the transition to be possible), and effects (what changes as a result).
For the light switch, the transition "toggle" has source state "off", trigger "switch is flipped", target state "on", precondition "switch is off", effect "switch becomes on". For a patient, the transition "complete training programme" might have source state "enrolled in training", trigger "programme end date reached", target states "completed" or "dropped out" depending on attendance, preconditions including "patient is enrolled" and "programme exists", effects including "patient has new qualification" or "patient has incomplete record".
The transitions define what is possible. A state space with transitions is a state machine - a model of how the system can evolve over time.
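The anatomy of a transition - trigger, precondition, effect - maps directly onto code. A minimal sketch, using the light switch from above (the `Transition` interface and `apply` helper are illustrative, not a standard API):

```typescript
type State = Record<string, unknown>;

// A transition: a trigger name, a precondition over the source state,
// and an effect producing the target state.
interface Transition {
  trigger: string;
  precondition: (s: State) => boolean;
  effect: (s: State) => State;
}

const toggleOn: Transition = {
  trigger: "switch is flipped",
  precondition: (s) => s.power === "off",
  effect: (s) => ({ ...s, power: "on" }),
};

// Applying a transition succeeds only if its precondition holds;
// otherwise the transition is simply not available from this state.
function apply(s: State, t: Transition): State | null {
  return t.precondition(s) ? t.effect(s) : null;
}

const next = apply({ power: "off" }, toggleOn); // { power: "on" }
const blocked = apply({ power: "on" }, toggleOn); // null - precondition fails
```

A set of states plus a set of such transitions is a state machine: from any state, the available transitions are exactly those whose preconditions hold.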
The Design Question
The key point for this series is that state spaces are constructed, not given. There is no natural, obvious state space for most real-world domains. Someone has to decide what variables to include (and what to exclude), what values each variable can take, how finely to discretise continuous variables, what transitions are possible, and what triggers and preconditions apply. These are design decisions. They determine what can be seen, what can be reasoned about, what can be planned.
Consider vocational rehabilitation. What are the "states" a patient can be in? You could model by diagnosis: "has condition X", "has conditions X and Y". But this misses functional capacity, social situation, motivation. You could model by employment status: "unemployed", "in training", "in work trial", "employed". But this misses health, benefits, support needs. You could model by a rich multidimensional space: health dimensions, social dimensions, economic dimensions, psychological dimensions. But now the state space is enormous, and you need data you probably do not have.
The "right" state space depends on what you are trying to do. And choosing it requires understanding the domain deeply - not just technically, but in terms of what matters, to whom, for what purposes.
Connecting to Conceptual Spaces
The previous post introduced Gärdenfors's conceptual spaces - geometric vector spaces where concepts are regions built from quality dimensions, and similarity is distance. A state space is a shared, operational formalisation built from what conceptual spaces describe individually. Conceptual spaces are already formal and geometric - they are vector spaces with mathematical structure. But they describe how an individual represents a domain. A state space takes that geometric structure and makes it shared, agreed upon, and implementable.
When we construct a state space, we are choosing which dimensions (from all the dimensions in stakeholders' conceptual spaces) to include, how to discretise continuous dimensions into distinct states, and which regions of conceptual space count as "the same state". Different stakeholders with different conceptual spaces will suggest different state space designs. The engineer thinks in terms of system states. The clinician thinks in terms of patient states. The administrator thinks in terms of case states. Each is drawing on their conceptual space to propose what should be in the formal model.
Aligning conceptual spaces is a prerequisite for agreeing on a state space. If stakeholders have fundamentally different quality dimensions - different ways of seeing the domain - they will propose incompatible state space designs. The formal modelling work cannot succeed until the cognitive alignment work has made progress. This is why state space construction is design work, not just technical specification. It requires navigating between different conceptual spaces, finding sufficient common ground, and making explicit what has been implicit.
What State Spaces Enable
Once you have a state space, certain things become possible. Planning - finding a path from an initial state to a goal state through available transitions - is what AI planning does. Verification - checking whether certain states are reachable or avoidable - allows you to ask whether the system can get into a bad state or whether the goal state is achievable. Simulation - running the model forward to see what happens under different scenarios - lets you ask what states might result from applying a given intervention. Communication - giving stakeholders a shared vocabulary for discussing the system - makes "the patient is in state X" meaningful if everyone agrees what state X means. And implementation - turning the model into code - becomes possible if the state space is well-defined.
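Planning over a state space is, at its simplest, graph search. The sketch below uses a tiny hypothetical set of patient states and triggers (not a real rehabilitation model) and breadth-first search to find a sequence of triggers from an initial state to a goal state; a `null` result doubles as a verification answer, showing the goal is unreachable.

```typescript
// States and transitions as a labelled graph: state -> { trigger: target }.
// (States and triggers here are hypothetical.)
const transitions: Record<string, Record<string, string>> = {
  "unemployed":    { "enrol": "in training" },
  "in training":   { "complete": "in work trial", "drop out": "unemployed" },
  "in work trial": { "hire": "employed" },
};

// Planning: breadth-first search for a sequence of triggers
// leading from an initial state to a goal state.
function plan(initial: string, goal: string): string[] | null {
  const queue: Array<[string, string[]]> = [[initial, []]];
  const visited = new Set<string>([initial]);
  while (queue.length > 0) {
    const [state, path] = queue.shift()!;
    if (state === goal) return path;
    for (const [trigger, target] of Object.entries(transitions[state] ?? {})) {
      if (!visited.has(target)) {
        visited.add(target);
        queue.push([target, [...path, trigger]]);
      }
    }
  }
  return null; // goal unreachable - itself a verification result
}

console.log(plan("unemployed", "employed")); // ["enrol", "complete", "hire"]
console.log(plan("employed", "unemployed")); // null
```

Real planners (the subject of Ghallab, Nau and Traverso's textbook) work over factored states and operator schemas rather than an explicit graph, but the underlying question is the same: is there a path of transitions from here to the goal?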
Without a state space, none of this is possible, at least not rigorously. You can have informal discussions, intuitive plans, rough simulations. But you cannot have the precision that formal methods provide.
Service Design and Missing State Spaces
I described service design's artefacts earlier as loose, narrative, impressionistic. But it is worth being precise about what is missing. Journey maps, blueprints, and personas do contain implicit state thinking - journey phases are informal states, touchpoints are informal transitions, personas are informal regions in a user space. The problem is not that state thinking is absent. It is that it remains informal: states are not exhaustively enumerated, transitions are not precisely specified, preconditions and effects are not defined.
This creates problems. Gaps are invisible - without exhaustive enumeration, missing states go unnoticed, and the informal map does not force questions about what happens when users abandon mid-process or errors occur. Transitions are ambiguous - what triggers movement from "considering" to "purchasing"? The journey map shows an arrow but does not specify the trigger, preconditions, or effects. Communication is imprecise - when stakeholders discuss "the onboarding experience", are they pointing to the same states? Without formal definition, apparent agreement may mask divergent understanding. And when developers build from these informal artefacts, they must make the implicit explicit, filling gaps with their own assumptions, which may not match what designers intended.
The question driving this series: what would it look like to bring formal state thinking to service design?
This post has introduced state spaces as formal structures: states, transitions, the design challenge of constructing them. Combined with conceptual spaces from the previous post, we now have both the cognitive foundation and the formal apparatus. The next posts build on this foundation through Promise Theory (Burgess's framework for thinking about cooperation between autonomous agents through voluntary commitments), Iqbal's service grammar (a design grammar built on Promise Theory showing what rigorous service representation might look like), and statecharts (Harel's formal notation for reactive systems that provides exactly the state-and-transition rigour that service design currently lacks). Together, these frameworks offer a toolkit for the design work that planning presupposes: constructing shared representations that enable coordination and reasoning.
Next: Graphs and Service Representations - state spaces can be represented as graphs, and understanding service design's tools as constrained graphs reveals what they conceal.
References
Gärdenfors, P. (2000). Conceptual Spaces: The Geometry of Thinking. MIT Press.
Ghallab, M., Nau, D. and Traverso, P. (2004). Automated Planning: Theory and Practice. Morgan Kaufmann.
Harel, D. (1987). Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, 8(3), 231-274.
Russell, S. and Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.