Conceptual Spaces: The Geometry of Meaning

The previous post surveyed competing frameworks for representing knowledge - distributed cognition, activity theory, social representations, prototype theory - and argued that Gärdenfors's conceptual spaces framework uniquely provides the geometric vector-space formalism this series needs: one that bridges human conceptual representation and computational representation through the same mathematical structure.

This post develops that argument in detail. What are conceptual spaces? How do they work formally? And why does this geometric formalism turn out to be the mathematical meeting point between human representation and machine learning?

Why Cognitive Foundations Matter for Design

State spaces, which we'll explore in a subsequent post, are formal structures: sets of states, transitions between them, precise definitions. But formal structures don't exist in a vacuum. They're constructed by people, used by people, interpreted by people.

In an earlier post, I surveyed three traditions that offer different answers to "what is a concept?" - information systems (formal entities), cognitive science (regions in mental space), and social theory (socially constructed and performative). Here I focus on the cognitive science tradition, particularly Gärdenfors's work, for a specific reason: it provides a geometric formalism in which concepts are regions in multi-dimensional vector spaces, and similarity is distance. This matters for the series not only because it helps explain why different stakeholders might represent the "same" domain differently, but because vector spaces are also the mathematical foundation of how machine learning systems represent meaning - from word embeddings to the latent spaces of generative models. Gärdenfors published this as cognitive science, but the formalism turns out to bridge human conceptual representation and algorithmic representation in a way that is directly relevant to the questions this series is building toward.

If different people represent the domain differently - if they have different concepts, different dimensions of variation, different notions of similarity - then constructing a shared state space becomes a problem of cognitive coordination, not just technical specification. And if algorithmic systems represent meaning in the same kind of geometric structure, then the question of how human and computational representations relate to each other becomes central to any design practice that engages with these systems.

Three Levels of Representation

Gärdenfors proposes that cognitive science needs three levels of representation, not two. The symbolic level operates with discrete symbols manipulated according to rules; this is where language and logical inference operate, and it is powerful for explicit reasoning but struggles with similarity, typicality, and learning. The subconceptual level is the neural substrate that implements the higher levels - patterns of activation, connection weights, the hardware of the brain. Between these sits the conceptual level: geometric structures where concepts are regions and similarity is distance, where perception meets cognition, where we recognise that a robin is a "more typical" bird than a penguin, or that orange is "between" red and yellow. It is this conceptual level that Gärdenfors's framework addresses - the level where meaning lives, not as arbitrary symbol-to-referent mappings but as structured spaces with geometric properties.

Quality Dimensions

A conceptual space is built from quality dimensions - the fundamental respects in which things can vary.

Consider colour. The colour space has three dimensions: hue (the position on the colour wheel), saturation (how vivid versus grey), and brightness (how light versus dark). Every colour can be located as a point in this three-dimensional space. Similar colours are nearby; dissimilar colours are far apart.

Or consider taste. There are commonly said to be five basic tastes: sweet, sour, salty, bitter, and umami. Each is a dimension along which foods can vary. A particular food occupies a point (or region) in this five-dimensional taste space.
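The geometry here can be made concrete. Below is a minimal sketch of the colour example, with made-up coordinates: hue as an angle in degrees (which wraps around the colour wheel, so the shorter arc is taken), saturation and brightness in [0, 1]. The specific colours and weightings are illustrative, not a calibrated perceptual model.

```python
import math

def colour_distance(c1, c2):
    """Distance between two colours in a toy (hue, saturation, brightness) space.

    Hue is in degrees [0, 360) and is circular, so we take the shorter arc
    and normalise it to [0, 1]; saturation and brightness are already in [0, 1].
    """
    dh = abs(c1[0] - c2[0])
    dh = min(dh, 360 - dh) / 180.0  # circular hue difference, normalised
    ds = c1[1] - c2[1]
    db = c1[2] - c2[2]
    return math.sqrt(dh ** 2 + ds ** 2 + db ** 2)

# Illustrative points in the space (hue, saturation, brightness).
crimson = (348, 0.91, 0.86)
scarlet = (8, 0.97, 1.00)
navy    = (240, 1.00, 0.50)

# Similar colours are nearby; dissimilar colours are far apart.
assert colour_distance(crimson, scarlet) < colour_distance(crimson, navy)
```

The same pattern applies to the five-dimensional taste space: each food is a point, and "similar taste" is small distance.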

Quality dimensions can be innate: some, such as colour, pitch, and basic tastes, seem built into our perceptual systems, appearing across cultures and early in development. Others are learned: wine experts develop dimensions for tannin, acidity, and finish that novices lack; musicians hear intervals and chord qualities that non-musicians cannot discriminate. And some dimensions are shaped by language and culture: the colour terms available in a language influence (though do not determine) how finely people discriminate certain regions of colour space.

The quality dimensions available to a person shape what distinctions they can make and what similarities they perceive. This is crucial for understanding why stakeholders with different backgrounds might literally see the same situation differently.

Concepts as Regions

In a conceptual space, a concept is not a symbol but a region.

The concept "red" is a region of colour space - not a single point, but an area that includes crimson, scarlet, cherry, and brick, while excluding orange and purple. The concept has fuzzy boundaries; there are borderline cases where reasonable people disagree about whether something counts as "red".

The concept "bird" is a region in a space with dimensions like size, wing shape, beak type, habitat, and behaviour. Robins and sparrows are near the centre of this region - they are prototypical birds. Penguins and ostriches are near the edges - they are birds, but atypical ones. As Objects, Entities, and Things establishes, different conceptual frameworks can treat the same entities as fundamentally different kinds of things, which is why these boundaries are never neutral.

This explains several phenomena that symbolic approaches struggle with. Typicality effects - the fact that some category members are "better examples" than others - arise because typical members are closer to the centre of the region. Similarity gradients - the fact that a tangerine is more similar to an orange than to a grapefruit - arise because similarity is distance in conceptual space. And conceptual combination - what happens when we combine concepts like "red apple" or "pet fish" - can be approached geometrically as combining regions in a shared space, though "pet fish" is itself the classic warning that simple intersection is not enough: a guppy is a typical pet fish while being neither a typical pet nor a typical fish.
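The typicality claim can be sketched directly: grade membership by distance from a prototype point, with an exponential decay (a common choice in similarity modelling, in the spirit of Shepard's exponential law of generalisation). The "bird" dimensions and coordinates below are invented for illustration.

```python
import math

def distance(a, b):
    """Euclidean distance between two points in conceptual space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def typicality(exemplar, prototype):
    """Graded typicality in (0, 1]: decays exponentially with distance
    from the category prototype."""
    return math.exp(-distance(exemplar, prototype))

# Toy 'bird' space: (size, flight ability), both scaled to [0, 1].
bird_prototype = (0.3, 0.9)   # small and flies well: robin-like
robin   = (0.25, 0.95)
penguin = (0.60, 0.00)

# Robins sit near the centre of the region; penguins near the edge.
assert typicality(robin, bird_prototype) > typicality(penguin, bird_prototype)
```

Borderline cases fall out naturally: an exemplar at intermediate distance gets intermediate typicality, which is exactly the graded structure that discrete symbols cannot express.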

From Conceptual Spaces to State Spaces

The crucial connection for this series is between conceptual spaces and state spaces. A conceptual space is a cognitive structure - how an individual represents a domain in their mind, consisting of quality dimensions and concepts as regions in those dimensions. A state space is a formal structure - an explicit model of possible configurations and transitions between them, consisting of state variables, possible values, and transition functions.

Conceptual spaces are the cognitive substrate from which state spaces are constructed. When we build a formal state space model, we are externalising and making precise what exists implicitly in conceptual spaces - choosing which dimensions matter, how to carve up the space into discrete states, what transitions to recognise. Different conceptual spaces generate different state spaces: if stakeholders have different quality dimensions, different ways of carving up the domain, they will construct different state spaces. What counts as a "state" for one person may not be recognised as a state by another. Aligning conceptual spaces is therefore a prerequisite for shared state spaces: before a team can agree on a formal state model, they need sufficient overlap in their conceptual spaces - shared dimensions, shared concepts, shared ways of seeing the domain.
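The carving-up step can be sketched as discretisation: binning a continuous dimension into labelled states. The stakeholders, dimension, and thresholds below are hypothetical, but they show how two people can map the same point to different states.

```python
def to_state(point, bins):
    """Discretise a point in a continuous conceptual space into a state.

    `bins` maps each dimension name to a list of (upper_threshold, label)
    pairs; a value gets the first label whose threshold it does not exceed.
    """
    state = {}
    for dim, value in point.items():
        for threshold, label in bins[dim]:
            if value <= threshold:
                state[dim] = label
                break
    return tuple(sorted(state.items()))

# Two hypothetical stakeholders carve the same 'severity' dimension differently.
clinician_bins = {"severity": [(0.3, "mild"), (0.7, "moderate"), (1.0, "severe")]}
admin_bins     = {"severity": [(0.5, "routine"), (1.0, "escalated")]}

patient = {"severity": 0.4}
assert to_state(patient, clinician_bins) == (("severity", "moderate"),)
assert to_state(patient, admin_bins) == (("severity", "routine"),)
```

The same underlying point yields "moderate" in one state space and "routine" in the other - a small, formal instance of the coordination problem described above.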

Raubal (2004) formalises this connection, showing how conceptual spaces can be given rigorous mathematical treatment that bridges to formal ontologies and knowledge representation. Zenker and Gärdenfors (2015) explore applications across domains - from linguistics to robotics to social cognition - demonstrating the framework's generality.

The Vector Space Convergence

This is where the formalism becomes more than cognitive science.

Gärdenfors published Conceptual Spaces in 2000, arguing that human concepts are regions in continuous geometric spaces where similarity is distance. Thirteen years later, Mikolov et al. (2013) showed that training neural networks on large text corpora produced word embeddings with strikingly similar properties: words as points in continuous vector spaces, similarity as distance, and - most remarkably - semantic relationships encoded as geometric operations. The vector from "king" to "queen" parallels the vector from "man" to "woman". Meaning is geometry.
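The king/queen relationship is usually demonstrated as vector arithmetic plus nearest-neighbour search by cosine similarity. The two-dimensional vectors below are constructed by hand so that one axis encodes gender and the other royalty; real embeddings such as word2vec learn directions like these from data rather than having them specified.

```python
import math

# Hand-built toy embeddings: axis 0 ~ gender, axis 1 ~ royalty.
vectors = {
    "man":   [1.0, 0.0],
    "woman": [-1.0, 0.0],
    "king":  [1.0, 1.0],
    "queen": [-1.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """'a is to b as c is to ?': the word nearest to vec(b) - vec(a) + vec(c)."""
    target = [vb - va + vc for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(vectors[w], target))

assert analogy("man", "king", "woman") == "queen"
```

Because the gender offset is the same for the royal and non-royal pairs, the arithmetic lands exactly on "queen"; in learned embeddings the relationship holds approximately rather than exactly.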

The convergence goes deeper than word embeddings. Transformer attention mechanisms operate over continuous vector spaces. Variational autoencoders learn latent spaces where interpolation produces meaningful outputs. Graph neural networks learn node embeddings that preserve structural relationships as geometric proximity. In each case, the computational system arrives at the same mathematical structure Gärdenfors proposed as cognitive architecture: continuous spaces, regions, distance as similarity.

This is not coincidental. Continuous geometric spaces turn out to be how you represent structured similarity - whether in neurons or in neural networks. The mathematical constraints are the same: you need a space where similar things are nearby, where dimensions capture meaningful variation, where regions correspond to categories. Gärdenfors derived this from cognitive science. The machine learning community derived it from optimisation over data. They arrived at the same place.

What This Means for Service Design

The series is building toward a specific question: what conceptual apparatus does service design need to engage substantively with algorithmic systems? The vector space convergence makes this question precise.

If a clinician's understanding of "rehabilitation" is (in Gärdenfors's terms) a region in a conceptual space with quality dimensions like severity, complexity, duration, and functional capacity - and if a machine learning model's representation of "rehabilitation" is a position in a learned embedding space with dimensions that may or may not correspond to those human dimensions - then the question of alignment between human and computational understanding becomes a question of geometric alignment between vector spaces.

This is not a metaphor. It is a formal relationship. And it gives service design a way into the conversation about algorithmic systems that journey maps and blueprints cannot provide. A blueprint can show what happens in a service. It cannot show how the service's representation of a patient's situation relates to the patient's own representation, or to an algorithm's learned representation of similar situations. Vector spaces can.
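One standard formal tool for this kind of alignment is orthogonal Procrustes analysis: given paired points believed to mean the same thing in two spaces, find the rotation that best maps one space onto the other. The example below is entirely synthetic - a "clinical" space and a "learned" space that differ only by an unknown rotation - and is a sketch of the technique, not a claim about any particular system.

```python
import numpy as np

def align(X, Y):
    """Orthogonal Procrustes: the orthogonal R minimising ||X @ R - Y||,
    computed from the SVD of X^T Y (Schönemann's classical solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
clinical = rng.standard_normal((20, 2))          # human-derived coordinates
rotation = np.array([[0.0, -1.0], [1.0, 0.0]])   # unknown to the aligner
learned = clinical @ rotation                    # the model's coordinates

R = align(clinical, learned)
assert np.allclose(clinical @ R, learned)        # alignment recovered
```

Real human and machine representations will not differ by a clean rotation, of course - dimensions may be missing or entangled - which is precisely where the interesting design questions begin.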

The social difficulty of building shared understanding between human stakeholders - different people having different conceptual spaces for the "same" domain - is real and important. I encountered this directly at SCÖ, and the Limits of Making Visible series explores what happened when attempts to construct shared representation exposed misalignments the organisation needed to keep hidden. But the formalism matters for this series because it extends the problem beyond human-to-human coordination to the harder question of human-to-computational alignment.

Limits of the Framework

Gärdenfors's theory has limits. It works best for perceptual and concrete concepts - colours, shapes, tastes, physical objects. It is less clear how to apply it to abstract concepts, social categories, or institutional structures. There are also questions about how dimensions combine: Gärdenfors proposes that conceptual spaces are built from domains (sets of related dimensions) that combine into larger spaces, but the rules for this combination are not fully specified.

For this series, though, the theoretical incompleteness matters less than the structural insight. The claim is not that Gärdenfors solves the representation problem. The claim is that continuous geometric vector spaces are the mathematical structure in which both human and computational representations operate, and that any design practice engaging with algorithmic systems needs to work with this structure rather than around it. The incompleteness of Gärdenfors's account of domain combination is actually mirrored in machine learning: how to compose learned embeddings from different domains remains an open research problem there too.

This post has introduced the geometric formalism - quality dimensions, concepts as regions, similarity as distance - and argued that the convergence between conceptual spaces and machine learning vector representations is what makes Gärdenfors relevant to service design's engagement with algorithmic systems. The next post formalises these ideas further, introducing state spaces - explicit enumerations of the configurations a system can be in, and the transitions between them. Where conceptual spaces describe how meaning is structured geometrically, state spaces describe how situations change over time. Together they provide the representational foundation the series builds on.

Next: "What is a State Space?" - formalising the configurations and transitions that design and planning work with.

References

Gärdenfors, P. (2000). Conceptual Spaces: The Geometry of Thought. MIT Press.

Gärdenfors, P. (2014). The Geometry of Meaning: Semantics Based on Conceptual Spaces. MIT Press.

Gärdenfors, P. and Zenker, F. (2013). Theory change as dimensional change: Conceptual spaces applied to the dynamics of empirical theories. Synthese, 190(6), 1039-1058.

Mikolov, T., Chen, K., Corrado, G. and Dean, J. (2013). Efficient estimation of word representations in vector space. Proceedings of ICLR 2013.

Raubal, M. (2004). Formalizing Conceptual Spaces. In Formal Ontology in Information Systems, IOS Press.

Zenker, F. and Gärdenfors, P. (2015). Applications of Conceptual Spaces: The Case for Geometric Knowledge Representation. Springer.