The previous posts in this series built a formal apparatus - conceptual spaces, state spaces, graphs, promises, statecharts, grammars - and then applied it to a critique of public sector transformation in "Beyond Technomagic". That critique diagnosed technomagic as attempting to plan without a reflexive account of the problem domain. But there is a prior question I have not examined: why this particular way of thinking about the problem? The formal apparatus I have been building draws on AI planning's state-space formalism - states, transitions, preconditions, goal conditions - because that is the vocabulary the AI course at Linköping taught me, and because it made legible what was absent at SCÖ. But the Pathway Generator itself was not a classical planner; it was a genetic algorithm. And the relationship between these two formalisms - what each assumes about the domain it models, what each renders visible and invisible - is itself a question with political dimensions that deserve examination.
Two formalisms, one project
The Pathway Generator evolves candidate rehabilitation pathways across a fitness landscape. It encodes a person's situation as a chromosome - a data structure representing current conditions, available interventions, and anticipated outcomes - and applies evolutionary operators (selection, crossover, mutation) to a population of candidate pathways, optimising toward a fitness function that expresses what a good outcome looks like. This is a different computational architecture from classical state-space planning. A PDDL-style planner defines a discrete set of states, a set of actions with preconditions and effects, an initial state, and a goal condition, then searches for a sequence of actions that transforms the initial state into one satisfying the goal. The planner navigates; the genetic algorithm evolves.
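The evolutionary loop described above can be sketched in a few lines. This is a toy illustration, not the Pathway Generator's actual code: the intervention names, chromosome encoding, and scoring rule are all hypothetical, chosen only to make the selection/crossover/mutation cycle concrete and to show where the fitness function sits.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# Hypothetical encoding: a pathway is a fixed-length sequence of interventions.
INTERVENTIONS = ["training", "counselling", "work_trial", "rest"]
PATHWAY_LENGTH = 5

def fitness(pathway):
    # Hypothetical scoring rule -- the place where "what a good outcome
    # looks like" gets encoded: reward variety and an early work trial.
    score = len(set(pathway))
    if "work_trial" in pathway[:3]:
        score += 2
    return score

def crossover(a, b):
    # Splice two parent pathways at a random cut point.
    cut = random.randrange(1, PATHWAY_LENGTH)
    return a[:cut] + b[cut:]

def mutate(pathway, rate=0.1):
    # Randomly swap individual steps -- an assumption about what kinds
    # of changes to a pathway are possible.
    return [random.choice(INTERVENTIONS) if random.random() < rate else step
            for step in pathway]

def evolve(generations=50, pop_size=20):
    population = [[random.choice(INTERVENTIONS) for _ in range(PATHWAY_LENGTH)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Even in this toy, notice that nothing in the evolutionary machinery itself is about rehabilitation; everything domain-specific lives in the encoding and the fitness function.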
The AI course at Linköping gave me the PDDL vocabulary - states, transitions, preconditions, goals - and it was this vocabulary that crystallised what I was observing at SCÖ. The Pathway Generator needed a formally defined domain: variables that mattered, values they could take, interventions that could change them, outcomes that counted as success. The PDDL framing made this requirement architecturally explicit, as a structural precondition for computation. But the genetic algorithm had the same requirement, distributed differently. Its fitness function encoded assumptions about what a good rehabilitation outcome looks like. Its chromosome structure encoded assumptions about what variables describe a person's situation. Its mutation operators encoded assumptions about what kinds of changes are possible. The domain definition was there; it was just less visible - embedded in the algorithm's configuration rather than declared in a separate domain file.
What the AI course gave me, then, was not so much a description of the Pathway Generator's actual mechanism as a lens that made a shared structural requirement legible across different algorithmic approaches. Both classical planning and evolutionary optimisation presuppose a formally defined domain. Both require someone to have decided what counts as a state, what counts as an action or intervention, and what counts as success. The design question - who constructs this domain, and on what basis? - applies regardless of which algorithm is chosen.
The prior question
But there is a question behind the design question, one I did not examine in the earlier posts: why this algorithm? The Pathway Generator's developers chose a genetic algorithm. They could have chosen differently. Each choice would have encoded different assumptions about what vocational rehabilitation is.
A genetic algorithm models rehabilitation as optimisation: there is a fitness landscape, candidate pathways are evaluated against it, and evolution drives the population toward better solutions. This assumes that rehabilitation pathways can be meaningfully compared along a scalar fitness dimension - that "better" and "worse" are quantifiable, that trade-offs between different aspects of a person's situation can be collapsed into a single measure. It privileges breadth of search over depth of reasoning; it finds good solutions without necessarily explaining why they are good.
A classical state-space planner, by contrast, models rehabilitation as navigation: there is a map of possible states, a set of available actions, and the planner finds a route from where the person is to where they want to be. This assumes that rehabilitation can be decomposed into discrete states and transitions - that the person's situation at any moment can be adequately described by a finite set of variables, and that interventions have specifiable preconditions and effects. It privileges explainability over flexibility; the plan is a sequence of steps that can be inspected and justified.
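The navigation metaphor can be made concrete with a minimal planner in the PDDL spirit. Again a toy, with hypothetical facts and actions: a state is a set of facts, each action has preconditions and effects, and breadth-first search finds an action sequence from the initial state to one satisfying the goal. The point is how architecturally exposed the domain definition is here: states, preconditions, and effects sit in a declared table rather than inside a fitness function.

```python
from collections import deque

# Hypothetical domain table: each action lists preconditions,
# facts it adds, and facts it deletes.
ACTIONS = {
    "assess":     {"pre": set(),        "add": {"assessed"}, "del": set()},
    "train":      {"pre": {"assessed"}, "add": {"skilled"},  "del": set()},
    "work_trial": {"pre": {"skilled"},  "add": {"employed"}, "del": set()},
}

def plan(initial, goal):
    # Breadth-first search over states for a sequence of actions
    # that reaches a state satisfying the goal condition.
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # goal is a subset of the current facts
            return steps
        for name, a in ACTIONS.items():
            if a["pre"] <= state:  # preconditions satisfied
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable in this domain

result = plan(initial=set(), goal={"employed"})
```

The returned plan is an inspectable sequence of named steps, each justified by its preconditions: the explainability the paragraph above describes, bought at the price of discretising the person's situation into facts.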
A Bayesian network would model rehabilitation as probabilistic inference: given what we know about this person, what is the probability of different outcomes under different intervention sequences? This assumes that the relevant conditional dependencies can be identified and quantified - that we have enough data about how variables relate to estimate the probabilities. It privileges uncertainty quantification over deterministic planning.
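A miniature version of that inference, with invented numbers throughout: suppose outcome probability depends on an unobserved condition, and we compare interventions by marginalising over it. Both the prior and the conditional table are hypothetical, and in a real system would have to come from data, which is exactly the assumption the paragraph above flags.

```python
# Hypothetical prior over an unobserved condition.
P_CONDITION = {"stable": 0.7, "fluctuating": 0.3}

# Hypothetical conditional table: P(good outcome | condition, intervention).
P_GOOD = {
    ("stable", "training"):         0.8,
    ("stable", "counselling"):      0.6,
    ("fluctuating", "training"):    0.3,
    ("fluctuating", "counselling"): 0.5,
}

def p_good_outcome(intervention):
    # Marginalise over the unobserved condition.
    return sum(P_CONDITION[c] * P_GOOD[(c, intervention)] for c in P_CONDITION)

scores = {i: p_good_outcome(i) for i in ("training", "counselling")}
```

The recommendation is only as good as the probability tables, and the tables presuppose that the dependencies are stable enough to estimate.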
A reinforcement learning agent would model rehabilitation as policy learning: through trial and interaction (real or simulated), the agent learns which actions in which states tend to produce better long-term outcomes. This assumes that rehabilitation can be modelled as a sequential decision process with observable states and delayed rewards - and that we have access to enough trajectories (real or simulated) to learn from. It privileges adaptation over prior specification.
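A tabular Q-learning toy makes the policy-learning framing concrete. The environment here is entirely simulated and hypothetical: a three-stage chain where "advance" succeeds with some probability, and reward arrives only at the terminal stage. Note where the theory of what matters lives: in the reward signal and the simulated dynamics, not in the learning rule.

```python
import random

random.seed(1)  # for reproducibility of this illustration

STATES = [0, 1, 2]            # hypothetical stages; 2 is terminal ("employed")
ACTIONS_RL = ["advance", "wait"]

def step(state, action):
    # Hypothetical dynamics: "advance" moves forward with probability 0.8.
    if action == "advance" and random.random() < 0.8:
        state += 1
    # The reward signal encodes a theory of what matters: only reaching
    # the terminal stage counts.
    reward = 1.0 if state == 2 else 0.0
    return state, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS_RL}
alpha, gamma, epsilon = 0.2, 0.9, 0.1

for _ in range(2000):                      # learn from simulated trajectories
    s = 0
    while s != 2:
        a = (random.choice(ACTIONS_RL) if random.random() < epsilon
             else max(ACTIONS_RL, key=lambda act: Q[(s, act)]))
        s2, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS_RL)
                              - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS_RL, key=lambda act: Q[(s, act)]) for s in (0, 1)}
```

The agent learns to advance at every stage, but only because the simulated dynamics and the reward said so; with real people there is no resettable simulator, which is the trajectory-access assumption noted above.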
A large language model, an increasingly likely choice, would model rehabilitation as pattern-matching against historical trajectories: given narrative descriptions of situations and outcomes, the model generates plausible next steps by analogy with its training data. This assumes that narrative similarity is a meaningful basis for clinical recommendation - that what worked for people described similarly will work for this person. It privileges fluency and apparent plausibility over formal rigour.
Each of these formalisms constructs a different domain. Each foregrounds different aspects of rehabilitation and renders different things invisible. The genetic algorithm's fitness function flattens multidimensional outcomes into a scalar; the classical planner's discrete states lose the continuity of lived experience; the Bayesian network's conditional probabilities assume stable relationships that may shift under intervention; the reinforcement learning agent's reward signal encodes a particular theory of what matters; the language model's training data encodes the biases and blind spots of whatever documentation it was trained on.
Formalism as political commitment
The choice between these formalisms is not a technical decision in any politically neutral sense. It is a decision about what kind of knowledge counts, whose expertise is privileged, and what forms of human experience are rendered computable.
Sangiorgi and Prendiville (2017, p. 22) put the point directly: algorithms "make choices about what information to use, or display or hide, and this makes them very powerful. These choices are never made in a vacuum". D'Ignazio and Klein (2020, p. 5) are more emphatic: "all systems are political" - and the appeal to avoid politics, they argue, is itself a way for those in power to hold onto it. Davis (2020) develops this through the lens of affordance theory: technologies are political not because they determine outcomes but because they afford, encourage, discourage, and refuse particular patterns of use; the mechanisms through which they do so are shaped by the conditions of their design.
The genetic algorithm chosen for the Pathway Generator affords a particular relationship between clinical expertise and computational recommendation. Because it optimises across a fitness landscape, the critical design decisions are embedded in the fitness function - what gets measured, how different outcomes are weighted, what counts as "better". Whoever designs the fitness function encodes a theory of rehabilitation, whether or not they recognise it as such. A classical planner would have located the critical decisions differently - in the domain model, where the specification of states, actions, and goals would have been more architecturally exposed and potentially more available for clinical scrutiny. Neither formalism is inherently more or less political; they distribute the political decisions differently, making different assumptions visible and different assumptions hidden.
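How consequential the fitness-function weights are can be shown with a deliberately small example, using invented outcome dimensions and invented weight vectors: two hypothetical weightings over the same multidimensional outcomes rank the same candidate pathways differently. Each weight vector is, in miniature, a theory of what rehabilitation is for.

```python
# Hypothetical multidimensional outcomes for two candidate pathways.
outcomes = {
    "pathway_a": {"income": 0.9, "wellbeing": 0.3, "autonomy": 0.4},
    "pathway_b": {"income": 0.5, "wellbeing": 0.8, "autonomy": 0.7},
}

def fitness(outcome, weights):
    # Collapse the dimensions into a single scalar -- the flattening
    # a fitness landscape requires.
    return sum(weights[k] * outcome[k] for k in weights)

# Two hypothetical theories of rehabilitation, expressed as weights.
employment_first = {"income": 0.8, "wellbeing": 0.1, "autonomy": 0.1}
wellbeing_first  = {"income": 0.2, "wellbeing": 0.5, "autonomy": 0.3}

best_under_employment = max(outcomes,
                            key=lambda p: fitness(outcomes[p], employment_first))
best_under_wellbeing  = max(outcomes,
                            key=lambda p: fitness(outcomes[p], wellbeing_first))
```

Same pathways, same outcome data, opposite recommendations: the political content sits in three numbers that a configuration file would present as mere parameters.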
Burgess (2020) provides a useful framing here, from within the promise theory tradition this series has already engaged with: "computers are only proxies for decision-making... computation encodes policy in programs that act as proxies and evaluate the pre-programmed decisions". A state space - in any formalism - is encoded policy. The question of who writes the policy, whose understanding of the domain it reflects, and whose interests it serves is always a political question, even when (especially when) it presents itself as a technical one.
Construction at the level of formalism
The claim developed in earlier posts - that state spaces are constructed, not given - is incomplete in a way that existing literature makes clear. Winner's (1980) foundational argument that artifacts have politics applies not only to what goes into a state space - which variables, which values, which transitions - but to the prior choice of what kind of formal representation is used to describe it. As Bailey and Gamman (2022, p. 7) put it, "the choices one makes within a design process - of what to represent, and how, of what objects and ideas to normalize - can be said to be political... possibilities are both expressed and silenced through aesthetic decisions". This is, in Rancière's (2013) terms, a distribution of the sensible: the formalism determines what is perceptible within the system and what is rendered invisible. The choice of formalism shapes the state space as much as the choice of variables does. It determines what questions get asked during construction, what kinds of answers are possible, and what aspects of the domain are structurally invisible to the resulting system. This is precisely Hagendorff's (2021) point about AI ethics: that scrutiny focused on design decisions within a system frequently disregards the economic and political situatedness of the system itself - including the foundational choice of computational approach.
For the service designer working with computational intelligence, this means that the interrogation of the formalism itself - asking why this computational approach, with these representational commitments, for this domain - is design work of the first order. Hollanek (2020) argues that opening the algorithmic black box is not merely an engineering challenge but an act of critique: revealing how "hidden and immaterial layers of design reflect social and economic structures" and how the power relations these structures generate become sources of injustice. At SCÖ, the genetic algorithm was a given; it arrived with the Pathway Generator from Iceland. The design work operated downstream of that choice, reverse-engineering its requirements and documenting the gaps between what it presupposed and what existed. But the choice of a genetic algorithm for vocational rehabilitation was itself a design decision - one that encoded assumptions about what rehabilitation is, what knowledge is relevant, and what role human judgement plays - and it was a decision made without the kind of situated, contextual understanding that design practice provides.
This does not mean every service designer needs to become an algorithm specialist. It means that when algorithmic systems are introduced into complex service domains, someone needs to ask Winner's question in the specific case: what does this formalism assume about the domain? What does it make visible and what does it hide? Whose understanding of the domain does it privilege? These are design questions - and as D'Ignazio and Klein (2020) argue, the appeal to treat them as merely technical is itself a way for those in power to hold onto that power. If these questions are not asked by designers - with their training in contextual inquiry, stakeholder engagement, and making the implicit explicit - they will be answered by default, by whoever happens to be configuring the system.
The analysis continues in "Four Senses of State Space", which distinguishes how the state-space concept has been operating across different registers - formal, representational, institutional, governmental - and clarifies which claims depend on which sense.
References
Bailey, J.A. and Gamman, L. (2022). 'The power in maps: reviewing a "youth violence" systems map as discursive intervention'. CoDesign, 18(7).
Burgess, M. (2020). A Treatise on Systems Volume 2. ChiTek-i.
Davis, J. (2020). How Artifacts Afford: The Power and Politics of Everyday Things. MIT Press.
D'Ignazio, C. and Klein, L. (2020). Data Feminism. MIT Press.
Hagendorff, T. (2021). 'Blind spots in AI ethics'. AI and Ethics, 2, pp. 851-867.
Hollanek, T. (2020). 'AI transparency: a matter of reconciling design with critique'. AI & Society, 35, pp. 1-10.
Rancière, J. (2013). The Politics of Aesthetics. Bloomsbury Publishing.
Sangiorgi, D. and Prendiville, A. (2017). Designing for Service: Key Issues and New Directions. Bloomsbury.
Winner, L. (1980). 'Do artifacts have politics?'. Daedalus, 109(1), pp. 121-136.