Three things converged during my time at SCÖ, a Swedish coordination association for vocational rehabilitation, that set the direction for this series.
The first was a project plan built on premises that did not hold. SCÖ had secured funding to pilot the Pathway Generator - an algorithmic decision-support tool developed in Iceland for recommending rehabilitation interventions. I was employed as the service designer who would study and support its implementation. Except the Swedish context lacked the data infrastructure, governance frameworks, and technical capacity that the Icelandic system presupposed. There were no patient records in a form the algorithm could ingest. More strikingly, there turned out to be little or no digital infrastructure, no systematic data-gathering practices, and no governance frameworks or data-sharing agreements that would support the use of this technology. There was no service catalogue tagged with compatible codes. The project had milestones, deliverables, and timelines - the form of planning - but the plan and the commitments to the funders assumed a world, and a data and service infrastructure, that did not exist.
The second was the algorithm itself, and what it required. The Pathway Generator works by navigating a state space: given a person's current situation (employment status, health conditions, prior interventions), it recommends a sequence of actions likely to lead toward a goal state (sustained employment, improved function). This is planning in the technical sense - the same formal structure taught in AI courses through languages like PDDL, where you define the objects that exist, the properties they have, the actions that can change them, and the goals you are trying to reach. The algorithm then searches for a path. But for the Pathway Generator to operate in a new context, someone or something has to construct that context's state space: what counts as a relevant object, what states are possible, what transitions are legitimate, what constitutes a goal. At SCÖ, none of this existed. The algorithm's computational requirements made legible what had been absent all along: the design work that would make such a system possible.
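The formal structure is compact enough to sketch in a few lines of Python. This is an illustrative toy, not the Pathway Generator's actual model: the state facts ("assessed", "trained", "employed") and the actions are invented for the example. But the shape - states, action schemas with preconditions and effects, search toward a goal - is exactly what the formalism requires someone to have defined before any planner can run.

```python
from collections import deque

# Illustrative only: hypothetical facts and actions, not the actual
# Pathway Generator's model. Each state is a frozenset of facts.
ACTIONS = {
    # name: (preconditions, facts added, facts removed)
    "work_training": ({"assessed"}, {"trained"}, set()),
    "assessment":    (set(), {"assessed"}, set()),
    "job_placement": ({"trained"}, {"employed"}, set()),
}

def plan(initial, goal):
    """Breadth-first search for an action sequence from initial to goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return path
        for name, (pre, add, rem) in ACTIONS.items():
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                                # goal unreachable

print(plan(set(), {"employed"}))
# -> ['assessment', 'work_training', 'job_placement']
```

The search itself is trivial. Everything consequential happens before the first line runs: someone decided that "assessed" is a fact worth tracking, that "job_placement" is an available action, and that "employed" is the goal. That is the design work the SCÖ context was missing.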
The third was my position as a service designer caught between these two problems - expected by different stakeholders to be variously a miracle worker, a technical fixer, an evangelist for innovation - and increasingly aware that the work I was actually doing bore little resemblance to what anyone had planned for. I spent some months mapping what "data science" meant to different stakeholders, conducting a form of reverse engineering, documenting the gaps between what had been promised and what was possible, producing concept maps and process diagrams that made things visible. The technoimaginary held together at high abstraction; the closer I looked at the material conditions, the more it dissolved.
What Planning Presupposes
A PhD course in AI at Linköping University gave me a formal language for what I was observing. The course covered planning as a computational problem: finding a sequence of actions that transforms an initial state into a goal state. The teaching materials put it directly:
"Compact descriptions of state spaces serve as input to algorithms... algorithms directly operate on compact descriptions which allows automatic reasoning about the problem: reformulation, simplification, abstraction".
The formalism is clear and precise, but it presupposes a great deal. The state space must already be defined: someone has to have specified what counts as a "state", what variables matter, what values they can take, what configurations are possible. The actions must be enumerable: there must be a finite set of action schemas with defined preconditions and effects. And the goals must be specifiable: someone must be able to describe, in the language of the domain, what they are trying to achieve. The algorithm navigates the state space but does not construct it; it chooses between actions but does not invent them; it optimises toward goals but does not discover what the goals should be. Planning operates within a framework that someone has already built. The domain file is not a planning problem - it is a design artefact. Someone had to decide what matters, what is possible, and what success looks like. The Pathway Generator needed exactly this kind of artefact for the Swedish rehabilitation context. What SCÖ had instead was a project plan that assumed the artefact already existed.
Complexity, Reduction, and the State-Space Problem
None of this should be surprising from a systems thinking perspective. The literature on complexity and wicked problems - Rittel and Webber's (1973) observation that the formulation of a wicked problem is the problem, Bronfenbrenner's (1979) nested ecological systems, Checkland's (1981) insistence that human activity systems resist the kind of clean decomposition that formal methods require - has long established that real-world service contexts do not reduce neatly to enumerable states and transitions. The Pathway Generator's state-space requirements are not just technically demanding. They are ontologically presumptuous - they assume that vocational rehabilitation can be adequately represented as a set of discrete states, transitions, and goals, when in practice it involves contested meanings, overlapping institutional jurisdictions, biopsychosocial complexity, and outcomes that different stakeholders define in incompatible ways.
But algorithmic systems require formal state spaces - that is what makes them computationally tractable. And they will be built regardless. The question is not whether to formalise but who participates in the formalisation, what gets included, what gets excluded, and whether the reduction is made consciously or by default. A state space is always a simplification. Whether it is a deliberate simplification - informed by contextual understanding of what the reduction costs - or an accidental one, where the constraints of the formalism quietly determine what the system can see, depends on whether anyone with that contextual understanding is involved in constructing it.
Planning and Design as Distinct Cognitive Activities
The distinction between planning and design has been articulated in several traditions. In design theory, it connects to Rittel and Webber's wicked problems and to Schön's (1983) reflective practice. In systems thinking, it connects to the difference between first-order problem solving (optimising within given constraints) and second-order inquiry (questioning what the constraints should be). But the most operationally precise articulation I have found comes from military thinking, where the consequences of applying the wrong cognitive mode tend to be more immediately visible than in civilian organisations. The military design movement - traced by Jackson (2019) and Zweibelson (2023) through systemic operational design and its subsequent doctrinal adoption - emerged precisely from operational environments that resisted conventional planning approaches. The U.S. Army/Marine Corps Counterinsurgency Field Manual (2006) offers a useful articulation:
"While both [design and planning] seek to formulate ways to bring about preferable futures, they are cognitively different. Planning applies established procedures to solve a largely understood problem within an accepted framework. Design inquires into the nature of a problem to conceive a framework for solving that problem. In general, planning is problem solving, while design is problem setting".
Planning is problem solving - working within an accepted framework. Design is problem setting - constructing the framework itself. The distinction is not about complexity or difficulty; it is about whether the framework exists yet.
The manual's second observation is equally relevant:
"When situations do not conform to established frames of reference - when the hardest part of the problem is figuring out what the problem is - planning alone is inadequate and design becomes essential. In these situations, absent a design process to engage the problem's essential nature, planners default to doctrinal norms; they develop plans based on the familiar rather than an understanding of the real situation".
This described the SCÖ project precisely. The project had a plan but no framework within which that plan could operate. What was needed was design - an inquiry into the nature of the problem, a construction of the state space within which planning could then function. What the project got instead was planning applied to an undefined state space, which is to say, it got nowhere. Meanwhile, the project milestones were quietly redefined as original goals proved unachievable, and the performance of innovation - meetings held, reports filed, workshops facilitated - substituted for the substance of it.
What This Opens Up
The convergence at SCÖ - algorithmic requirements that exposed missing design work, institutional planning and programme leadership that assumed that work had already been done or would somehow get done, and a designer positioned at the intersection - raised questions that extend well beyond a single failing project.
Where do state spaces come from? In AI, the domain definition is given. But in organisations, in services, in complex social systems - who defines what states are possible? How do those definitions get constructed? What gets included and excluded, and by whom?
If design constructs the framework and planning navigates within it, then design is logically prior to planning. But organisationally, culturally, institutionally, design is often not recognised as real work. The "real work" is planning and delivery. The consequence is that the most foundational decisions - what the state space looks like - get made implicitly, by whoever happens to be building the system, structuring the programme, or securing its funding.
There is also the question of what happens when design work reveals that the plan is impossible. This is what my concept mapping at SCÖ did. It made visible that the preconditions for the plan did not exist. The organisational response - acknowledged but not engaged - is worth examining in its own right.
And what does this mean for the emerging terrain of generative and agentic technology? The Pathway Generator is a relatively constrained system - it recommends, a human decides, and its state-space requirements are at least explicit. The emergence of large language models and generative AI complicates this picture. LLMs do not operate on explicit state spaces in the PDDL sense; their ontological commitments are embedded implicitly in training data and learned representations, which makes them harder to inspect, contest, or redesign. Yet agentic systems built on top of these models - tool-using agents, workflow orchestration, automated decision pipelines - reintroduce the same structural requirements: they need defined action schemas, state tracking, goal specifications. The presuppositions are the same; they are just distributed across layers of abstraction that make them less visible. If service design cannot participate in constructing those foundations, the consequential decisions about what these systems see, reason about, and render invisible will be made without the contextual understanding that design practice provides.
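A hypothetical tool definition makes this concrete. The schema below is invented for illustration - the names and fields are assumptions, loosely modelled on the JSON-Schema style of tool declaration common in agent frameworks - but it shows that someone still has to enumerate the legal states, the permitted parameter values, and what counts as a valid call. The state-space decisions have not disappeared; they have moved into configuration.

```python
# Hypothetical tool schema for an agentic pipeline (illustrative only):
# the designer must still enumerate states, values, and preconditions.
TOOL_SCHEMA = {
    "name": "recommend_intervention",
    "description": "Suggest a rehabilitation intervention for a client.",
    "parameters": {
        "type": "object",
        "properties": {
            "client_state": {"type": "string",
                             "enum": ["unassessed", "assessed", "in_training"]},
            "goal": {"type": "string",
                     "enum": ["employment", "improved_function"]},
        },
        "required": ["client_state", "goal"],
    },
}

def validate_call(args, schema=TOOL_SCHEMA):
    """Minimal check that an agent's proposed call fits the declared schema."""
    props = schema["parameters"]["properties"]
    required = schema["parameters"]["required"]
    return (all(k in args for k in required)
            and all(k in props and args[k] in props[k].get("enum", [args[k]])
                    for k in args))

print(validate_call({"client_state": "assessed", "goal": "employment"}))  # True
print(validate_call({"client_state": "cured", "goal": "employment"}))     # False
```

Every `enum` here is an ontological commitment: a client whose situation is not "unassessed", "assessed", or "in_training" simply cannot be expressed. Who decided those were the possible states, and on what contextual understanding, is precisely the design question this series pursues.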
Why This Matters for Service Design
The questions above - about where state spaces come from, about the relationship between planning and design, about what happens when formalisms meet institutional reality - are not only relevant to my experience at SCÖ. They point toward a challenge that service design as a discipline has not yet adequately addressed.
Public services are increasingly shaped by algorithmic and AI-driven systems. The Pathway Generator at SCÖ was one example - a decision-support tool that recommends rehabilitation interventions based on a person's state. But the same logic operates in risk scoring systems that allocate resources, triage tools that prioritise cases, and the growing range of generative and agentic systems being introduced into public service delivery. Whether these systems make their ontological commitments explicit (as PDDL-style planning does) or embed them implicitly (as large language models do), they all presuppose representations of the domains they operate in: objects with properties, states with transitions, events with preconditions and effects. Someone has to construct those representations - or, in the case of generative AI, someone has to understand what representations the model has absorbed and decide whether they are adequate for the context. These are design decisions of the first order. They determine what the system can see, what it can reason about, and what it renders invisible.
Service design has a rich practice tradition for understanding how people experience services, how organisations deliver them, and how both could work better. But the discipline's representational toolkit - journey maps, blueprints, service ecologies - was developed for a different purpose. These tools foreground actions, experiences, and touchpoints. They do not readily express the ontological commitments that algorithmic systems require: what objects exist, what properties they have, how those properties change over time, and what constitutes a legitimate transition between states. When service designers cannot participate in these foundational questions, the most consequential design decisions get made elsewhere - by data scientists choosing features, by engineers defining schemas, by product managers specifying business rules - often without the contextual understanding that design practice provides.
This series is an attempt to build the conceptual infrastructure that would allow service design to engage with these questions. It proceeds in layers. The first is ontological: what are public services fundamentally about, what kinds of objects do they engage with, what events change those objects, and how do different traditions - cognitive science, information systems, philosophy, social theory - define these categories? The second is representational: how can these ontological categories be formally expressed, whether through Gärdenfors's conceptual spaces as a geometric framework for meaning, Harel's statecharts as a visual formalism for reactive systems, or the underlying mathematical structure of state spaces? The third is grammatical: what compositional systems - Wilkinson's Grammar of Graphics, Iqbal's service grammar built on Promise Theory, Frost's Atomic Design - allow complex service architectures to be described, analysed, and constructed? And the fourth is critical: what are the limits and politics of formalisation itself, and what happens when the messy, contested, contextual reality of public services meets the demands of formal representation?
The series draws throughout on what I experienced at SCÖ - not as a narrative of personal discovery, but as empirical material for understanding how design work operates (and fails to operate) at the boundary between institutional practice and algorithmic reasoning.
The ambition is not to resolve these questions but to equip service designers - and the interdisciplinary teams they work within - with a richer conceptual vocabulary for engaging with the formal and ontological commitments that algorithmic systems require. If design is to have a substantive role in shaping how public services integrate with algorithmic and agentic technology, rather than being confined to the interface layer after the consequential decisions have already been made, it needs this kind of infrastructure.
Next: "Objects, Entities, and Things: What Services Act Upon" - what counts as an "object" in the context of public services, and why the answer matters for state-space construction.
References
Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Harvard University Press.
Checkland, P. (1981). Systems Thinking, Systems Practice. Wiley.
Jackson, A.P. (2019). A brief history of military design thinking. The Bridgehead.
Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4(2), 155-169.
Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action. Basic Books.
U.S. Army/Marine Corps (2006). The U.S. Army/Marine Corps Counterinsurgency Field Manual (FM 3-24). University of Chicago Press.
Zweibelson, B. (2023). Understanding the Military Design Movement: War, Change and Innovation. Routledge.