Three things converged during my time at SCÖ, a Swedish coordination association for vocational rehabilitation, that set the direction for this series.
The first was a project plan built on premises that did not hold. SCÖ had secured funding to pilot the Pathway Generator - an algorithmic decision-support tool developed in Iceland for recommending rehabilitation interventions. I was employed as the service designer who would study and support its implementation. But the Swedish context lacked the data infrastructure, governance frameworks, and technical capacity that the Icelandic system presupposed. There were no patient records in a form the algorithm could ingest, no data-gathering processes or practices to produce them, no data-sharing agreements that would permit their use, and no service catalogue tagged with compatible codes. The project had milestones, deliverables, and timelines - the form of planning - but the plan, and the commitments made to the funders, assumed a data and service infrastructure that did not exist.
The second was the algorithm itself, and what it required. The Pathway Generator is a genetic algorithm: given a person's current situation (employment status, health conditions, prior interventions), it evolves candidate rehabilitation pathways across a fitness landscape, optimising toward a goal state (sustained employment, improved function). A PhD course in AI at Linköping University was teaching me, concurrently, to think in terms of state spaces and planning - the formal structure of languages like PDDL, where you define the objects that exist, the properties they have, the actions that can change them, and the goals you are trying to reach. The Pathway Generator's evolutionary approach encodes the domain differently from classical planning, but the structural requirement is the same: for the algorithm to operate in a new context, someone or something has to construct that context's formal domain - what counts as a relevant object, what states are possible, what transitions or pathways are legitimate, what constitutes a goal. At SCÖ, none of this existed. The algorithm's computational requirements made legible what had been absent all along: the design work that would make such a system possible.
The third was my position as a service designer caught between these two problems - expected by different stakeholders to be variously a miracle worker, a technical fixer, an evangelist for innovation - and increasingly aware that the work I was actually doing bore little resemblance to what anyone had planned for. I spent some months reverse-engineering the Pathway Generator's requirements, conducting a form of algorithm archaeology and applying the discipline of strong typing to its data structures, documenting the gaps between what had been promised and what was possible, producing concept maps and process diagrams that made those gaps visible. The technoimaginary held together at high abstraction; the closer I looked at the material conditions, the more it dissolved.
What Planning Presupposes
The PhD course in AI at Linköping University gave me a formal language for what I was observing. The course covered planning as a computational problem: finding a sequence of actions that transforms an initial state into a goal state. The teaching materials put it directly:
"Compact descriptions of state spaces serve as input to algorithms... algorithms directly operate on compact descriptions which allows automatic reasoning about the problem: reformulation, simplification, abstraction".
The formalism is clear and precise, but it presupposes a great deal. The state space must already be defined: someone has to have specified what counts as a "state", what variables matter, what values they can take, what configurations are possible. The actions must be enumerable: there must be a finite set of action schemas with defined preconditions and effects. And the goals must be specifiable: someone must be able to describe, in the language of the domain, what they are trying to achieve. The algorithm navigates the state space but does not construct it; it chooses between actions but does not invent them; it optimises toward goals but does not discover what the goals should be. Planning operates within a framework that someone has already built. The domain file is not a planning problem - it is a design artefact. Someone had to decide what matters, what is possible, and what success looks like. The Pathway Generator needed exactly this kind of artefact for the Swedish rehabilitation context. What SCÖ had instead was a project plan that assumed the artefact already existed.
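The division of labour the formalism implies can be made concrete with a toy sketch. The following Python is illustrative only: the propositions ("referred", "assessed", and so on), the actions, and the goal are invented for this example, not taken from the Pathway Generator or any real rehabilitation domain. What it shows is the boundary described above: the planner searches the state space, but everything it searches over has to be authored first.

```python
from collections import deque
from dataclasses import dataclass

# A state is a frozenset of true propositions. Every fact and action
# below is invented for illustration - this is the "domain file" that
# someone has to design before any planner can run.
@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.del_effects) | self.add_effects

def plan(initial, goal, actions):
    """Breadth-first search over the state space implied by `actions`.
    The planner navigates the space; it does not construct it."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None  # no sequence of the given actions reaches the goal

actions = [
    Action("assess-needs", frozenset({"referred"}),
           frozenset({"assessed"}), frozenset()),
    Action("work-training", frozenset({"assessed"}),
           frozenset({"work-ready"}), frozenset()),
    Action("job-placement", frozenset({"work-ready"}),
           frozenset({"employed"}), frozenset({"work-ready"})),
]

pathway = plan(frozenset({"referred"}), frozenset({"employed"}), actions)
```

Every consequential choice here - which propositions exist, which actions are legitimate, what the goal is - was made before the search began. That authorship is the design artefact the SCÖ plan assumed into existence.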
Complexity, Reduction, and the State-Space Problem
None of this should be surprising from a systems thinking perspective. The literature on complexity and wicked problems - Rittel and Webber's (1973) observation that the formulation of a wicked problem is the problem, Bronfenbrenner's (1979) nested ecological systems, Checkland's (1981) insistence that human activity systems resist the kind of clean decomposition that formal methods require - has long established that real-world service contexts do not reduce neatly to enumerable states and transitions. The Pathway Generator's state-space requirements are not just technically demanding. They are ontologically presumptuous - they assume that vocational rehabilitation can be adequately represented as a set of discrete states, transitions, and goals, when in practice it involves contested meanings, overlapping institutional jurisdictions, biopsychosocial complexity, and outcomes that different stakeholders define in incompatible ways.
But algorithmic systems require formal state spaces - that is what makes them computationally tractable. And they will be built regardless. The question is not whether to formalise but who participates in the formalisation, what gets included, what gets excluded, and whether the reduction is made consciously or by default. A state space is always a simplification. Whether it is a deliberate simplification - informed by contextual understanding of what the reduction costs - or an accidental one, where the constraints of the formalism quietly determine what the system can see, depends on whether anyone with that contextual understanding is involved in constructing it.
Planning and Design as Distinct Cognitive Activities
The distinction between planning and design has been articulated in several traditions. In design theory, it connects to Rittel and Webber's wicked problems and to Schön's (1983) reflective practice. In systems thinking, it connects to the difference between first-order problem solving (optimising within given constraints) and second-order inquiry (questioning what the constraints should be). The most operationally precise articulation I have encountered comes from the military design movement - traced by Jackson (2019) and Zweibelson (2023) through systemic operational design and its subsequent doctrinal adoption - which emerged from operational environments that resisted conventional planning approaches. But the stakes involved in getting the cognitive mode wrong are not unique to military contexts. Healthcare - the domain where I have spent most of my working life - demands comparable rigour. As Jones (2013) observes, healthcare environments "require the use of far more rigorous design and development methods than the contemporary trend in user experience and service design" (p. 11), precisely because the consequences of poor design decisions extend to patient harm. The human factors and cognitive work analysis traditions, developed originally for nuclear power and subsequently applied across healthcare, aviation, and military systems (Stanton and Salmon, 2017), recognise all of these as safety-critical domains where the relationship between formal domain modelling and operational performance is not academic but consequential. The U.S. Army/Marine Corps Counterinsurgency Field Manual (2006) offers a useful articulation of the distinction:
"While both [design and planning] seek to formulate ways to bring about preferable futures, they are cognitively different. Planning applies established procedures to solve a largely understood problem within an accepted framework. Design inquires into the nature of a problem to conceive a framework for solving that problem. In general, planning is problem solving, while design is problem setting".
Planning is problem solving - working within an accepted framework. Design is problem setting - constructing the framework itself. The distinction is not about complexity or difficulty; it is about whether the framework exists yet.
The manual's second observation is equally relevant:
"When situations do not conform to established frames of reference - when the hardest part of the problem is figuring out what the problem is - planning alone is inadequate and design becomes essential. In these situations, absent a design process to engage the problem's essential nature, planners default to doctrinal norms; they develop plans based on the familiar rather than an understanding of the real situation".
The SCÖ project defaulted to exactly this pattern. It had a plan but no framework within which that plan could operate; planning was applied to a domain nobody had mapped or understood, and it got nowhere. Meanwhile, the project milestones were quietly redefined as original goals proved unachievable, and the performance of innovation - meetings held, reports filed, workshops facilitated - substituted for the substance of it. The question of why this happened - whose interests the absence of design work served, and what institutional pressures made planning-without-framework not just tolerable but preferable - is one this series will return to in due course.
What This Opens Up
The convergence at SCÖ - algorithmic requirements that revealed technoimaginary assumptions about data readiness and institutional capacity, programme leadership that had little grasp of the complexities of data work, and a designer positioned at the intersection who had not been involved in the original project planning and was expected to be the glue holding these disparate assumptions and component parts together - raised questions that extend well beyond a single failed project.
Where do state spaces come from? In classical AI planning, the domain definition is treated as given - the PDDL file arrives fully formed, and the planner's job is to navigate within it. But even in AI, this is a simplification; in machine learning the domain is inferred from data, in evolutionary approaches it is encoded in fitness functions and chromosome structures, and in each case someone has made consequential choices about what to represent and how. In organisations, in services, in complex social systems, these questions become harder still - who defines what states are possible? How do those definitions get constructed? What gets included and excluded, and by whom?
On this account - design as problem setting, planning as problem solving - design is logically prior to planning. The framework has to exist before navigation within it becomes meaningful. But the field manual's clean separation is a particular conception of design, rooted in the systemic design and military design traditions, and it deserves some reflection. Dubberly (2022) argues that framing design as problem solving at all - even as a complementary counterpart to planning - misrepresents the activity; in practice, "the process of designing leads to the discovery of both alternative means and alternative goals", which means design is not just setting the problem for planning to solve but continuously reframing what the problem is. And Drummond (2021) identifies the more common organisational reality: "by the time we turn our attention to the 'user experience' of the service we are designing, we are often limited in scope because of decisions that were made long before we started". Design rarely arrives with the authority to set the framework; it arrives after the plan has been drawn up, the funding secured, the commitments made - and is expected to operate within constraints it had no role in shaping, sometimes by people whose understanding of the problem space is partial or oversimplified.
What matters here, then, is not the theoretical priority of design over planning but the structural observation that follows from it: organisationally, culturally, institutionally, the work of constructing the framework is often not recognised as real work. The "real work" is planning and delivery. The consequence is that the most foundational decisions - how the problem domain is understood and represented - get made implicitly, by whoever happens to be building the system or structuring the programme, or securing its funding.
There is a further dimension here that is worth marking, even if it requires fuller treatment later in the series. The Pathway Generator was a genetic algorithm - it optimised rehabilitation pathways across a fitness landscape rather than navigating a classical PDDL-style state space. The AI course at Linköping taught me to think in terms of state spaces, preconditions, transitions, and goal states; the Pathway Generator's evolutionary approach encodes the domain differently, as a population of candidate solutions evolving toward better fitness. But both presuppose a formally defined domain: both require someone to have specified what variables matter, what values they can take, and what counts as a good outcome. The PDDL framing made this presupposition architecturally explicit; the genetic algorithm had it too, but distributed across fitness functions, chromosome encodings, and selection criteria - less visible, perhaps, but no less consequential. A Bayesian network would have modelled rehabilitation as probabilistic inference over conditional dependencies; a reinforcement learning agent as policy learning from accumulated reward signals. Each formalism encodes different assumptions about what rehabilitation is and what kinds of knowledge count as formally representable. The choice between them is not a neutral technical decision; it shapes what questions get asked, what aspects of the domain become visible, and whose understanding of rehabilitation gets privileged.
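A sketch may help locate where those distributed assumptions live in a genetic algorithm. Everything below is invented for illustration - the intervention codes, the fixed pathway length, and the fitness weights bear no relation to the real Pathway Generator's encoding. The point is that the domain model sits not in the search loop, which is generic, but in the chromosome structure and the fitness function, which are authored.

```python
import random

random.seed(0)

# Illustrative only: invented intervention codes and a fixed-length
# chromosome encoding - the encoding itself assumes pathways are
# fixed-length sequences drawn from a closed catalogue.
INTERVENTIONS = ["assessment", "therapy", "training", "placement", "follow-up"]
PATHWAY_LENGTH = 4

def fitness(pathway):
    """The fitness function is where 'what counts as a good outcome'
    gets encoded - every weight here is a hand-written value judgement."""
    score = 0
    if pathway[0] == "assessment":
        score += 2                                   # assumes assessment comes first
    if "placement" in pathway:
        score += 3                                   # assumes placement is the goal
    score -= len(pathway) - len(set(pathway))        # penalises repeated interventions
    return score

def evolve(generations=50, pop_size=30, mutation_rate=0.2):
    """Generic evolutionary loop: selection, one-point crossover, mutation.
    Nothing domain-specific happens here - the domain lives entirely in
    the encoding and the fitness function above."""
    pop = [[random.choice(INTERVENTIONS) for _ in range(PATHWAY_LENGTH)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, PATHWAY_LENGTH)
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(PATHWAY_LENGTH)] = random.choice(INTERVENTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Swap the fitness function and the same loop "discovers" entirely different pathways; the algorithm's apparent judgement is a reflection of choices made upstream of it.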
There is also the question of what happens when design work reveals that the plan is impossible. This is what my concept mapping at SCÖ did. It made visible that the preconditions for the plan did not exist, and that putting enough prerequisites in place even to prototype a service would be a long and uncertain undertaking. The organisational response - acknowledged but not engaged - is worth examining in its own right.
And what does this mean for the emerging terrain of generative and agentic technology? The Pathway Generator is a relatively constrained system - it recommends, a human decides, and its state-space requirements are at least explicit. The emergence of large language models and generative AI complicates this picture. LLMs do not operate on explicit state spaces in the PDDL sense; their ontological commitments are embedded implicitly in training data and learned representations, which makes them harder to inspect, contest, or redesign. Yet agentic systems built on top of these models - tool-using agents, workflow orchestration, automated decision pipelines - reintroduce the same structural requirements: they need defined action schemas, state tracking, goal specifications. The presuppositions are the same; they are just distributed across layers of abstraction that make them less visible. This raises a twofold challenge for service design. The first is organisational: whether designers are involved at all in constructing these foundations, or whether they arrive after the consequential decisions have already been distributed across tool configurations, model architectures, and platform constraints. The second is representational: whether service design's existing tools - journey maps, blueprints, service ecologies - are capable of engaging with these technologies as components of complex sociotechnical systems, or whether the discipline needs different forms of representation entirely to contend with the formal and infrastructural layers where these commitments are encoded.
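To make the relocation of those presuppositions concrete: below is a tool schema for a hypothetical agentic pipeline, written in the JSON-Schema style commonly used to describe tools to agents. The tool name, fields, and service codes are all invented. The `enum` is a miniature service catalogue - the same ontological commitment as a PDDL action schema, relocated into a configuration file where it rarely gets read as a design decision.

```python
# Hypothetical tool schema for an agentic pipeline. The field layout
# follows common JSON-Schema conventions, not any specific vendor API;
# every name and code here is invented for illustration.
refer_to_service = {
    "name": "refer_to_service",
    "description": "Refer a client to a rehabilitation service.",
    "parameters": {
        "type": "object",
        "properties": {
            "client_id": {"type": "string"},
            "service_code": {
                "type": "string",
                # This enum is an ontology: it decides which services
                # exist as far as the agent is concerned.
                "enum": ["ASSESS", "TRAIN", "PLACE"],
            },
        },
        "required": ["client_id", "service_code"],
    },
}

def validate_call(schema, arguments):
    """Minimal structural check: required fields present, enums respected.
    Anything outside the schema's vocabulary is simply invisible."""
    params = schema["parameters"]
    for field in params["required"]:
        if field not in arguments:
            return False
    for field, value in arguments.items():
        allowed = params["properties"].get(field, {}).get("enum")
        if allowed is not None and value not in allowed:
            return False
    return True
```

A referral to any service not in the enum fails validation - not because it is a bad idea, but because the schema's author never enumerated it. That is the state-space problem again, one abstraction layer down.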
Why This Matters for Service Design
The questions above - about where state spaces come from, about the relationship between planning and design, about what happens when formalisms meet institutional reality - are not only relevant to my experience at SCÖ. They point toward a challenge that service design as a discipline has not yet adequately addressed.
Public services are increasingly shaped by algorithmic systems. The Pathway Generator at SCÖ was one example, but the same structural requirements operate in risk scoring systems that allocate resources, triage tools that prioritise cases, and the generative and agentic technologies now entering public service delivery. All of these systems presuppose representations of the domains they operate in - objects with properties, states with transitions, events with preconditions and effects - whether those representations are made explicit (as in PDDL-style planning) or absorbed implicitly from training data (as in large language models). Someone has to construct or interrogate those representations. These are design decisions of the first order; they determine what the system can see, what it can reason about, and what it renders invisible.
Service design has a rich practice tradition for understanding how people experience services, how organisations deliver them, and how both could work better. But the discipline's representational toolkit - journey maps, blueprints, service ecologies - was developed for a different purpose. These tools foreground actions, experiences, and touchpoints. They do not readily express the ontological commitments that algorithmic systems require: what objects exist, what properties they have, how those properties change over time, and what constitutes a legitimate transition between states. When service designers cannot participate in these foundational questions, the most consequential design decisions get made elsewhere - by data scientists choosing features, by engineers defining schemas, by product managers specifying business rules - often without the contextual understanding that design practice provides.
This series is an attempt to build the conceptual infrastructure that would allow service design to engage with these questions. It proceeds in layers. The first is ontological: what are public services fundamentally about, what kinds of objects do they engage with, what events change those objects, and how do different traditions - cognitive science, information systems, philosophy, social theory - define these categories? The second is representational: how can these ontological categories be formally expressed, whether through Gärdenfors's conceptual spaces as a geometric framework for meaning, Harel's statecharts as a visual formalism for reactive systems, or the underlying mathematical structure of state spaces? The third is grammatical: what compositional systems - Wilkinson's Grammar of Graphics, Iqbal's service grammar built on Promise Theory, Frost's Atomic Design - allow complex service architectures to be described, analysed, and constructed? And the fourth is critical: what are the limits and politics of formalisation itself, and what happens when the messy, contested, contextual reality of public services meets the demands of formal representation?
The series draws throughout on what I experienced at SCÖ as empirical material for understanding how design work operates (and fails to operate) at the boundary between institutional practice and algorithmic reasoning.
The ambition is not to resolve these questions but to equip service designers - and the interdisciplinary teams they work within - with a richer conceptual vocabulary for engaging with the formal and ontological commitments that algorithmic systems require. If design is to have a substantive role in shaping how public services integrate with algorithmic and agentic technology, rather than being confined to the interface layer after the consequential decisions have already been made, it needs this kind of infrastructure.
Next: "Objects, Entities, and Things: What Services Act Upon" - what counts as an "object" in the context of public services, and why the answer matters for state-space construction.
References
Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Harvard University Press.
Checkland, P. (1981). Systems Thinking, Systems Practice. Wiley.
Drummond, S. (2021). Full Stack Service. Snook.
Dubberly, H. (2022). Why we should stop describing design as "problem-solving". Dubberly Design Office.
Jackson, A.P. (2019). A brief history of military design thinking. The Bridgehead.
Jones, P.H. (2013). Design for Care: Innovating Healthcare Experience. Rosenfeld Media.
Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4(2), 155-169.
Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action. Basic Books.
Stanton, N.A. and Salmon, P.M. (2017). Cognitive Work Analysis: Applications, Extensions and the Future. CRC Press.
U.S. Army/Marine Corps (2006). The U.S. Army/Marine Corps Counterinsurgency Field Manual (FM 3-24). University of Chicago Press.
Zweibelson, B. (2023). Understanding the Military Design Movement: War, Change and Innovation. Routledge.