The legibility problem
The previous post argued that governance structures in public sector programmes are better understood as design material than as obstacles to be navigated. But there is a prior question that argument left unexamined: what kind of knowledge do these governance structures produce, and what do they structurally prevent from being known? If governance is a design material, it is one with distinctive properties - properties that determine what the programme can perceive and what it cannot, what it can measure and what remains invisible regardless of anyone's intentions. Understanding those properties requires engaging with a body of literature that programme management scholarship itself rarely draws upon: the work on legibility, audit culture, and governmentality that explains why large-scale governance systems produce the specific blindnesses they do.
Scott (1998) provides the foundational account. His analysis of how states render complex social realities governable identifies a mechanism that applies with equal force to programme management: legibility requires simplification. The state, Scott argues, takes "exceptionally complex, illegible, and local social practices" and creates "a standard grid whereby it could be centrally recorded and monitored" (p. 3). The resulting simplifications function as "abridged maps" that "did not successfully represent the actual activity of the society they depicted, nor were they intended to; they represented only that slice of it that interested the official observer" (p. 3). This is not an accusation of malice or incompetence; it is a description of what governance at scale necessarily requires. A national programme coordinating change across dozens of provider organisations cannot operate on the basis of situated, particular knowledge about each organisational context. It needs standardised categories - products deployed, organisations onboarded, milestones achieved - that are portable enough to travel from local delivery team to regional board to national programme report without requiring the reader to understand any particular context.
Shore and Wright (2024, p. 13) name this mechanism explicitly in their analysis of audit culture: governance operates by "imposing standardised taxonomies and measurements to make the world legible to the state so that it can be governed at a distance". The phrase "governed at a distance" is precise. The distance is not incidental; it is constitutive of the governance relationship. A programme board sitting in London reviewing a highlight report about a data platform deployment in a regional trust is governing at a distance by definition, and the reporting infrastructure must produce categories that work across that distance. The categories that survive the journey are those that can be standardised, counted, and aggregated: numbers of deployments, percentages of adoption, RAG-rated risks. What cannot survive is the particular - the situated knowledge of how a specific team in a specific trust actually uses a specific product, whether their workflow has genuinely changed, whether the data quality has improved in ways that affect clinical decisions.
What audit assures
The literature on audit culture identifies a further mechanism that deepens this analysis. Strathern (2003, p. 5), in her account of how audit practices reshape the organisations they are applied to, observes that "what is being assured is the quality of control systems rather than the quality of first order operations. In such a context accountability is discharged by demonstrating the existence of such systems of control, not by demonstrating good teaching, caring, manufacturing or banking". The observation is not that audit is fraudulent or that control systems are unnecessary; it is that the audit relationship systematically privileges second-order evidence (does the organisation have processes for ensuring quality?) over first-order evidence (is the teaching, caring, or service delivery actually good?).
This displacement has direct consequences for programme management. When a programme board reviews a highlight report, what it assesses is whether the programme has governance processes in place - risk registers maintained, gateway reviews completed, benefits realisation plans documented - not whether the services the programme is delivering actually work for the people using them. The programme can demonstrate comprehensive governance while producing services that clinicians cannot use, patients cannot navigate, and operational staff work around rather than with. Strathern's point is that this is not a failure of the audit; it is what audit does. The assurance operates at the level of the control system, and the first-order operations remain, in an important sense, ungoverned - not because nobody cares about them, but because the governance infrastructure was not designed to see them.
Hunter (2015, p. 3) extends the analysis from what audit sees to what it produces. Audit technologies are not merely observation instruments; they are "a means of governing subjects; of making them more governable by constituting them as the sorts of subjects demanded by the programmatic ambitions of government". The programme manager who organises their work around highlight reports, risk registers, and RAG ratings is not simply complying with a reporting requirement; they are being constituted as a particular kind of professional subject - one whose competence is legible through the metrics the governance apparatus can register. The new managerialism, Hunter argues, operates through "economy, efficiency and effectiveness" as mechanisms that "regularise and standardise professional practice into narrowly defined and measurable performance criteria" (2015, p. 3). The programme manager is not choosing to ignore situated, qualitative evidence about whether services work; they are operating within an accountability structure that recognises certain kinds of evidence and not others, and their professional identity is bound to the kinds of evidence it does recognise.
Decomposition as the mechanism
The mechanism through which legibility requirements produce specific blindnesses in programme management is decomposition. Suoheimo and Jones (2025, p. 21) identify the pattern directly: "contemporary governance models tend to deconstruct problems into smaller sub-problems but then isolate them into separate units", resulting in "specialised silos without horizontal information flow". This is not an observation about poor management practice; it is a description of how programmes necessarily organise themselves to be governable. A complex, cross-cutting challenge - redesigning how health data flows between primary and secondary care, say, or building a platform that must serve clinical, operational, and analytical users simultaneously - is decomposed into workstreams because that is what programme governance requires. Each workstream needs a lead, a plan, a set of deliverables, a risk register, and a reporting line. The decomposition makes the challenge legible to the programme board: instead of an irreducible complexity, the board sees a set of manageable components, each with its own status, its own RAG rating, its own trajectory.
The problem is not that decomposition is wrong as an organising principle; it is that the categories of decomposition determine what the programme can subsequently perceive. When a programme decomposes a challenge by product or technology domain - an infrastructure workstream, a data ingestion workstream, a front-end applications workstream, an onboarding workstream - each workstream generates reporting about its own deliverables. What no workstream reports on, because no workstream owns it, is the user journey that crosses workstream boundaries, the service quality that emerges from the integration of components rather than from any component individually, or the behaviour change in clinical practice that the programme was ostensibly commissioned to produce. These cross-cutting concerns are not invisible because they are unimportant; they are invisible because the decomposition that made the programme governable simultaneously made them unreportable.
Scott's (1998) distinction between epistemic knowledge and metis - the practical, situated knowledge that comes only from direct engagement with a particular context - maps precisely onto this structural gap. The programme's reporting infrastructure captures epistemic knowledge: counts, categories, status indicators that can be codified and transmitted across distance. What it cannot capture is the metis of actual service use - the workarounds that clinicians develop when a system does not fit their workflow, the informal practices that emerge around a poorly designed data entry process, the difference between an organisation that has technically adopted a platform and one where the platform has genuinely changed clinical practice. That knowledge exists, but it exists in the particular contexts that the governance-at-distance relationship necessarily abstracts away.
Mismeasurement and the performative fix
Christensen and Lægreid (2007, p. 14) give this structural blindness a name: "mismeasurement happens when less important but quantifiable aspects of organizational activities are reported, whereas more crucial but non-quantifiable aspects remain unreported". The formulation is careful: the problem is not that nothing is measured, but that the wrong things are measured, and the measurement creates the appearance of knowledge where there is in fact ignorance. A programme that reports 85% deployment across target organisations appears to know something about its progress; what it does not know - and what its reporting infrastructure cannot tell it - is whether the deployed product is being used, whether the use has changed anyone's practice, or whether the change in practice has produced the outcome the programme was commissioned to deliver.
The consequences of mismeasurement extend beyond mere ignorance. Shore and Wright (2024) document how performance indicators, once established, reshape the organisations they measure. When a programme is judged by deployment percentages, organisational energy flows toward deployment - toward the activities that move the metric - and away from the situated work of understanding whether deployed products serve their intended users. This is not gaming in the cynical sense; it is a rational response to the incentive structure that the measurement apparatus creates. The programme manager who prioritises deployment over adoption is doing exactly what the governance structure rewards. The programme board that celebrates a deployment milestone is responding to exactly the evidence the reporting infrastructure provides. The problem is structural, not motivational: the governance apparatus produces evidence about what it can see, and what it can see is determined by the categories of legibility it was designed to operate with.
Strathern's observation about first-order and second-order operations becomes particularly acute here. A programme can demonstrate that it has a benefits realisation framework (second-order) without demonstrating that any benefits have been realised (first-order). It can document a theory of change without testing whether the theory holds. It can report stakeholder engagement metrics - the number of show-and-tells held, the number of organisations consulted - without establishing whether anyone's understanding of the domain has actually deepened. In each case, the governance apparatus provides assurance about the control system while the first-order operations remain unexamined. The performance-and-substance gap that Meyer and Rowan (1977) identified - where formal structures decouple from actual practice - is not an aberration in programme governance; it is what the legibility requirement, operating through decomposition and audit, systematically produces.
What this means for design's position
The argument of this post is not that programme governance is illegitimate or unnecessary. The pressures that produce it - political accountability, clinical safety, public spending scrutiny - are real, and the structures that respond to those pressures serve genuine functions. The argument is that these structures have a specific property: they determine what the programme can see, and what they render invisible is precisely the kind of knowledge that design practices produce.
Design's cross-cutting perspective - the user journey that traverses workstream boundaries, the service experience that emerges from integration, the behaviour change that requires situated understanding - is structurally excluded not because anyone decided to exclude it but because the categories of legibility through which programme governance operates do not have a place for it. The programme can see products but not services; outputs but not outcomes; deployment but not adoption; adoption but not practice change. Each of these distinctions corresponds to a transition between layers in the five-layer model this series has been developing: from L1 (component) to L2 (product) to L3 (service) to L4 (governance) to L5 (policy). Programme governance is comfortable reporting at L1 and L2 - components built, products deployed - but the transitions to L3 and beyond require exactly the situated, relational knowledge that the legibility requirement abstracts away.
This is why governance functions as a design material in the specific way the previous post described: not because governance is a neutral medium that design can shape however it likes, but because governance structures carry embedded assumptions about what counts as evidence, what counts as progress, and what counts as an outcome. Working with governance as a design material requires understanding these assumptions - understanding that the decomposition into workstreams is not just an organisational convenience but a legibility mechanism that determines what the programme can see, that the reporting infrastructure is not just an administrative burden but an epistemological commitment about what kinds of knowledge count, and that the audit relationship assures control systems rather than first-order operations.
The practical strategies that the next post develops - finding insertion points in existing governance rhythms, speaking the language of risk, packaging design evidence in programme-legible forms - are strategies for operating within a system whose properties this post has tried to make explicit. They are strategies for making design's cross-cutting knowledge visible within an infrastructure that was not built to carry it; for creating what Scott might call legible representations of metis, without losing the situated particularity that gives metis its value. Whether that translation is possible without the knowledge being fundamentally transformed in the process - whether situated understanding can be made legible without becoming merely another standardised category - is the tension that runs through the remainder of this series.
References
Almquist, R., Grossi, G., van Helden, G.J. and Reichard, C. (2013) 'Public sector governance and accountability', Critical Perspectives on Accounting, 24(7-8), pp. 479-487.
Christensen, T. and Lægreid, P. (2007) Organization Theory and the Public Sector. London: Routledge.
Hunter, S. (2015) Power, Politics and the Emotions: Impossible Governance? London: Routledge.
Meyer, J.W. and Rowan, B. (1977) 'Institutionalized organizations: Formal structure as myth and ceremony', American Journal of Sociology, 83(2), pp. 340-363.
Scott, J.C. (1998) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
Shore, C. and Wright, S. (2024) Audit Culture: How Indicators and Rankings Are Reshaping the World. London: Pluto Press.
Strathern, M. (ed.) (2003) Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. London: Routledge.
Suoheimo, M. and Jones, P.H. (2025) Systemic Service Design. London: Springer.