What changes when governance becomes a design input?
This series has been circling around the concept of governance for some time without confronting it directly. The first post described the epistemological tension between design and programme management, and noted that programme management cultures are shaped by real institutional pressures - political accountability, clinical safety, audit - that make governance not a bureaucratic preference but a structural necessity. The second post argued that design offers programme management a cross-cutting view and the ability to surface invisible decisions; the third examined the translation work required to make design insights legible within governance structures. But in each case, governance appeared primarily as the context within which design operates - the landscape to be navigated.
What I want to explore here is a different possibility: that governance might be better understood not as the landscape but as part of the material. This is a shift I have been working through gradually, and I am not sure I have fully resolved what it implies. But the direction of the thought runs from treating governance - approval steps, consent architecture, access controls, audit requirements - as a set of obstacles to navigate around, or as fixed criteria to be met, constraining design practice, toward treating these structures as design inputs: things to work with rather than friction to minimise or obstacles to jump over.
The instinct trained into designers, particularly those formed in the tradition of digital service design with its emphasis on speed, iteration, and the removal of friction, is to design the service first and accommodate governance afterwards, in the final stages of delivery: 'pass the accessibility tests', 'sign off with key stakeholders'. In low-stakes contexts - a marketing website, a content management system, an internal tool - this ordering does relatively little harm.
But in healthcare, or other contexts where services handle sensitive personal data and where decisions affect clinical outcomes, something about this ordering starts to feel wrong. The approval steps, the consent architecture, the access controls, the audit trails: for the people who use these services, these structures are not peripheral to the experience. They may, in important respects, be the experience - or at least intrinsic to the service's capacity to carry the trust and engagement of its users. The earlier work on performance and substance established that organisations adopt structures for legitimation as much as for effectiveness; the question here is whether governance in healthcare data contexts might be one of the cases where the formal structure is genuinely constitutive of what the service is, rather than ceremonial.
What does it mean for a constraint to be generative?
Metcalf (2014, p. 8) describes formal and informal governance systems as working together to "constrain or liberate" what design can do. The phrasing is suggestive: constrain or liberate, not constrain and then, separately, liberate. There is an implication that constraints do not only close down options; they might also create the conditions for trust, which in turn opens up possibilities that would not exist without them. Participants share whole genomes because they trust the governance. Researchers get access to sensitive datasets because the access controls are robust. Clinicians adopt new tools in part because the clinical safety processes give them confidence. If this holds, then weakening governance does not merely create risk; it erodes the foundation on which participation and adoption depend.
Cooper (2019) frames something similar at an institutional level: governance provides stability, and that stability is what allows innovation to occur within a trusted framework. The claim, if taken seriously, is counterintuitive - that more governance, not less, might expand the design space, provided the governance itself is well-designed and sympathetic to how design as a practice functions. Whether this holds presumably depends on the quality of the governance; the argument is for thoughtful constraint, not for constraint as such. And the distinction matters, because it is easy to observe cases where governance does not function generatively - where strategic abstraction displaces delivery, or where process accumulates in response to historical failures without anyone revisiting whether it still serves a protective function. The question is not whether constraint is always generative but under what conditions it becomes so.
Designing with governance
If governance requirements are design inputs rather than external constraints, the question facing a designer shifts. Instead of "how do we minimise this friction?", the question becomes "what is this requirement protecting, and how can we design an experience that honours that protection while serving the user?" These are different design problems, and they tend to produce different solutions.
The dashboard work from the parallel series illustrates this concretely. The distinction between measurement for improvement and measurement for accountability, and the assumptions embedded in transparency as a policy doctrine, both point to the same observation: the same data, governed differently, serves fundamentally different purposes. A performance metric published with accountability governance - named trusts, ranked comparisons, public transparency - shapes behaviour through reputational pressure. The same metric shared with improvement governance - anonymised benchmarks, contextual analysis, tools that support learning and the identification of peer networks - shapes behaviour through insight and professional development. The governance model is not a wrapper around the data; it constitutes the meaning of the data.
Vogd and Knudsen (2014, p. 17) identify a related point from systems theory: constraints can have "enabling effects for organisations in terms of how they take account of ethics". In healthcare, the clinical safety requirements that designers sometimes experience as bureaucratic obstacles are the mechanisms through which the organisation discharges its duty of care. Reading them this way - as expressions of institutional ethics rather than as administrative overhead - changes the design problem. The task becomes not to streamline the clinical safety process but to design a service that makes the clinical safety process work well for the clinicians who depend on it.
Why this matters in healthcare data
In health data contexts, the stakes of getting this wrong become particularly visible. Often, health data cannot be anonymised in any reliable sense, and the consequences of governance failure extend beyond individual harm to institutional trust. If a health service loses public trust through a data breach or a consent violation, the damage is not limited to the affected individuals; it undermines the willingness of future participants to contribute their data, which undermines the research that depends on large-scale participation, which undermines the clinical applications that depend on the research. The chain is long, and each link depends on the one before it.
This suggests that the governance architecture of a health service might be better understood not as a compliance layer but as the infrastructure of a social contract between participants and the service. Every consent step, every access control, every audit trail is a promise made to participants about how their data will be used. If that framing holds, then part of the designer's job is to make those promises legible - to design experiences in which participants can understand what they are consenting to and feel confident that the consent will be honoured.
Working this way requires understanding why each governance element exists - not just that it is required, but what trust relationship it sustains. The boundary objects analysis from the previous post becomes concrete here. The governance documentation - the data protection impact assessments, the information governance frameworks, the clinical safety cases - are boundary objects that coordinate between regulatory, clinical, technical, and design communities. Reading these documents as design specifications, seeing in the access control requirements the shape of a user experience, translating consent requirements into (ongoing) interaction patterns and the basis of a relationship: this is what treating governance as a design material looks like in practice.
From navigating to shaping
There is a further implication worth exploring: if governance shapes the design space, then shaping governance is itself a design activity. The earlier post on programme management cultures described the institutional response to failures like NPfIT: more governance, more process, more documentation. Shore and Wright (2024) trace this trajectory in detail for the NHS specifically, showing how medical audit - originally a profession-led activity concerned with clinical outcomes - was progressively absorbed into the apparatus of New Public Management, becoming clinical governance, then performance management, then market-making. In the process, "what constitutes 'value' shifted from the concerns of highly skilled medical professionals to the pecuniary interests of investors" (Shore and Wright, 2024, p. 10). The governance structures that designers now encounter in NHS programmes are not neutral administrative frameworks; they are the sedimented product of four decades of reform in which the meaning of accountability itself has changed.
But not all governance is well-designed. Some governance requirements are genuinely burdensome without being genuinely protective; some approval processes exist because they were created in response to a specific historical failure and have never been reviewed; some documentation requirements serve no audience because the intended audience has changed or the risk they were managing has been mitigated by other means.
If governance is a design material, then designers are potentially in a position to contribute to the design of the governance itself - not by arguing against governance in principle, but by applying the same user-centred lens to governance processes that they apply to services. Who uses this approval process? What decisions does it support? What happens to the documentation it produces? Is the governance achieving what it was designed to achieve, or has it become what Meyer and Rowan (1977) would call ceremony - adopted for legitimation rather than effect?
Parkhurst (2016) offers a sharper vocabulary for the epistemic mechanism Meyer and Rowan identify. Where Meyer and Rowan describe the what - governance adopted for legitimation rather than effect - Parkhurst names the how: what he calls issue bias, in which a supposedly evidence-based argument is made by reference to a body of evidence that "only represents a limited number of relevant social concerns", converting a contested value judgement into an apparently technical finding and in doing so foreclosing the harder questions that a more candid accounting would require. Governance that operates through issue bias is not merely ceremonial; it is actively protective of a particular settlement, using the appearance of rigour to close down contestation rather than to invite it.
The FDP Benefits Toolkit provides a concrete instance from NHS programme contexts. The toolkit requires product teams to produce an attribution estimate - a percentage (5–50%) expressing how much of any measured improvement can be credited to the product. This convention derives from Full Business Case methodology, where structured expert estimates are a defensible approach to investment decisions under uncertainty. But the toolkit asks the same estimate to serve as post-pilot evidence that benefits were actually realised - a different epistemic task that the estimate cannot perform: it has no comparison group, no mechanism specification, no counterfactual. The percentage converts a governance judgement (is this product worth continuing?) into a technical-looking finding that satisfies programme board reporting while foreclosing the questions a genuine evaluation would require. In practice, benefits monitoring in FDP products has frequently developed retroactively: in one product context, "preparing and accelerating towards GA (including the benefits thesis)" appeared at priority three in a project priorities table, below resolving critical bugs. The benefits artefact existed; its function was governance compliance rather than design input. The toolkit's structural position - as a compliance checkpoint rather than an early-stage design instrument - produces this outcome almost inevitably. Governance designed as a gate will be treated as a gate.
This is not a counsel of despair about governance in programme contexts. It is a design challenge. The question of whether a benefits framework could be redesigned to function as a genuine input - shaping decisions about what to build, how to measure it, and what would constitute evidence that it worked - is as much a design problem as a governance one. The later post on value hypotheses in this series examines what that redesign might require.
The final post in this series takes up the practical dimension: finding insertion points, building alliances, knowing what can be influenced and what cannot, and the documentation practices that allow design decisions to survive contact with programme governance.
References
Cooper, S. (2019) Are We There Yet?: The Digital Transformation of Government and the Public Sector. Canberra: Department of the Prime Minister and Cabinet.
Metcalf, G.S. (2014) Social Systems and Design. Tokyo: Springer.
Meyer, J.W. and Rowan, B. (1977) 'Institutionalized Organizations: Formal Structure as Myth and Ceremony', American Journal of Sociology, 83(2), pp. 340-363.
Parkhurst, J. (2016) The Politics of Evidence: From Evidence-Based Policy to the Good Governance of Evidence. Abingdon: Routledge.
Shore, C. and Wright, S. (2024) Audit Culture: How Indicators and Rankings Are Reshaping the World. London: Pluto Press.
Vogd, W. and Knudsen, M. (2014) Systems Theory and the Sociology of Health and Illness. Abingdon: Routledge.