Working With the Grain: Practical Strategies for Designers in Programme Cultures

From theory to practice

The preceding posts in this series have established a set of arguments: that many public sector programme management cultures are optimised for accountability and risk reduction rather than being hostile to design by accident; that service design offers specific capabilities that programme management cannot generate from within its own logic; that deploying those capabilities requires translation work through artefacts that function across professional boundaries; that governance is a design material, not an obstacle to be navigated around; and that the legibility requirements through which programme governance operates determine what the programme can see, rendering cross-cutting, situated knowledge structurally invisible. This post turns to the practical question: given all of this, what can a designer actually do when working in a programme management culture? What are the strategies for working with the grain of programme management rather than against it?

The answer is not a methodology. Programme management cultures already have too many methodologies, and adding a design methodology on top of the programme methodology on top of the agile methodology on top of the clinical safety methodology is not a recipe for coherence. What follows are strategies - ways of working that have emerged from practice in public sector programme environments and that can be adapted to other contexts where design meets governance.

Finding the insertion points

Programme management cultures have regular rhythms: sprint reviews, programme boards, gateway reviews, show-and-tells, stand-ups, retrospectives. Each of these is a moment where the programme's attention is collectively focused, and each represents a potential insertion point for design intelligence - if it is packaged in the right form.

The question is not "how do I get a design review meeting on the calendar?" Adding a new meeting to an already overloaded governance structure is a losing strategy: it will most likely be seen as additional overhead, attendance will be poor, and the decisions made there will lack authority because they sit outside the governance structures where authority is invested. The more productive question is "how do I make the existing meetings carry design content?" - a sprint review that includes a user research finding expressed as a delivery risk, a programme board paper that includes a cross-cutting journey map expressed as an integration dependency, a gateway review that includes evidence of user need alongside evidence of technical readiness.

Each governance event can absorb a different kind of design contribution. A sprint review can absorb a specific, actionable finding about a feature under development. A programme board can absorb a strategic argument about scope or direction, but only if it is framed in programme language - risks, dependencies, options with trade-offs. A gateway review can absorb evidence of user readiness alongside technical readiness, but only if it is structured as evidence against the gateway criteria rather than as a standalone design assessment. The skill is matching the design insight to the governance moment.

Speaking the language of risk

Programme managers are professionally accountable for managing risk. If a designer can articulate user needs as delivery risks, they are speaking a language that programme management already respects. This is not cynical manipulation; it is accurate translation. In healthcare systems, poor user experience is a genuine delivery risk: clinicians who cannot use a system will develop workarounds that undermine data quality; patients who cannot navigate a pathway will present at more expensive points of care; operational staff who find a tool unusable will resist adoption, and the programme's benefits case will collapse.

The translation from design concern to delivery risk follows a pattern: identify the user need, identify the consequence of not meeting it in terms the programme cares about (cost, timeline, adoption, safety, reputation), and present the design work as risk mitigation rather than as an aesthetic or ethical preference. "Users find this confusing" is a design observation. "If we ship this without addressing the navigation issues, we will see the same adoption failure that happened with [comparable system], and we will need to fund a remediation programme" is a delivery risk. Both statements describe the same reality; the second is legible to programme governance in a way the first is not.

Malmberg (2017) identifies the organisational discordance between design culture and public sector structures as a fundamental challenge for building design capability. But the discordance is not symmetrical; programme management holds the institutional authority, the budget, and the decision rights. The designer who waits for programme management to come to them - to recognise the value of design on design's own terms - will wait a long time. The designer who translates design value into programme terms can start creating change immediately.

Building alliances with operations

Operational staff - the clinicians, administrators, analysts, and managers who actually use the systems that programmes build - are the designer's most natural allies in programme management cultures, though they rarely describe themselves in those terms. They do not say "the user experience is poor"; they say "this doesn't work for us" or "we've had to build workarounds" or "the data quality is terrible because nobody fills in the mandatory fields properly". These are the same insights expressed in different professional languages, and the designer's role is to connect them.

The alliance works because operational staff have institutional credibility that designers often lack. When a clinician says "this pathway does not work", the programme listens in a way it might not listen to a designer saying "the user research shows unmet needs". The designer's contribution to this alliance is analytical: taking the operational staff's experiential knowledge and giving it structure, connecting individual complaints to systemic patterns, linking specific workarounds - the Excel shadow systems, the doctors' notebooks - to design decisions that were made upstream.

The operational staff's contribution, in turn, is legitimacy. Their concerns carry weight in programme governance because they are the people the programme exists to serve.

The work on shared themes across public-sector organisational boundaries demonstrates this at the level of pattern: consistent operational concerns across fourteen trusts, expressed in operational language but describing design problems. The trusts had independently developed similar workarounds for similar shortcomings; the design contribution was to recognise this convergence and to articulate what it implied for the platform being built.

Knowing what you can and cannot influence

Not every programme management culture can be reshaped from within. Some organisational contexts are structurally closed to design input - not because individual programme managers are resistant, but because the institutional architecture does not create the conditions for design to operate. The programme may be vendor-led, with design authority residing in a commercial relationship that the designer cannot access. The governance structure may be so tightly specified that there is no space for the exploratory work that design requires. The political timeline may be so compressed that the programme literally cannot afford the time that user research would take.

The professional skill is recognising the difference between "this is hard but possible" and "this is structurally excluded". In the first case, the strategies described above - finding insertion points, translating into risk language, building operational alliances - can create real change. In the second case, the same strategies will produce frustration, burnout, and the painful experience of doing good work that the organisation cannot absorb.

Blomkamp (2021) identifies this as a pattern in systemic design practice: limited capacity and capability, combined with ways of working that do not accommodate design-led approaches, create inconsistent conditions for practice. The honest assessment of which kind of context you are in is not defeatism; it is a prerequisite for choosing where to invest energy. The reflections on working without a process and on when the brief is the problem document what this assessment looks like in practice - the gradual realisation that some contexts require not more effort but different expectations about what effort can achieve.

Documentation as design practice

In programme management cultures, what is not documented does not exist. This is not a figure of speech; it is a literal description of how institutional memory works. Design decisions made in conversation but not recorded in programme documentation will be overwritten by the next sprint, the next programme board, the next change of personnel. The designer who relies on shared understanding rather than documented decisions will find that the understanding was not as shared as they thought.

Treating documentation as a design material means writing clear, well-structured decision records that carry design rationale into the governance record. It means writing options papers that present design decisions in the format that programme governance expects: options, trade-offs, risks, recommendations, and the evidence base for each. It means ensuring that user research findings are not stored in a separate design repository but are referenced in the programme's risk register, in the business case, in the benefits realisation plan.

This is tedious work, but it is not beneath any designer's dignity. The programme board member who encounters a well-written options paper that integrates design evidence alongside technical and commercial considerations is encountering design work - even if it does not look like a journey map or a prototype. The decision record that captures why a particular interaction pattern was chosen, what user research supports it, and what risks would arise from changing it is a design artefact as surely as a wireframe. It is also, in programme management terms, a far more powerful one, because it exists within the governance infrastructure where decisions are actually made.

The long game

Working with the grain of programme management is a long game. The strategies described in this series - finding insertion points, translating between professional languages, building operational alliances, treating governance as a design material, documenting design decisions in programme terms - do not produce dramatic transformations. They produce gradual shifts: a programme board that begins to ask about user evidence alongside technical evidence; a sprint review that includes user research findings as a matter of course; a governance structure that allocates decision rights for design alongside decision rights for architecture and commercial.

These shifts are slow, and they require persistence through the frustration of working in a culture that was not designed for you. But they are also cumulative. A programme that has learned to include design intelligence in its governance does not easily unlearn it, because the design contributions make the programme's own objectives more achievable. The programme manager who has seen design-led risk identification prevent a delivery failure becomes an advocate for design involvement in the next programme. The clinical lead who has seen user research evidence improve adoption rates begins to expect it.

The ambition is not to replace programme management with design but to make programme management smarter by incorporating design's distinctive capabilities into the governance structures where decisions are made. Junginger (2015) describes designers finding themselves working "in environments where specific design legacies seemingly suffocate and frustrate their efforts"; the response is not to fight the legacy but to understand it, to work with its grain, and to reshape it from within - one governance decision, one programme board paper, one well-translated insight at a time.

References

Blomkamp, E. (2021) 'Systemic design practice for participatory policymaking', Policy Design and Practice.

Junginger, S. (2015) 'Organizational Design Legacies and Service Design', The Design Journal, 18(2), pp. 209-226.

Malmberg, L. (2017) Building Design Capability in the Public Sector: Expanding the Horizons of Development. Linköping University.