Design theory rests on an assumption so foundational it's rarely examined: that making things visible enables change. Prototypes surface problems, demonstrate viability, give users something tangible to test, or prove technical integration. Maps reveal terrain. Visualisations create shared understanding. From this visibility, alignment and action follow.
In my previous posts, I've described the algorithm archaeology at SCÖ, reflected on what the discipline of type specification revealed about the project's data dependencies, and documented watching the project's milestones silently pivot as original goals proved unachievable.
Now I want to reflect on what happened when visibility increased - and why it didn't produce what design theory would predict.
The Orthodox View
Design literature consistently emphasises visibility's productive power. Howlett and Mukherjee argue that "prototyping is central to design thinking as a practically focused and tangible mechanism for soliciting feedback from users" (Howlett & Mukherjee, 2018, p. 24). Visualisation and materialisation are understood as core mechanisms through which design works - making abstract problems concrete so stakeholders can respond.
The mechanism seems clear: you make something visible, stakeholders respond, understanding improves, better decisions follow. Design artefacts create affordances for dialogue through their physical form, enabling stakeholders to engage through pointing, placing, and moving. Visibility creates shared ground for conversation.
This assumption runs deep. Wastell, writing about managers as designers in public services, argues that design involves making problems and ideas visible, creating frameworks to make visual sense of complex information (Wastell, 2011). The International Organization for Standardization's guidance on human-centred design emphasises how scenarios, simulations, models and prototypes enable designers to communicate proposed designs to users and stakeholders (ISO, 2019). Visibility is design's core contribution.
What Actually Happened
My design work made things visible. The concept maps showed that stakeholders meant different things by "data science". The process diagrams showed the gap between idealised ML development and available infrastructure. The analysis documented why the Pathway Generator couldn't be piloted as planned.
This visibility was accurate. The silent pivot I described was enabled partly by my work providing an alternative narrative: we investigated, we learned, we documented conditions. There was talk of an "insight report" that would capture these findings - a deliverable that could replace the Pathway Generator pilot that was no longer possible.
But the visibility didn't produce what I'd expected. It didn't produce a strategic reassessment of the project's direction, honest conversation about what had been promised versus what was possible, changes to how future projects would be scoped, or clarity about my role or the PhD's viability.
Instead, what followed was harder to name.
The Texture of Response
I want to be careful here, because what I'm describing was largely unspoken - which is part of what made it difficult.
There was no meeting where someone said "your analysis is wrong" or "we disagree with your findings". The concept maps weren't contested. The process diagrams weren't critiqued. The silence around the findings was, in some ways, an acknowledgment of their accuracy.
But accuracy didn't translate into engagement. Meetings continued. Reports were filed. The PhD was discussed in terms of "pivoting" and "finding a new direction". Life went on as if the analysis existed in a parallel track - noted but not integrated.
What I experienced was something like avoidance. Not hostile, exactly. More like... the organisation developing an immune response to uncomfortable information. The information was present, but it wasn't metabolised.
Concurrent with this was an organisational restructuring driven by funding pressures - the ESF application for continued funding had been rejected. Up to eleven people would be made redundant, including me. Whether the restructuring was related to the project's difficulties, or was genuinely coincidental financial pressure, was never clear. Perhaps both.
Alternative Explanations
Before attributing this to organisational pathology, I should consider other interpretations.
Bandwidth constraints: Public sector organisations face relentless operational demands. Staff were managing service delivery, reporting requirements, and now impending redundancies. Perhaps the findings weren't ignored - they simply couldn't be prioritised amid more urgent pressures. Change management and communication are challenging regardless of organisational context.
Appropriate scope limitation: Perhaps the project's leadership understood the findings but judged them outside their authority to address. The structural issues I documented - data governance across multiple agencies, technical capacity gaps, coordination challenges - may have been correctly seen as requiring interventions at levels the project couldn't influence. Not engaging might have been realistic rather than defensive.
Different reading of the findings: I experienced the concept maps as documenting impossibility. Others may have read them differently - as useful context, as work in progress, as one input among many. What seemed to me like avoidance might have been reasonable disagreement about significance.
The researcher's position: I was a new staff member, a PhD student, an outsider with limited organisational history. My analysis may have carried less weight than I assumed it should. This isn't necessarily wrong - earned trust matters in organisations, and a one-year employee's dramatic conclusions might reasonably be treated with some scepticism.
How Organisations Respond to Exposure
These alternative explanations deserve serious consideration. But organisational research also suggests that when design artefacts expose problems, organisations can respond in patterned ways that go beyond individual judgement or bandwidth constraints. The literature on whistleblowing, workplace ostracism, and what McDonald, Graham and Martin (2010) call "outrage management" suggests a taxonomy worth developing - roughly ordered from most benign to most adversarial.
Benign Responses
Recognition and correction: "Thank you for showing us this. Let's change course". This is the response design theory imagines. Rare in my experience.
Dispute: "Your analysis is wrong. Here's why". At least this is engagement. Dispute treats the exposure as a claim that can be evaluated. It respects the epistemic status of the work, even while rejecting its conclusions.
Co-optive Responses
Deflection: "Interesting findings. Anyway, about the next milestone…". The exposure is acknowledged but not engaged. The conversation moves on. Ahmed (2019) describes how complaints can be stopped through conversations: "if those you speak to refuse to act on what you say, the path is blocked".
Collective silence: [no response] Different from deflection, which at least acknowledges before moving on. This is the point raised in a workshop or meeting that falls into a void - met with "difficult or uncomfortable silences" (Open University, 2012), averted gazes, a collective non-acknowledgement. The exposure isn't disputed or deflected; it simply isn't picked up.
Smithson and Venette (2013) describe "stonewalling" as an image-defence strategy where "silence involves withholding a response and relinquishing control". But in group settings, it functions differently - the silence isn't strategic withdrawal but social signalling. Everyone present learns that this is not a point to engage with. Brown (2010), writing about design practice, warns: "Don't take silence for complacency, agreement, or buy-in". The silence might mean the opposite.
This response is insidious because it leaves no trace. There's nothing to dispute - no counter-argument was made. The point was simply... not received.
Incorporation: "Great work. This will go in the report". The exposure is promised a place in some official record. The artefacts are positioned to authenticate a narrative rather than to enable change. This is what I've started calling design capture. What makes incorporation particularly effective as a neutralising move is that the promised report doesn't need to materialise. The promise itself does the work - it acknowledges the findings while deferring engagement indefinitely.
Adversarial Responses
Reinterpretation: "That's not what happened / what it means". McDonald et al. (2010) identify reinterpretation as a key tactic: reframing events to minimise their severity or shift their meaning. The exposure is acknowledged but its significance is contested. "Yes, we lack data infrastructure, but that's actually an opportunity for innovation".
Procedural absorption: "We'll need to take this through the proper channels". Official channels - grievance procedures, review boards, formal processes - can function to legitimise inaction rather than address concerns. As Ahmed (2019) observes: "If organizations can disqualify complaints because they take too long to make, they can also take too long to respond to complaints". The process becomes the point. McDonald et al. note that official channels "are more likely to work against low-level perpetrators who do not have the support of organizational allies" - suggesting that when exposure threatens those with power, procedural responses serve containment rather than correction.
Devaluation: "Consider the source". Attacking the credibility, competence, or character of whoever produced the exposure. McDonald et al. (2010) found devaluation "common in cases of sexual harassment, openly and/or through rumors". Khan (2015) describes "smear campaigns" and "distortion campaigns" as tactics to "negatively affect a target's social reputation". This doesn't require overt hostility. It can operate through implication: questioning methodology, suggesting inexperience, noting that the person "doesn't really understand the context".
Isolation: "We'll handle this internally". Ostracism and exclusion - withdrawing support, excluding from meetings, limiting access to information. Williams and Nida (2016) note that ostracism "can clearly play a functional role" in organisations, enabling members to sanction those who threaten group cohesion. Ellemers and de Gilder (2022) observe that "in the process of legally containing and averting responsibility for misbehavior in the workplace, those who express concern about behavioral violations can be perceived as disloyal troublemakers". The person who exposes becomes the problem.
Intimidation: "Think carefully about your next steps". Threats - explicit or implied. Poor references, unwelcome assignments, damage to career prospects. McDonald et al. (2010) found intimidation "identified in 18 cases, including threats of reprisals (39%), physically intimidating behavior (3 cases)". In academic-practitioner contexts, this might be subtler: implications about PhD progression, future collaboration opportunities, or professional reputation.
Cover-up: "This conversation didn't happen". Concealment, secrecy, destroying evidence. The most adversarial response - the exposure is treated as something that must itself be suppressed.
What I Experienced
Applying this taxonomy to my situation: what I experienced was mostly deflection, collective silence, and incorporation. The findings weren't disputed; they were acknowledged and filed, or simply not acknowledged at all. In workshops and discussions, points about missing data infrastructure or governance gaps would land in the room and... stay there. Unacknowledged. The conversation would continue as if the point hadn't been made.
This is the insidious thing about collective silence: it leaves you questioning whether you spoke clearly, whether the point was valid, whether you understood the context. The absence of response becomes a kind of gaslighting - undermining your sense of reality not through contradiction but through non-recognition.
The formal outputs were stranger still. There was supposed to be an "insight report" - a document that would capture the project's learnings about conditions for AI implementation. My work was positioned as contributing to this report. But the report itself never materialised, at least not in any form I ever saw. The closest thing was a presentation containing generic reflections - nothing that confronted the project's actual difficulties or documented what had been learned about why the original goals were unachievable.
This is incorporation taken to its logical extreme: design capture without the captured object. The promise of documentation does the work of acknowledgment while the documentation itself remains perpetually deferred. The findings were going to be in the report; the report was going to be produced; the project had therefore succeeded in producing insight. The syllogism completes without the middle term ever existing.
I find myself curious about what drives this pattern. Is it conscious strategy or emergent defence? The report functioned as what Beckert might call a "fictional expectation" - an imagined future that organises present action without requiring realisation. Like the Pathway Generator itself, the insight report operated as a technoimaginary: something that could be invoked, promised, and celebrated without confronting the awkward question of whether it existed.
I also experienced something adjacent to isolation - a gradual exclusion from conversations where decisions were being made, a sense that the work that exposed problems was less welcome than work that confirmed progress.
The taxonomy helps me see that my experience was relatively benign. The literature documents far worse. But it also helps me understand the range of possible responses - and why recognition and correction, the response that design theory imagines, may be the exception rather than the rule.
Defensive Routines: A Theoretical Frame
Organisational theory offers deeper frameworks for understanding these patterns.
Chris Argyris spent decades studying what he called "defensive routines" - patterns that protect organisational members from embarrassment or threat, but also prevent learning. One synthesis describes the core dynamic: "individuals keep their premises and inferences tacit, lest they lose control" (Argyris, cited in Ramage & Shipp, 2020, p. 39). Information that threatens existing commitments gets neutralised rather than engaged.
Defensiveness arises when it seems that undiscussable information might be surfaced - the risk of embarrassment or loss of face is substantial (Zuber-Skerritt & Wood, 2019). The key insight is that the information doesn't have to be contested - it can simply be rendered undiscussable. Acknowledged in principle, but not engaged in practice.
Wastell, whose work on "technomagic" I've drawn on elsewhere, describes how social defence mechanisms become the antithesis of genuine organisational learning (Wastell, 1999). Defensive routines don't require conspiracy or ill intent. They emerge naturally when people face information that threatens investments they've made - reputational, financial, psychological.
Fotaki and Hyde develop the concept of "organisational blind spots" to explain how organisations remain committed to failing strategies. They argue that "individual psychic processes of idealization, splitting, and blame contribute to the creation of social defences operating at group and organizational levels" (Fotaki & Hyde, 2014, p. 7). The organisation doesn't refuse to see - it develops systematic ways of not-seeing.
What I observed fits this pattern. The project had accumulated commitments: funding secured on the basis of AI/ML promises, academic reputations linked to the collaboration, a job description (my job description) that specified federated learning, monthly reports that had declared progress toward now-abandoned milestones. Acknowledging that the foundational premise was wrong would threaten all of these. Easier to let the findings exist - technically acknowledged - while continuing as if they hadn't fundamentally changed what was possible. This is decoupling in Meyer and Rowan's sense: formal structures - findings acknowledged, reports promised - maintain a legitimate surface while actual operations continue undisturbed.
The Designer's Position
This puts the designer in a difficult position.
The design work worked - in the sense that it successfully made visible what needed to be seen. The concept maps were clear. The diagrams were accurate. The synthesis was sound. By the standards of design practice, I did my job.
But the design work failed - in the sense that visibility didn't produce the outcomes design theory predicts. There was no productive dialogue. No strategic pivot based on new understanding. No "aha" moment where stakeholders aligned around a better path forward.
The visibility I created seems to have been absorbed into something like a defensive routine. It was positioned as contributing to an insight report - a promised deliverable that never quite arrived. My work helped the organisation gesture toward victory on redefined terms, toward documentation that was always forthcoming. Whether I wanted this or not, my design artefacts may have ended up legitimising a retreat from the original goals - not by appearing in an actual report, but by their existence making the promise of a report more credible.
Boundary Objects and Exposure Devices
Design theory has concepts that help here. Bergman, Lyytinen and Mark define a boundary object as "an artifact or a concept with enough structure to support activities within separate social worlds, and enough elasticity to cut across multiple social worlds" (Bergman, Lyytinen & Mark, 2007, p. 5). Boundary objects succeed partly because they're ambiguous - different stakeholders can project different meanings onto them.
Boland and Collopy describe boundary objects as artefacts that serve "as an intermediary in communication between two or more persons or groups who are collaborating in work" (Boland & Collopy, 2004, p. 46). The flexibility is a feature, not a bug. It enables collaboration without requiring agreement.
But what I produced wasn't flexible in this way. The concept maps didn't enable multiple interpretations - they documented specific gaps. The process diagrams didn't support ambiguity - they showed absent steps. These artefacts forced confrontation with material specificity.
I've started thinking of this as the difference between boundary objects and what might be called exposure devices. Boundary objects maintain productive ambiguity. Exposure devices force confrontation with material reality. Both are design artefacts. They operate differently.
When institutional investment in a particular imaginary is high, boundary objects may enable continued collaboration while exposure devices might trigger defensive routines. I think my artefacts exposed rather than enabled - and perhaps the organisation responded accordingly. Design capture is possible because design artefacts are ambiguous in their function. A concept map can be a tool for strategic reorientation or evidence that strategic work happened. A process diagram can guide implementation or demonstrate that implementation was considered. The same artefact can serve different purposes depending on how it's used.
What Design Theory Doesn't Prepare You For
Design education prepares you for resistance in the form of disagreement. Stakeholders might push back on your prototypes. Users might reject your proposals. You iterate, you refine, you try again.
Design education doesn't prepare you for resistance in the form of non-engagement. For findings that are acknowledged but not acted upon. For artefacts that are praised and filed. For work that is technically successful but organisationally inert.
The assumption beneath design's visibility paradigm is that organisations want to see clearly. That accurate information, well-presented, will be welcomed and used. That design's contribution is to provide clarity that stakeholders are seeking.
But what if organisations have investments in not seeing clearly? What if accurate information threatens commitments that matter more than accuracy? What if the design artefact reveals something the organisation has reasons to avoid?
The framework of sacred and profane concepts offers one explanation. When concepts function as sacred - when their meaning is bound up with collective identity rather than empirical content - making them visible risks profaning them. Design artefacts that force specificity threaten the protective ambiguity that sacred performance depends on. The defensive response isn't irrational; it's the predictable consequence of exposing what ritual is designed to protect.
A Different Kind of Design Work
I'm starting to think that what I've done here is design work of a different kind than I was trained for.
Not design-for-implementation: creating artefacts that enable building something.
Rather, something like design-for-understanding: creating artefacts that reveal conditions, even when those conditions are inhospitable.
Or perhaps design-as-diagnosis: using design methods to surface what's actually happening, even when what's happening is that the project can't succeed.
This might be valuable work. Understanding why technology projects fail in public sector contexts matters. Documenting the gap between technological promises and material conditions has theoretical and practical significance. The case study I'm living through could inform future projects, future policy, future design education.
But it's not what I was employed for. And it requires a frame I'm still developing.
What Happens Now
My funding runs out in May. The organisation is restructuring - up to eleven of us may be made redundant. The PhD's future is uncertain - technically still connected to the university, but without the industrial partnership that was supposed to sustain it.
I don't know what comes next. Whether the work I've done will matter beyond my own learning. Whether I'll be able to continue the PhD in some form, or whether this is the end of it.
What I'm increasingly convinced of is that design theory needs to reckon with its visibility assumption. Making things visible is not automatically productive. In contexts where visibility threatens commitments, something happens that neutralises even accurate, well-crafted design work. I'm still working out what to call that something, and what it means for design practice.
The limits of making visible may be the limits of design's theory of change. But I'm not certain yet. I need to think about this more.
References
Ahmed, S. (2019). What's the Use? On the Uses of Use. Duke University Press.
Argyris, C. (1990). Overcoming Organizational Defenses: Facilitating Organizational Learning. Allyn and Bacon.
Bergman, M., Lyytinen, K. and Mark, G. (2007). Boundary Objects in Design: An Ecological View of Design Artifacts. Journal of the Association for Information Systems, 8(11), 546-568.
Boland, R. and Collopy, F. (2004). Managing as Designing. Stanford University Press.
Brown, D.M. (2010). Communicating Design: Developing Web Site Documentation for Design and Planning (2nd ed.). New Riders.
Ellemers, N. and de Gilder, D. (2022). The Moral Organization. Cambridge University Press.
Fotaki, M. and Hyde, P. (2014). Organizational blind spots: Splitting, blame and idealization in the National Health Service. Human Relations, 68(3), 441-462.
Howlett, M. and Mukherjee, I. (Eds.) (2018). Routledge Handbook of Policy Design. Routledge.
ISO (2019). ISO 9241-210:2019 Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems.
Khan, R. (2015). Avoidant Abuse: A Primer.
McDonald, P., Graham, T. and Martin, B. (2010). Outrage Management In Cases Of Sexual Harassment As Revealed In Judicial Decisions. Psychology of Women Quarterly, 34(2), 166-180.
Open University. (2012). Managing People and Organisations. The Open University.
Ramage, M. and Shipp, K. (2020). Systems Thinkers (2nd ed.). Springer.
Smithson, J. and Venette, S. (2013). Stonewalling As An Image-Defense Strategy: A Critical Examination Of BP's Response To The Deepwater Horizon Explosion. Communication Studies, 64(4), 395-410.
Wastell, D. (1999). Learning Dysfunctions in Information Systems Development: Overcoming the Social Defenses with Transitional Objects. MIS Quarterly, 23(4), 581-600.
Wastell, D. (2011). Managers as Designers in the Public Services: Beyond Technomagic. Triarchy Press.
Williams, K.D. and Nida, S.A. (2016). Ostracism, Exclusion, and Rejection. In D. Mashek and A. Aron (Eds.), Handbook of Closeness and Intimacy. Psychology Press.
Zuber-Skerritt, O. and Wood, L. (2019). Action Learning and Action Research: Genres and Approaches. Emerald.