My employment at the coordination association ends this month. The ESF funding wasn't renewed. Up to eleven of us are being made redundant. The PhD continues, technically, but without the industrial partnership that was supposed to be its foundation.
This is the final post in a series called "The Limits of Making Visible" - not what I was supposed to learn about federated learning, but what I actually learned about technology projects, design practice, and organisational life.
What This Series Has Documented
Over eight posts, I've traced an arc from technical promise to organisational reality.
I began by explaining what federated learning is - a genuine technology with real applications, but one that was invoked here for reasons that had little to do with its actual requirements. I explored the service context that vocational rehabilitation operates within, and considered whether mission-oriented framing might have helped or hindered. I documented what federated learning would actually require - data infrastructure, governance frameworks, technical capacity - none of which existed.
Then I described what happened when design work made these gaps visible. Algorithm archaeology at SCÖ revealed the gap between what the Pathway Generator required and what the Swedish context could supply. The silent pivot documented how project milestones were quietly redefined as original goals proved unachievable. And in the limits of making visible, I developed a framework for understanding how organisations respond when design artefacts expose inconvenient truths - from benign correction through co-optive incorporation to adversarial isolation.
What follows is an attempt to synthesise what I've learned.
On Federated Learning as Technoimaginary
Federated learning is real. Google uses it for keyboard prediction. Hospitals are piloting it for collaborative medical imaging research. The technical papers are sophisticated and the engineering is impressive.
But FL was invoked in this project for reasons that had little to do with its actual properties or requirements. It was invoked because it sounded advanced and innovative, because it appeared to solve the privacy problem that blocks data collaboration, because it created a PhD-shaped research question, and perhaps because it deflected attention from whether basic data science was possible.
The gap between what FL is and what FL was imagined to do here is the gap between technology and technoimaginary - between material capability and projected promise.
Beckert's work on "fictional expectations" helps explain this gap. He describes how "imagined futures help to explain actors' willingness to commit themselves to endeavors despite the incalculability of outcomes and environmental pressures to conform to established behaviors" (Beckert, 2016, p. 37). The fictional expectation of FL - that it would enable Swedish organisations to benefit from Icelandic algorithms without the hard work of building data infrastructure - mobilised resources and created positions. Whether the expectation was achievable was a question the fiction didn't require anyone to answer.
FL cannot conjure data infrastructure into existence. It cannot create governance frameworks. It cannot build technical capacity in organisations that lack it. It cannot make possible what the preconditions for possibility don't support.
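To make that last point concrete, here is a minimal sketch of federated averaging, the core FL training loop, on a toy one-parameter model. Everything in it - the function names, the learning rate, the made-up client datasets - is illustrative, not drawn from the project. What it shows is the precondition this series keeps returning to: each participating organisation must already hold usable local data, because the protocol only averages models trained on that data.

```python
def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a client's own data
    (a toy 1-D linear model, y ~ w * x)."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """One FedAvg round: each client trains locally, and only the
    resulting weights - never the raw data - are averaged."""
    # A client with no local data contributes nothing; FL can only
    # skip it. The protocol averages models, it cannot conjure data.
    updates = [local_update(global_w, d) for d in client_datasets if d]
    if not updates:
        return global_w  # no client has data: nothing to learn from
    return sum(updates) / len(updates)

# Toy run: two clients whose data follows y = 3x, plus one empty client.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(1.5, 4.5)], []]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 2))  # converges to the true slope, 3.0
```

The empty third client is the Swedish situation in miniature: with no raw data to train on, there is nothing for the averaging step to incorporate, however sophisticated the protocol around it.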
On Techno-Limerence
I've been trying to find language for the phenomenon I observed - the persistent attachment to a technology project despite accumulating evidence of its impossibility.
The term I keep returning to is techno-limerence: an infatuated attachment to an imagined technological future that persists regardless of material conditions. Like romantic limerence, it involves idealisation, obsessive focus, and resistance to disconfirming information.
The limerent attachment was not to the actual technology - the specific algorithms, data requirements, infrastructure needs - but to the imaginary of the technology. The imaginary is immune to material objections because it operates in a different register. "Yes, but imagine if we could..."
Rip's work on technology expectations describes how the "possibility of inflated promises leading to disappointment" keeps recurring in innovation cycles (Rip, 2012, p. 8). The hype cycle is well-documented. But what interests me is not the cycle itself but the persistence of attachment even as the cycle turns. Organisations don't abandon fictional expectations just because evidence accumulates against them.
Techno-limerence is sustained by several mechanisms: temporal displacement ("it will work eventually, even if not now"), abstraction (at high enough abstraction, any technology seems possible), sunk costs (having invested this much, we can't admit it won't work), and identity (we are the kind of organisation that does innovative things).
Breaking techno-limerence might require forcing confrontation with material specificity - which is what my design artefacts attempted to do. Whether that's why they were absorbed rather than engaged, I'm not certain.
On What I Experienced
In the limits of making visible, I developed a taxonomy of how organisations respond to exposure - from benign correction through co-optive incorporation to adversarial isolation. What I experienced was mostly in the middle ground: deflection, collective silence, and incorporation.
Points made in meetings would land and... stay there. Unacknowledged. The silence leaves you questioning whether you spoke clearly, whether the point was valid. That's the insidious thing about collective silence: it's gaslighting through non-recognition.
The formal outputs were stranger. There was talk of an "insight report" that would capture the project's learnings. My findings were positioned as contributing to this report. But the report itself never materialised - the closest thing I saw was a presentation containing generic reflections that didn't confront what had actually happened. The promise of documentation did the work of acknowledgment while the documentation remained perpetually deferred. That's design capture taken to its limit - artefacts serving to authenticate a narrative that doesn't need to exist in written form to function.
There's a theoretical frame for this that I developed earlier. Matthews' work on sacred service design describes how myth, ritual, and symbol sustain collective commitments that don't require - and may actively resist - empirical verification. The project meetings, the milestone reports, the promises of deliverables: these were constitutive performances that produced the project's reality through their enactment. The "insight report" didn't need to exist as a document because it already functioned as a ritual object - invoked in meetings, promised in emails, doing its work through perpetual imminence.
I want to be clear: my experience was relatively benign. The literature documents far worse - devaluation, intimidation, cover-up. But recognising where my experience sits on the spectrum helps me understand the range of possible responses, and why the correction that design theory imagines may be the exception rather than the rule.
On Design in Impossible Contexts
I was employed as a designer. I used design methods: stakeholder synthesis, concept mapping, process visualisation, workshop facilitation. I produced design artefacts: maps, diagrams, frameworks, presentations.
By conventional measures, this work was competent. The artefacts were clear. The synthesis was accurate. The findings were valid.
But the work didn't enable what design is supposed to enable. It didn't help stakeholders converge on a solution. It didn't facilitate implementation. It didn't produce the collaborative alignment that design theory promises.
Instead, my design work exposed. It made visible the gap between what was promised and what was possible. It documented an absence - the absence of data, infrastructure, capacity, and governance that would be needed for "data science" in any form.
Exposure is not the same as enablement. Design theory under-theorises what happens when design artefacts expose rather than enable - when they reveal inconvenient truths rather than productive possibilities. This is territory that design practitioners surely encounter, but that design education doesn't prepare you for.
On Academic-Practitioner Collaborations
This project involved a Swedish coordination association, an Icelandic occupational rehabilitation organisation, a couple of Swedish universities, and a research group from a UK university. The collaboration was structured around knowledge transfer: academic expertise flowing into practitioner contexts.
What I observed was a different dynamic. The academics had developed a tool (the Pathway Generator) in one context (Iceland) and wanted to extend it to another (Sweden). The practitioners wanted innovation and the legitimacy that comes with academic partnerships. The funders wanted "data-driven" approaches to public services.
But no one seems to have done the due diligence to determine whether the transfer was feasible. The research-practice gap runs in both directions - academics may not understand practitioner contexts, and practitioners may not interrogate academic claims. One could argue that some of this due diligence should have been done before my PhD position was approved; one could also argue that my research and employment were themselves part of that due diligence, an exploratory process. But if the latter justification holds, it's strange that mine was the only role terminated when the exploration revealed impossibility. Everyone else simply avoided acknowledging the points I was raising, while I was silently pivoted out of the project and out of the organisation. The very earnestness of my attempts to help - as a designer and researcher - got me burned, rather than shedding light on the situation or prompting any more constructive, generative response.
The UK academics had not assessed Swedish data conditions. The Swedish organisations hadn't interrogated what "federated learning" would require - or even recognised that to perform the most basic data science one needs raw data, and a governance framework to support its collection and use. The universities facilitating the PhD hadn't verified that the industrial partner could support the research. Everyone else, I suspect, either assumed someone else had done the groundwork or was quite happy to sustain the fiction for as long as possible.
I'd like to think that none of this is malice. It's the structure of academic-practitioner collaborations in funding environments that reward ambition over feasibility. The incentives favour proposing bold things and figuring out the details later. By the time the details reveal impossibility, commitments have been made and face must be saved. And those most responsible for the funding application - those who crafted the proposal - are also, it seems, those most shielded from the effects of failure.
This connects to something I raised in the previous post about the silent pivot: the same structures that enable goal displacement also produce the impossible goals in the first place. ESF funding logic, academic career incentives, and multi-stakeholder consortium governance together create a systematic pressure to propose things at high abstraction - where they sound achievable and transformative - without requiring anyone to verify whether the material conditions exist. "Federated learning for vocational rehabilitation" is precisely the kind of object these structures produce: technically plausible in the abstract, impressive in a funding application, generative of PhD positions and academic publications, but never grounded in the material reality of the organisations that would need to implement it.

The technoimaginary isn't an accident or a mistake in judgment. It's the predictable output of a funding and governance system that rewards ambition and penalises caution. Which means the design work I described in earlier posts - the algorithm archaeology, the typed interface definitions, the architecture diagrams - wasn't just exposing an unfortunate gap between promise and reality. It was exposing the structural mechanism by which certain kinds of collaboration systematically produce things that can't survive contact with specificity.

The performance-and-substance distinction I explored earlier frames this as a systemic feature, not an anomaly: when organisational legitimacy depends on adopting rationalized myths, the accountability structures that enforce those myths will also systematically resist the specificity that would expose them.
On the PhD
My PhD exists in a strange state. The industrial partnership is over. The original research question - exploring federated learning for vocational rehabilitation - is moot. The funding that was supposed to support five years lasted one.
But I have material. A year of observations. Design artefacts. Meeting notes. This series of blog posts. A case study of a technology project that promised one thing and delivered another.
The PhD I'll write is not the PhD I was employed to write. It won't be about implementing federated learning. It will be about why federated learning was proposed in a context that couldn't support it, what happened when design work made this visible, and what this reveals about technology projects in public sector contexts.
This is legitimate research. Understanding failure matters. Greenhalgh's work on technology implementation finds that "the overarching reason why technology projects in health and social care fail is multiple kinds of complexity occurring across multiple domains" (Greenhalgh, 2018, p. 4). Documenting the gap between technological promise and organisational reality has theoretical and practical value.
But I'd be lying if I said this was the plan.
What This Might Mean for Design Practice
I'm writing this for design practitioners who may recognise these patterns.
If you've ever produced work that was praised and filed, you've encountered design capture. If you've ever raised a finding in a meeting and watched it fall into silence, you've encountered collective non-recognition. If you've ever documented something true that nobody wanted to hear, you've encountered the limits of making visible.
Design education prepares you for resistance in the form of disagreement - stakeholders who push back, users who reject proposals. You iterate, you refine, you try again. Design education doesn't prepare you for resistance in the form of non-engagement - for findings that are acknowledged but not acted upon, for work that is technically successful but organisationally inert.
The assumption beneath design's visibility paradigm is that organisations want to see clearly. But what if organisations have investments in not seeing clearly? What if accurate information threatens commitments that matter more than accuracy?
I don't have answers. But I think the questions matter - for how we train designers, for how we structure technology projects, for how we understand the gap between what design promises and what organisational contexts allow.
What Comes Next
I'm leaving a job. I'm continuing a PhD on different terms. I'm carrying forward questions that this experience has sharpened:
- What is design's role in contexts where the foundations for design work don't exist?
- How do organisations maintain commitment to impossible technology projects?
- What happens when design artefacts expose rather than enable?
- How should we theorise the limits of making visible?
These questions connect to larger concerns about public sector digital transformation, about academic-practitioner collaborations, about the gap between technological imaginaries and material conditions.
The Swedish case was small - one coordination association, one ESF project, one failed PhD premise. But I suspect the patterns I observed operate at larger scales. The same dynamics of techno-limerence, goal displacement, and design capture may be at work in bigger, more consequential technology programmes.
That's a hypothesis I'll be testing. But that's for future posts.
References
Beckert, J. (2016). Imagined Futures: Fictional Expectations and Capitalist Dynamics. Harvard University Press.
Greenhalgh, T. (2018). How to Improve Success of Technology Projects in Health and Social Care. Public Health Research & Practice, 28(3), e2831815.
Rip, A. (2012). The Context of Innovation Journeys. Creativity and Innovation Management, 21(2), 158-170.