In previous posts, I've described what federated learning is and what it would actually require. Now I want to describe what happened when I tried to map what "data science" means to the various stakeholders in this project.
The result was a concept map of nothing - or more precisely, a map that documented an absence. I'm not sure yet what to make of this.
The Attempt
As a designer entering a complex multi-stakeholder project, my instinct was to do what designers do: make things visible. Map the terrain. Create shared artefacts that stakeholders can respond to.
I set out to answer what seemed like a foundational question: What do people mean when they talk about "data science" or "AI" or "machine learning" in this project?
The project involves multiple organisations - a Swedish coordination association, a Swedish university, a UK university, various municipal and regional partners - all ostensibly collaborating on "data-driven" approaches to vocational rehabilitation. The Pathway Generator algorithm from Iceland. The promise of federated learning. An ESF-funded programme with milestones to hit.
Surely, I thought, there must be some shared understanding of what "data science" means here, even if the details vary.
The Method
I reviewed project documentation. I attended meetings. I talked to people individually. I asked variations of the same questions:
- What data exists that might be relevant to this work?
- What questions might data help answer?
- What systems capture information about rehabilitation processes?
- What outputs do you imagine from "data science" in this context?
Then I tried to synthesise what I heard into concept maps - visual representations of how different concepts relate to each other across stakeholders.
What the Map Revealed
The map didn't reveal a coherent shared understanding with variations at the edges. It revealed that "data science" was functioning as what might be called a floating signifier - a term that different stakeholders filled with different content, enabling apparent agreement without actual alignment.
For some stakeholders, "data science" meant sophisticated machine learning: predictive models, algorithm-guided decisions, AI-powered tools. The Pathway Generator. Federated learning.
For others, it meant basic analytics: dashboards, reports, visualisations of service delivery patterns. Being able to see what they couldn't currently see.
For others still, it meant data collection: getting caseworkers to record information systematically. Having a database at all.
For a few, it seemed to mean something closer to "the digital transformation we keep hearing about but don't quite understand".
These don't feel like points on a spectrum to me. They seem like different things entirely. The gap between "we need a database" and "we need federated learning" doesn't look like a gap in ambition - it looks like a gap in what world we're describing.
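To make the shape of this divergence concrete, here is a toy sketch of the concept map as a graph. The stakeholder groupings and concept labels below are simplified stand-ins, not the project's actual terms; the point is only that once you drop the shared label "data science" and keep what each group actually meant by it, the map falls apart into separate worlds.

```python
# A toy reconstruction of the concept map as a graph.
# All stakeholder and concept labels are simplified stand-ins,
# not the project's actual terms.
import networkx as nx

G = nx.Graph()

# Link each stakeholder group to what it meant by "data science".
meanings = {
    "academic partners": ["predictive models", "federated learning"],
    "municipal partners": ["dashboards", "service reports"],
    "practitioner managers": ["systematic recording", "a database"],
}
for stakeholder, concepts in meanings.items():
    for concept in concepts:
        G.add_edge(stakeholder, concept)

# Everyone used the same term, but with the shared label removed
# and only the intended meanings kept, nothing connects the groups.
print(nx.number_connected_components(G))  # -> 3 disconnected worlds
```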
The Absence
More significant than the divergence was what no one could articulate clearly:
- What data actually exists. People knew their organisations collected some information, but couldn't specify what, where, or in what format.
- What questions data would answer. "Better outcomes" was invoked, but what outcome, measured how, for whom?
- What would change if data science succeeded. The operational implications of ML-supported decisions were vague.
The concept map, rather than documenting what people agreed about, seemed to document an absence - the absence of the foundations that "data science" would require. It started to feel like we weren't debating implementation details - we were debating whether there was anything to implement.
A Workshop Moment
During a project planning workshop, the group - a mix of Swedish practitioners and UK academics - was asked to articulate the challenges and problems facing the collaboration.
What emerged was striking. Participants wrote things like:
- "If a project is a group of people coming together to collaborate on a fixed problem for a fixed amount of time - is this really a project?"
- "It is hard to see the shared objectives of the group"
- "Any mention of 'real data' seems to provoke anxiety and stress"
- "We are operating a 'technology-push' model that is not necessarily grounded in real needs"
- "Is a coordination association actually a 'product-oriented' organisation?"
These notes articulated precisely the structural problems my concept mapping was revealing: no clear shared objective, no data to ground the work, a technology being pushed into a context that might not be able to receive it.
I don't know what to do with this yet. The problems are being named, but naming them doesn't seem to change anything. The workshop produced Post-it notes. Whether it will produce a change in direction, I can't tell.
The Materialisation Diagram
I also created a diagram titled "Extended and Idealised Model of a Design-Led Machine Learning Development Process". It traces the steps from understanding context and users, through collecting data, visualising and cleaning it, creating data models, evaluating them, and communicating results.
I'm uncertain about what this diagram is actually doing. On one level, it describes what should happen - a normative model of how design-led ML development ought to proceed. On another level, each step in the process is a step that can't be taken without infrastructure, data, and capacity that I'm increasingly unsure exists here.
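One way to make this concrete is to write the diagram down as if it were code. The sketch below is deliberately naive Python; the file name, columns, and outcome label are all hypothetical, because that is exactly what the project doesn't yet have. Every step presupposes a decision or an infrastructure that is currently missing.

```python
# The "idealised" pipeline from the diagram, written as naive code.
# Every name here is hypothetical: there is no agreed dataset,
# schema, or outcome measure in the project as it stands.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Collect data -- presupposes systematic recording exists somewhere.
df = pd.read_csv("rehabilitation_cases.csv")

# Clean -- presupposes a known, shared schema.
df = df.dropna(subset=["age", "months_in_programme", "outcome"])

# Model -- presupposes an agreed, measurable outcome.
X = df[["age", "months_in_programme"]]
y = df["outcome"]  # "better outcomes": measured how, for whom?
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate and communicate -- presupposes someone positioned to act on it.
print(classification_report(y_test, model.predict(X_test)))
```

The very first line is the one that fails here: there is no such CSV, and nobody could tell me what its columns would be. Everything after it is moot.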
Is the diagram a planning tool or an audit? I built it as the former, but it might be functioning as the latter.
What Kind of Design Work Is This?
I'm trained as a designer. Designers make things. We prototype, we iterate, we build.
But what do you build when the foundations for building don't exist?
The concept maps and diagrams I produced are design artefacts. They required design skills - synthesis, visualisation, stakeholder engagement, iterative refinement. They were presented in design language to design-oriented audiences.
But I'm not sure they're enabling artefacts - helping stakeholders converge on a solution. They might be something else. Design theory talks about boundary objects - artefacts with "enough structure to support activities within separate social worlds, and enough elasticity to cut across multiple social worlds" (Bergman, Lyytinen & Mark, 2007, p. 5). Boundary objects maintain productive ambiguity.
My artefacts don't feel ambiguous in that productive way. They feel like they're forcing specificity - making visible a gap between what was promised and what's possible. Whether that's useful or just uncomfortable, I genuinely don't know.
Alternative Interpretations
I should consider other ways to read what's happening.
Normal early-stage confusion: Perhaps this level of conceptual divergence is typical for complex multi-stakeholder projects in their early phases. The absence I documented might be a starting point rather than a verdict. Many successful projects begin with participants meaning different things by the same words.
My own framing: As a newcomer asking probing questions, I may have inadvertently constructed the "absence" through my method of inquiry. Different questions might have surfaced different (more coherent) understandings.
Different timescales: The practitioners and academics may be operating on different timescales - with some focused on immediate deliverables and others on longer-term capacity building. What looks like confusion might be appropriate tolerance for ambiguity in a developmental phase.
These are genuine possibilities. The concept map documented what I found, but what I found was shaped by how I looked.
Questions I'm Sitting With
The concept map documented an absence. The workshop surfaced a self-diagnosis. The materialisation diagram shows a process that presupposes conditions I'm not sure exist.
I don't know what this means for the project, or for my PhD. Some questions I'm carrying:
- Is this the kind of finding that leads to productive reorientation? Or the kind that gets filed and forgotten?
- What's my role here - to keep surfacing uncomfortable truths, or to find a way to be useful within constraints I can't change?
- Is there a version of this project that works, or am I documenting something that was never going to succeed?
I don't have answers. I'm going to keep paying attention.
References
Bergman, M., Lyytinen, K. and Mark, G. (2007). Boundary Objects in Design: An Ecological View of Design Artifacts. Journal of the Association for Information Systems, 8(11), 546-568.
Boland, R. and Collopy, F. (2004). Managing as Designing. Stanford University Press.
Howlett, M.P. and Mukherjee, I. (2018). Routledge Handbook of Policy Design. Routledge.