In my previous post on metaphor, I explored how metaphors aren't just decorative language but constitutive of thought itself. Lakoff and Johnson showed that we think through metaphors; Schön demonstrated how generative metaphors set problems rather than merely describing them. The implication: when different people use different metaphors to understand the same phenomenon, they may be constructing different realities while believing they're discussing the same thing.
This has become acutely relevant at SCÖ, where I'm meant to be exploring "federated learning" and "data science" for vocational rehabilitation. What strikes me is how much of this work involves not technical systems but language - the metaphors through which people imagine what these technologies might do.
A recent paper by Murray-Rust, Nicenboim and Lockton (2022) on "Metaphors for designers working with AI" has helped me think about this more carefully. Their argument is that metaphors don't just describe AI systems - they shape what designers and stakeholders think is possible. The language we use constrains and enables what can be imagined.
This matters for my situation. If "federated learning" is framed through certain metaphors, those metaphors will shape what people expect from it - and from me. Understanding the metaphorical landscape might be as important as understanding the technical possibilities.
"AI" as Abstract Signifier
Before examining specific metaphors, it's worth pausing on the term "AI" itself. It functions less as a technical description than as what I'll call an abstract signifier - borrowing from Laclau's (1996) concept of the empty signifier, a term capacious enough to absorb radically different meanings depending on who's using it. Laclau argued that certain signifiers become productive precisely through their emptiness: they hold coalitions together by allowing different groups to invest incompatible demands in the same word. Lévi-Strauss's earlier signifiant flottant - the floating signifier - captures a related idea: a signifier whose meaning is indeterminate and thus available for projection.
When a project manager says "AI", they may be imagining automated decision-making that reduces workload. When a data scientist says "AI", they may be thinking about specific model architectures and training procedures. When a politician says "AI", they may be invoking modernisation, innovation, competitive advantage. When a rehabilitation client hears "AI", they may fear being reduced to a number, judged by an algorithm they can't question.
Same term. Different source domains. Different mappings. Different realities constructed.
This is the pseudo-understanding problem I discussed in my earlier post - the illusion of shared meaning where different conceptual structures coexist. "AI" is a particularly potent example because the term is simultaneously technical (it refers to specific computational approaches) and aspirational (it evokes intelligence, autonomy, capability). The gap between these registers creates space for projection. People fill "AI" with their hopes and fears, their existing understandings of intelligence and automation.
Murray-Rust and colleagues don't use the term "abstract signifier", but their analysis points in this direction. The metaphors circulating around AI don't describe a stable technical reality; they construct different possible realities, centring different concerns, enabling different conversations.
Metaphors as Material Practice
Murray-Rust and colleagues draw on Philip Agre's insight that artificial intelligence "has always been a material practice... that relies on metaphors to construct the links between human thought, mathematical properties and the exigencies of technological possibility" (Agre, 1997, cited in Murray-Rust et al., 2022). The metaphors aren't decorative - they're constitutive of the practice itself.
This echoes Donna Haraway's provocation: "It matters what matters we use to think other matters with; it matters what stories we tell to tell other stories with" (Haraway, 2016, cited in Murray-Rust et al., 2022). The tools we use in discourse don't just represent reality; they reconfigure it, constraining and enabling what can be said and thought.
For designers, this means paying attention to the metaphors circulating in a domain - noticing which aspects of a technology they illuminate and which they hide. Metaphors "both illuminate and hide, simplifying and connecting to existing knowledge, centring particular ideas, marginalising others, and shaping fields of practice" (Murray-Rust et al., 2022).
This connects directly to Schön's analysis of generative metaphor. Just as "blight" versus "natural community" generates different problem-settings for urban housing, different metaphors for AI generate different problem-settings for technology design. The choice of metaphor isn't neutral; it carries normative implications.
Troublesome Metaphors in Machine Learning
Murray-Rust and colleagues identify several common metaphors in machine learning that cause trouble - not because they're wrong, but because they bring unwarranted associations, suggest impossible goals, or frame problems in ways that aren't generative of solutions.
Training and Learning
One of the most pervasive metaphors is that models "learn". While models certainly change their behaviour in response to data, the learning metaphor "brings with it the possibility that they learn in a similar way to humans" (Murray-Rust et al., 2022). This makes it easy to forget that impressive surface performance doesn't imply generalisation, reasoning, or common sense.
The authors suggest that "fitting" is a cleaner description - it captures the idea that a model is being fitted to particular data, rather than learning a general principle. It also "highlights the idea that there is something that is being fitted, inviting questioning of what the process is, rather than the somewhat magical property of learning".
If we say a model "learns", we might expect it to learn the way a human rehabilitation caseworker learns - accumulating wisdom, developing judgement, understanding context. If we say a model "fits" data, we're more likely to ask: what data? Whose data? What patterns is it fitting to?
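The fitting framing is not just a nicer word; it is literally the verb that common ML tooling uses. A minimal sketch, using scikit-learn purely as my own illustration (the data is invented and has nothing to do with SCÖ's systems):

```python
# In common ML libraries the operative verb is literally "fit": a model is
# fitted to particular examples, not taught a general principle.
import numpy as np
from sklearn.linear_model import LinearRegression

# The data the model is fitted to - the first thing the "fitting" framing
# invites us to question: what data? whose data? collected how?
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # e.g. hours of support received (invented)
y = np.array([2.1, 3.9, 6.2, 7.8])           # e.g. some outcome measure (invented)

model = LinearRegression()
model.fit(X, y)                              # the model is fitted to these four examples

# What it "knows" is just the pattern in those four rows
print(model.coef_, model.intercept_)
print(model.predict(np.array([[5.0]])))      # extrapolation beyond the fitted data
```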
Explanation
The term "explanation" in Explainable AI (XAI) is "another almost unnoticed metaphor". There's a gap between computational artefacts that provide information about model operation and the social processes by which humans actually construct understanding. An explanation, outside formal settings, "is a process, that unfolds through engagement" (Murray-Rust et al., 2022). The model doesn't explain itself the way a colleague would explain their reasoning.
Bias
"Bias" has multiple technical meanings within machine learning - from neuron parameters to data distributions to labelling practices. But the troublesome implication is that talking about bias "raises the possibility of an unbiased model. This is deep conceptual metaphor, that brings in the possibility of a universal model, devoid of context, somehow pure" (Murray-Rust et al., 2022).
This gets in the way of the more useful task of understanding which biases are useful, which are harmful, which are intended, which are accidental. "A better metaphor would maintain the idea that biases are always relative to something, and that something needs articulation".
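A small illustration of why the word carries so many senses: two of the technical meanings of "bias" side by side, neither of which is the everyday one. The numbers are invented:

```python
# Two technical senses of "bias", to show they are not the everyday sense.
import numpy as np

# (1) Bias as a model parameter: the intercept b in y = w.x + b
w, b = np.array([0.7, -0.3]), 1.5
x = np.array([2.0, 1.0])
print("prediction:", w @ x + b)

# (2) Bias as a property of data: a skewed label distribution
labels = np.array(["approved"] * 90 + ["rejected"] * 10)
values, counts = np.unique(labels, return_counts=True)
print(dict(zip(values, counts / counts.sum())))

# Neither sense licenses talk of an "unbiased model" in the abstract;
# any claim of bias is relative to some reference, and that reference
# is what needs articulating.
```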
Black Boxes
The "black box" originated in electronic circuit theory - a component known by its operation rather than its materiality. In cybernetics, it offered an experimental stance: what can we discover about a system's properties through inputs and outputs? But contemporary uses often convey "a sense of powerlessness, both to understanding mechanism and to making sense of output".
The metaphor implies that if we could just open the sealed box, we'd understand what's been hidden. This "diverts attention from asking how this box came to be" - the data, the decisions, the power relations that shaped its construction.
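The original cybernetic stance is easy to make concrete: treat the system as known only through its input-output behaviour. A toy sketch, in which the "opaque system" is hypothetical:

```python
# The cybernetic stance in miniature: a system known only through probing.
def opaque_system(x: float) -> float:
    return 2.0 * x + 1.0 if x < 5 else 42.0   # internals hidden from the probe

def probe(system, inputs):
    """Record what the system does, without opening it up."""
    return [(x, system(x)) for x in inputs]

observations = probe(opaque_system, [0, 1, 2, 3, 4, 5, 6])
print(observations)

# The probe tells us what the box does over these inputs. It tells us nothing
# about how the box came to be - the data, decisions and power relations
# behind its construction.
```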
What Makes a Good Metaphor?
Given these problems, what qualities should designers look for in metaphors for AI? Murray-Rust and colleagues suggest several criteria:
Know that it is a metaphor. Many terms present as accurate descriptions when they're actually metaphorical. Simply noticing this "creates space for critical engagement, a separation between map and territory".
Understand the bounds. Metaphors hold more strongly in some areas than others. Where does the analogy break down? What aspects does it illuminate well, and where does it mislead?
Ask what it centres and marginalises. Casting an algorithm as a "blank slate" centres user responsibility; casting it as an "autonomous agent" implies lack of user control. Every metaphor makes some things visible and others invisible.
Evaluate its usefulness. Does the metaphor support exploration? Does it provoke critical thinking or encode existing assumptions? Does it open up possibilities or close them down?
These criteria echo Schön's call to bring generative metaphors to "reflective and critical awareness". The task isn't to eliminate metaphor - that's impossible, since we think through metaphors - but to engage with metaphors consciously rather than unconsciously.
Alternative Metaphors
The paper offers several metaphors that illuminate different aspects of AI systems:
Models as Collections of Examples
Rather than a magic black box, think of a model as "a collection of examples, whether of images, sounds, decisions or sentences". This captures the idea that a model has to represent its training data somewhere. It immediately raises questions: what examples? Whose examples? What happens when something new appears?
This metaphor is "extremely open to illustration" - artists like Memo Akten and Anna Ridler have made work that shows models reconstructing the world from what they have seen, or that foregrounds the labour practices behind data collection.
For designers, this "provides a way to think about AI during the design process that sidesteps the idea of a magic bullet or an unknowable superintelligence", instead framing models as sets of examples with properties and limitations - much like other design models such as personas or prototypes.
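One family of models makes the metaphor literal: a nearest-neighbour classifier simply stores its training examples and answers by pointing at them. A toy sketch with invented data, to make the "what examples? whose examples?" questions concrete:

```python
# A nearest-neighbour model *is* its stored training examples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

examples = np.array([[1, 1], [2, 1], [8, 9], [9, 8]])   # whose examples?
labels = np.array(["A", "A", "B", "B"])                  # labelled by whom?

model = KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

print(model.predict([[1.5, 1.0]]))   # answered by pointing at a stored example
print(model.predict([[50, 50]]))     # something new: still answered by the
                                     # nearest stored example, however far away
```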
Implications for SCÖ
Reading this paper alongside my experience at SCÖ, I notice several things.
First, the language around "data science" and "federated learning" in this project is heavily metaphorical. FL, insofar as it is understood at all, is imagined as enabling organisations to "benefit from each other's data" without sharing it - as if data were a liquid that could flow, or a resource that could be extracted. These metaphors shape expectations about what FL can deliver, expectations worth checking against what FL mechanically does (see the sketch after these observations).
Second, the "magic bullet" framing is prevalent. There's an assumption that technology - the right technology - will solve problems that aren't actually technical. This connects to Wastell's concept of technomagic that I've been thinking about: technology as a magical solution that transforms situations without requiring the hard work of understanding preconditions.
But the sacred concepts framework I have been developing alongside these posts suggests a more uncomfortable possibility: the "magic bullet" framing might not be a cognitive error to be corrected. It might serve a constitutive function - performing collective aspiration, constituting the project's shared identity, holding together a coalition that would fracture if forced to specify what "data science" actually means in this context. If so, replacing "bad" metaphors with "better" ones is not straightforwardly helpful. The metaphors may be doing work that accurate description cannot.
Third, many of the metaphors in use are first-order metaphors - they describe what the technology does in terms that assume a single, universal understanding. What's missing are second-order metaphors that acknowledge different stakeholders might understand these technologies very differently.
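To ground the first of these observations: what federated learning mechanically offers is the exchange of model updates rather than records. A minimal FedAvg-style sketch with invented data - an illustration of the general technique, not a description of any SCÖ system:

```python
# Federated averaging in miniature: clients train locally; only parameters move.
import numpy as np

rng = np.random.default_rng(1)

def local_update(global_weights, local_X, local_y, lr=0.1, steps=20):
    """One client's training on its own data; only the weights leave the client."""
    w = global_weights.copy()
    for _ in range(steps):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

# Two organisations' private datasets - these never leave their owners
X_a, y_a = rng.normal(size=(50, 3)), rng.normal(size=50)
X_b, y_b = rng.normal(size=(80, 3)), rng.normal(size=80)

global_w = np.zeros(3)
for _ in range(5):
    w_a = local_update(global_w, X_a, y_a)
    w_b = local_update(global_w, X_b, y_b)
    # The server averages parameters, weighted by dataset size
    global_w = (len(y_a) * w_a + len(y_b) * w_b) / (len(y_a) + len(y_b))

print("shared model parameters:", global_w)
```

Even this sketch shows how much the "benefit from each other's data" metaphor leaves out: what is shared is a compromise model, and everything depends on whether the organisations' data are compatible enough for that compromise to be worth having.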
If I'm to do useful work here, part of my contribution might be surfacing and questioning the metaphors in play - helping people notice that "learning", "intelligence", and "data science" are metaphors, not descriptions. That noticing might create space for more productive conversations about what's actually possible.
Whether the organisation wants that kind of contribution is another question.
Beyond Technology
This analysis has focused on metaphors for AI - how language shapes what we think technology can do. But the same logic applies more broadly.
If metaphors structure our understanding of technologies, they also structure our understanding of organisations - what they are, how they work, what's possible within them. Gareth Morgan's work on "images of organisation" suggests that the same institution can be understood as a machine (focus on efficiency), an organism (focus on adaptation), a brain (focus on learning), a political system (focus on power), or a psychic prison (focus on how we trap ourselves in our own constructions). Different metaphors generate different possibilities for action.
And if metaphors structure understanding of organisations, they also structure understanding of roles - what it means to be a designer, a manager, a caseworker, a client. When stakeholders hold different metaphors for what "service design" is and does, they hold different expectations. The same person can be seen as a facilitator, a visualiser, an expert, or - more troublingly - a miracle worker expected to transform situations through mysterious powers.
These extensions are for future posts. But it's worth noting that metaphor analysis doesn't stop at technology. The tools we use to think about AI are continuous with the tools we use to think about institutions and roles. Understanding the metaphorical landscape means understanding not just how people imagine technology, but how they imagine the contexts in which technology is embedded.
References
Agre, P. (1997). Computation and Human Experience. Cambridge University Press.
Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.
Laclau, E. (1996). Emancipation(s). Verso.
Lakoff, G. & Johnson, M. (2003). Metaphors We Live By (2nd ed.). University of Chicago Press.
Lévi-Strauss, C. (1950). Introduction to the Work of Marcel Mauss. In M. Mauss, Sociologie et anthropologie. Presses Universitaires de France.
Morgan, G. (2006). Images of Organization (Updated ed.). Sage Publications.
Murray-Rust, D., Nicenboim, I., & Lockton, D. (2022). Metaphors for designers working with AI. Proceedings of DRS2022. https://doi.org/10.21606/drs.2022.667
Schön, D. A. (1993). Generative metaphor: A perspective on problem-setting in social policy. In A. Ortony (Ed.), Metaphor and Thought (2nd ed., pp. 137-163). Cambridge University Press.
Stross, C. (2017). Dude, you broke the future! Talk at 34C3.
Wastell, D. (2011). Managers as Designers in the Public Services: Beyond Technomagic. Triarchy Press.
Revision Note: This post was written in early September 2022, building on the foundational discussion of metaphor theory in the previous post. The Murray-Rust paper provided a concrete application of Lakoff & Johnson and Schön to the AI domain - exactly what I needed to think through the dynamics at SCÖ.
The "abstract signifier" framing - the idea that "AI" is a term capacious enough to absorb radically different meanings - would prove central to later work. In the NORDES paper with Ana Kustrak, we would make a parallel argument about "service designer" as an abstract signifier onto which different stakeholders project different metaphoric understandings.
The final section, gesturing toward metaphors for organisations and roles, was added in revision to connect this post to the series that developed. At the time of writing, I hadn't yet fully articulated how Morgan's organisational metaphors or the "miracle worker" dynamic would become central to my analysis. But the seeds were there.