From Deficit Models to Sense-Making

This post draws on an interpretivist analysis of public sector dashboard strategy, treating dashboards as socio-technical systems embedded in organisational and political contexts rather than neutral information tools.

Transparency as policy doctrine

Before examining how public sector performance dashboards work - and fail to work - it is necessary to understand why they exist at all. These dashboards are not simply technical tools; they are the contemporary expression of a policy doctrine with roots stretching back to the Enlightenment: the belief that making information visible to the public eye will improve governance.

The quasi-religious status of transparency

Christopher Hood, in his historical analysis of transparency as a governance doctrine, observes that transparency has attained "quasi-religious significance in debate over governance and institutional design" (Hood, 2006, p. 3). The term pervades mission statements, reform documents, and international governance standards; one might almost say that "more-transparent-than-thou" has become the secular equivalent of "holier than thou" in modern debates over organisation and governance. Yet, like many quasi-religious notions, "transparency is more often preached than practised, more often invoked than defined, and indeed might ironically be said to be mystic in essence" (Hood, 2006, p. 4).

This quasi-religious quality - transparency invoked for its legitimating function rather than its empirical effects - connects to patterns examined in earlier posts. Matthews's framework of sacred service design identifies how public sector design practices function as Durkheimian rituals sustaining collective solidarity through symbolic performance; Meyer and Rowan's concept of "rationalised myths", explored in Performance and Substance, describes exactly this dynamic of ceremonial adoption without functional effectiveness.

From Bentham to Brandeis

The word "transparency" derives from Latin roots meaning "to show through" and entered English usage for governance in the late eighteenth century. Jeremy Bentham appears to have been the first to use it in its modern governance-related sense, declaring: "I do really take it for an indisputable truth, and a truth that is one of the corner-stones of political science - the more strictly we are watched, the better we behave" (Bentham, [1790s] 2001, p. 277). This Benthamite principle - that visibility produces virtue - remains the foundational assumption of contemporary transparency policy, including NHS performance dashboards.

The transparency doctrine developed through several distinct strains that Hood identifies in pre-twentieth-century thought. The first is rule-governed administration: the classical ideal of government operating according to fixed, predictable, published rules rather than arbitrary discretion. The second is candid social communication: Rousseau's vision of a transparent society serving as a mechanism for avoiding destabilising intrigues and cabals. The third is making society knowable: the Enlightenment project of rendering the social world legible through methods analogous to natural science - what Foucault later termed "governmentality" and James Scott called "seeing like a state".

These strains converged in Bentham's proposals for public management, where the principle of "transparent management or publicity" was always central. Publication of public accounts, fees for office, and exposure of expenditure to general scrutiny would, Bentham argued, harness even base human motivations for public benefit: "the worst principles have their use as well as the best; envy, hatred, malice, perform the task of public spirit" (Bentham, [1802] 1931, p. 411).

The twentieth century extended these doctrines into specific institutional forms: freedom of information legislation (beginning with Sweden's 1766 Freedom of the Press Act, but proliferating after the US Freedom of Information Act of 1966); corporate disclosure requirements (accelerating after each major scandal from 1929 to Enron); and - most relevant here - public reporting of healthcare performance data.

The healthcare transparency movement

The specific application of transparency doctrine to healthcare quality emerged in the 1980s and accelerated through the 1990s. The US Health Care Financing Administration began publishing hospital-specific mortality rates in 1986; New York State initiated its Cardiac Surgery Reporting System in 1989, publishing risk-adjusted mortality rates for individual surgeons and hospitals. The UK followed with the NHS Performance Assessment Framework (1999), star ratings (2001-2005), and subsequently Care Quality Commission ratings, NHS Choices, and the contemporary ecosystem of performance dashboards.

The theory of change embedded in these initiatives follows what Louis Brandeis articulated in his 1914 book Other People's Money: "Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman" (Brandeis, 1914, p. 92). Though Brandeis was writing about financial transparency, his aphorism was subsequently applied far beyond its original context. Public disclosure of performance information, the theory holds, creates accountability pressure through two pathways: a selection pathway, where consumers use performance information to choose higher-quality providers; and a change pathway, where providers use performance information to identify and address quality problems.

This theory has been subjected to substantial empirical investigation. The RAND Corporation's systematic review (Shekelle et al., 2008), commissioned by the Health Foundation, synthesised evidence from 50 studies of public reporting effects. Their conclusion was notably qualified: the empirical literature on using publicly-reported performance data to improve health outcomes remained "scant", with "limited assessments of their usefulness to improve patient safety and patient-centredness" (Shekelle et al., 2008). Most studies focused on the same few reporting systems, and results were mixed.

Hibbard, Stockard and Tusler's (2003) controlled experiment in Wisconsin provided more direct evidence. Hospitals subject to public reporting showed significantly more quality improvement activity than those receiving only private feedback - but the mechanism was telling. Hospitals reported that "concern for public image was a key motivator for their quality improvement efforts" (Hibbard et al., 2003); public reporting affected hospitals' sense of their public image but not their market share. This suggests that the reputation pathway, not the selection pathway, drives improvement - a distinction with profound implications for dashboard design.

NHS dashboards as the latest incarnation

The current ecosystem of public sector performance dashboards - centralised data platforms, national oversight frameworks, and transparency publication programmes - represents the most recent incarnation of this transparency policy tradition. These initiatives inherit assumptions about how visibility produces accountability that have been embedded in healthcare governance for four decades.

Yet the evidence reviewed here suggests these assumptions require fundamental revision. The Benthamite principle that "the more strictly we are watched, the better we behave" appears, as Hood notes, to be "only a half-truth" (Hood, 2006, p. xi). Under certain conditions, as Andrea Prat demonstrates using modern agency theory, "the more strictly we are watched the worse we are likely to behave" (Hood, 2006, p. xi). Gaming, risk selection, and metric fixation are not implementation failures but predictable responses to transparency regimes.

Moreover, the transparency doctrine embeds what this post calls a deficit model of data comprehension: the assumption that users lack information, dashboards provide it, and deficits are filled. This model treats data as something that can be "transmitted" from source to user through the dashboard "channel". The argument that follows is that this model fundamentally misunderstands how performance data becomes meaningful - and that this misunderstanding has practical consequences for performance dashboard design.

Understanding performance dashboards as policy instruments in a longer transparency tradition - rather than as neutral technical tools - is essential for understanding both their appeal and their limitations. The critique that follows is not a rejection of transparency as a value, but an interrogation of the specific mechanisms through which transparency is assumed to work and the design implications of those assumptions.

The deficit model of data comprehension

Public sector dashboard strategy has typically proceeded on what might be called a deficit model: users lack performance information, dashboards provide it, deficits are filled. If executives lack operational insight, give them real-time dashboards. If the public lacks transparency, publish the data. If regional teams lack benchmarks, display comparative league tables. The logic is intuitive and rarely questioned.

This deficit model treats data comprehension as a pipeline problem - what Reddy (1979) calls the "conduit metaphor" for communication. In this view, meaning exists in the data, waiting to be transmitted; the dashboard is a channel through which information flows; users receive what is sent. The only barriers are access (solved by publication) and presentation (solved by good design). Once these barriers are removed, understanding follows.

The model fails because meaning is not transmitted but constructed - a claim that Krippendorff places at the centre of his trajectory of artificiality, where design's proper concern shifts from the form of objects to the meanings stakeholders construct around them. Users do not receive performance data; they interpret it through frames shaped by their roles, capabilities, prior knowledge, and organisational context. There is also a deeper question - not addressed in this post but developed in the Boulding-Armstrong comparison - about why the conduit model persists despite its evident inadequacy. The psychodynamic tradition suggests that the deficit model is not merely a cognitive error but a defence: treating dashboards as information channels avoids confronting the possibility that the system does not want to know what the data would reveal.

Two senses of "frame" are operative here, both explored in Three Frames: Fillmore's semantic frames - the structured conceptual expectations that a term like "A&E performance" activates differently for a clinician, an analyst, and a journalist - and Goffman's social frames, which determine what kind of encounter this is (a governance review, a news story, an accountability exercise). Dorst's design frames address a different level of the problem: how the dashboard designers frame the problem they are solving, which is upstream of how users interpret the result.

Clarke's situational analysis adds a further dimension, insisting that the unit of analysis is not the individual's cognitive frame but the full situation - human, nonhuman, discursive, temporal - in which interpretation occurs. A dashboard metric is not a neutral datum but an event interpreted through whatever combination of semantic expectation, social situation, and material context the user is embedded in; the same data point registers differently depending on whether it is encountered as a state change, a movement in quality space, or a shortfall against a promise.

As I argue in the companion piece on how dashboards constitute publics, this is not a one-way process: users are themselves constituted as particular kinds of political subjects by the design choices embedded in the dashboard. A 4-hour A&E wait time means something different to a Trust CEO preparing for a board meeting, a regional analyst comparing peer performance, and a journalist seeking a story angle.

The actual challenge is not filling deficits but making sense - the interpretive, social, and material work through which performance data becomes meaningful in specific organisational contexts. This reframing has profound implications for dashboard design, because sense-making is not a passive reception of transmitted meaning but an active, situated, and often collaborative accomplishment.

The gap between the ideal and the actual

The deficit model persists partly because it is embedded in the workflow models that structure dashboard development. The academic literature on dashboard authoring presents these workflows as orderly stages (Kandel et al., 2012):

[Diagram: dashboard authoring workflow presented as sequential stages (Kandel et al., 2012)]

Such models serve heuristic purposes - they decompose complex processes into manageable analytical units - but their sequential logic embeds assumptions about what happens at the endpoint. The implicit theory: once data is "published", it is "received".

Gedenryd (1998), analysing similar gaps in design methodology, identifies a fundamental problem:

"The problem is ... a discrepancy between the received, theoretical views of how things ought to work, [or how they need to be simplified to work as discursive or political constructs] and how they have turned out to work in reality - a gap between the ideal and the actual".

This gap pervades NHS dashboard strategy - and connects to the broader distinction between planning and design developed elsewhere in this series: planning operates within a known state space, while design (or politics) constructs the state space in the first place. Dashboard workflow models assume the state space is given; the actual practice of data work involves constructing it.

The ideal is that executives access dashboards, interpret performance data, identify issues, and take action. The actual is that executives rely on intermediaries who translate dashboard outputs into board papers; they consume printed summaries in brief windows between meetings (user research in this programme consistently identified 5-10 minute consumption windows as typical); they trust verbal briefings over interactive exploration.

Kandel et al.'s (2012) landmark interview study with 35 enterprise analysts revealed that 80% of analyst time goes to data preparation rather than visualisation. One analyst captured the reality:

"I spend more than half of my time integrating, cleansing and transforming data without doing any actual analysis. Most of the time I'm lucky if I get to do any analysis. Most of the time once you transform the data you just do an average... the insights can be scarily obvious. It's fun when you get to do something somewhat analytical".

This observation is typically interpreted as an inefficiency that better tooling might address. But the dominance of data preparation work suggests something more fundamental: the raw materials of dashboard construction are not neutral givens but require substantial human intervention to become legible as "data" at all. The pipeline model assumes data flows; the reality is that data must be made to flow through considerable interpretive labour.

Data as intervention, not discovery

Muller et al.'s (2019) five-part taxonomy of data intervention provides theoretical machinery to understand why the deficit model fails. Their CHI paper, based on interviews with 21 data science professionals, proposes a "monotonic scale of human intervention with data, from least ('discovery') to greatest ('creation')" (Muller et al., 2019). The taxonomy moves from discovery - finding pre-existing, well-structured datasets (what Drucker (2011) calls "given" data) - through capture (selectively extracting data from larger pools, Drucker's "capta" - things taken rather than given), curation (organising data for specific audiences or analytical purposes), and design (actively shaping data structures to meet algorithmic requirements) to creation: generating data that did not previously exist, including ground truth labels.

[Diagram: Muller et al.'s (2019) scale of data intervention, from discovery to creation]

Drucker's distinction between "data" (the given) and "capta" (the taken) is foundational here: the very word "data" implies passive receipt of pre-existing facts, while "capta" acknowledges the active selection involved in any measurement. Muller et al. extend this insight to show that most data science work occurs in the upper registers - curation, design, and creation - rather than the passive discovery that pipeline models assume. One of Muller's informants stated bluntly: "I am the ground truth" (Muller et al., 2019). This admission reveals that even supposedly objective reference standards often require human construction.
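
As an illustrative sketch only (the encoding is mine, not Muller et al.'s), the taxonomy can be written down as an ordered scale, which makes the "monotonic" claim concrete: each category implies strictly more human intervention with the data than the one before it.

```python
from enum import IntEnum

class DataIntervention(IntEnum):
    """Muller et al.'s (2019) categories encoded as an ordered scale of
    increasing human intervention (illustrative encoding only)."""
    DISCOVERY = 1  # finding pre-existing, well-structured datasets
    CAPTURE = 2    # selectively extracting data from larger pools
    CURATION = 3   # organising data for specific audiences or purposes
    DESIGN = 4     # shaping data structures to meet algorithmic requirements
    CREATION = 5   # generating data that did not previously exist

# The ordering is monotonic: curation implies more intervention than capture.
assert DataIntervention.CURATION > DataIntervention.CAPTURE
```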

If data itself requires intervention to exist, the deficit model's assumption that meaning can be "transmitted" through dashboards becomes untenable. What is transmitted is not meaning but representations - numbers, charts, colours - that users must interpret through their own sense-making processes, mediated and shaped by their own political contexts. The dashboard does not fill a deficit; it creates new interpretive demands.

For public sector performance dashboards, this taxonomy exposes fundamental questions about what is being measured and by whom. Oversight scores are not discovered but designed - the methodology that weights and aggregates metrics reflects policy priorities, not natural categories. Performance benchmarks are not captured but curated - decisions about peer groupings, exclusions, and adjustments shape what counts as "comparable". Target trajectories are not given but created - they encode political commitments about what improvement rates are achievable or desirable.
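
A minimal sketch makes the "designed, not discovered" point concrete. The providers, metrics, and weights below are entirely hypothetical (nothing here is drawn from any real oversight framework); the point is that two equally defensible weighting schemes, applied to the same underlying data, produce different rankings - the composite score is an artefact of the aggregation design.

```python
# Hypothetical providers and normalised metric values (illustrative only).
providers = {
    "Provider A": {"waiting_times": 0.62, "safety": 0.91, "finance": 0.55},
    "Provider B": {"waiting_times": 0.81, "safety": 0.70, "finance": 0.78},
}

def composite_score(metrics, weights):
    """Aggregate normalised metrics into a single weighted score."""
    return sum(metrics[name] * weight for name, weight in weights.items())

# Two defensible weighting schemes encoding different policy priorities.
schemes = {
    "safety-led": {"waiting_times": 0.2, "safety": 0.6, "finance": 0.2},
    "access-led": {"waiting_times": 0.6, "safety": 0.2, "finance": 0.2},
}

for label, weights in schemes.items():
    ranking = sorted(providers,
                     key=lambda p: composite_score(providers[p], weights),
                     reverse=True)
    print(label, ranking)
# safety-led ranks Provider A first; access-led ranks Provider B first.
# The "score" is a designed artefact of the weighting choice, not a
# discovered fact about the providers.
```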

The mythology of neutral data

Muller et al. explicitly challenge what boyd and Crawford (2012) call the "mythology" of big data - the "aura of truth, objectivity, and accuracy" (boyd and Crawford, 2012) that surrounds computational analysis. Each step up the intervention taxonomy moves further from this mythology; as researchers transit through human-influenced interventions from discovery to curation to creation, they move farther from whatever objectivity the raw data might have possessed. This mythology proves "particularly powerful with regard to ground truth data" (Muller et al., 2019). When performance dashboards display oversight scores, users encounter them as facts about the world - but the scores are outcomes of methodological choices that could have been made differently.

The practical implication is that dashboards do not reveal reality; they construct particular versions of reality that serve particular purposes. This connects to Boland and Collopy's argument, explored in Representations: Product Management vs Service Design, that our vocabulary of representations is critical in determining how well or poorly we manage - the representational choices embedded in a dashboard determine what questions can be asked and what questions are structurally invisible.

Understanding whose purposes these representations serve requires examining the theories of change embedded in dashboard design - and recognising that the deficit model's apparent neutrality ("we just provide the data") obscures the political choices involved.

Dashboard user goals

Tory et al.'s (2021) interview study with 20 dashboard users and analysts provides the most empirically-grounded framework for understanding how dashboards are actually used. Their research revealed that dashboard users' data tasks went well beyond the "consumption" for which dashboards are typically designed:

"The reality is that dashboard users frequently need to enrich and shape their data, repurpose visualization content, and construct new artifacts and narratives, often in anticipation of conversations with others through and around data".

This finding challenges the implicit assumption in most dashboard design - that users passively receive information that designers have prepared. Instead, users actively work with, through, and around data in ways that current dashboards poorly support.

Two categories of goals

Tory et al. identify two high-level categories of dashboard user goals: conversation with data (information extraction) and conversation through and around data (social coordination).

[Diagram: Tory et al.'s (2021) two categories of dashboard user goals]

Conversation with data (information extraction):

| Goal | Definition | Frequency |
| --- | --- | --- |
| Summarise | Obtain a summary or overview of the data | 18/20 |
| Monitor | Stay up to date with key metrics and performance indicators | 18/20 |
| Explain | Find the underlying cause behind an observation | 14/20 |
| Compare | Compare two or more entities | 12/20 |
| Predict | Predict an outcome under different scenarios | 12/20 |
| Lookup | Look up a fact | 11/20 |
| Experiment | Observe the effect of an event or deliberate change | 8/20 |
| Find Anomaly | Seek or identify anything out of the ordinary | 5/20 |
| Audit | Record-by-record inspection and validation | 4/20 |

Conversation through and around data (social coordination):

| Goal | Definition | Frequency |
| --- | --- | --- |
| Discuss Data | Converse about data, insights, and actions to take | 17/20 |
| Circulate | Disseminate among organisational actors | 14/20 |
| Discuss Tools | Converse about creating or changing dashboards | 13/20 |
| Document | Record decisions and history | 3/20 |

The distinction between these categories is crucial. The first involves extracting information from data; the second involves using data as a medium for organisational communication. Many participants served as "data intermediaries", making sense of their data and then repackaging and sharing a summary for leadership - a pattern directly observable in NHS performance analyst roles, where analysts bridge dashboard outputs and governance conversations.

The breakdown framework

When dashboards fail to support user goals, Tory et al. found that users employ workaround strategies: switching tools (moving data to a spreadsheet or other tool), requesting service (asking an engineer or analyst to complete the task), manual workarounds (awkward manual processes), or simply giving up and abandoning the task. The most common workaround was the "data dump" - extracting data from dashboards for ad hoc analysis in spreadsheets:

"By far the most common tool switch was the 'data dump': extracting data out of a dashboard for ad hoc analysis in a spreadsheet. This practice was so common that several participants described dashboards deliberately set up as data dumps, often apologizing for 'misusing' their dashboarding tools in this way".

This finding has significant implications for dashboard strategy: if users routinely extract data to work elsewhere, the dashboard is failing as a self-contained tool. Rather than treating this as user deficiency, it should be understood as structural feedback about dashboard limitations.

Applying the goal framework to a national oversight dashboard

Systematic analysis of a national oversight dashboard - one designed to provide centralised performance visibility across healthcare providers - reveals a characteristic architecture: an overview providing provider-level summary with metric cards and performance area indicators; a league table ranking providers; a metrics table showing provider-level data; a geographic visualisation colour-coded by metric score; diverging bar charts with selected providers highlighted; provider average scores with confidence intervals; and metric metadata.

Goal-component mapping

Mapping the dashboard's components to Tory et al.'s goal framework reveals a distinctive profile:

| Goal | Support Level | Typical Components |
| --- | --- | --- |
| Summarise | Strong | Overview cards, geographic map, performance area indicators |
| Monitor | Strong | League table, RAG indicators, colour-coded blocks |
| Compare | Strong | League table, provider charts, geographic map, metrics table |
| Lookup | Strong | Metrics table, provider selection |
| Explain | Moderate | Contextual charts, uncertainty visualisation |
| Audit | Moderate | Metric metadata, confidence intervals |
| Find Anomaly | Weak | RAG indicators (threshold-based only) |
| Predict | None | No forecasting capability |
| Experiment | None | No what-if scenarios |
| Discuss Data | Weak | No annotation or commenting |
| Circulate | Weak | Limited sharing features |
| Document | Weak | Static documentation only |
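
The same profile can be expressed as a simple audit structure. The sketch below (an illustrative encoding of the table above, not a feature of any real tool) flags the goals whose support is weak or absent - exactly the goals that intermediaries end up covering by hand.

```python
# Support levels transcribed from the goal-component mapping above.
goal_support = {
    "Summarise": "strong", "Monitor": "strong", "Compare": "strong",
    "Lookup": "strong", "Explain": "moderate", "Audit": "moderate",
    "Find Anomaly": "weak", "Predict": "none", "Experiment": "none",
    "Discuss Data": "weak", "Circulate": "weak", "Document": "weak",
}

def unsupported_goals(support, levels=("weak", "none")):
    """Return the goals whose support falls within the given levels."""
    return [goal for goal, level in support.items() if level in levels]

print(unsupported_goals(goal_support))
# ['Find Anomaly', 'Predict', 'Experiment', 'Discuss Data', 'Circulate', 'Document']
# Each of these is a predictable breakdown point where users switch tools,
# request service, or give up (Tory et al., 2021).
```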

Implications

This profile reveals a dashboard designed for accountability, not analysis. There is strong support for accountability goals: Summarise, Monitor, and Compare enable quick status assessment; RAG indicators support exception-based governance; and league tables facilitate provider comparison. Support for analytical goals is weak: no capability to predict or forecast, no what-if experimentation, and anomaly detection limited to pre-defined thresholds.

Support for communicative goals is minimal: no annotation or commenting functionality, limited sharing and circulation features, and no collaborative interpretation tools. This profile is appropriate for a transparency-oriented dashboard but creates problems when users attempt to use it for analytical work. The emergence of intermediary "oversight lead" roles is predictable from this analysis: someone must perform the Explain, Predict, and Circulate functions that the dashboard does not support.

Predicted breakdowns

Based on Tory et al.'s framework, users of such dashboards will predictably experience breakdown when attempting the tasks outlined below.

| Scenario | Goal Blocked | Predicted Strategy |
| --- | --- | --- |
| Compare provider to custom peer group | Compare | Switch Tools (Excel) |
| Forecast future performance | Predict | Switch Tools (Excel) |
| Add local context/narrative | Document | Switch Tools (Word/PPT) |
| Share specific view with colleague | Circulate | Screenshot |
| Understand why metric changed | Explain | Request Service (ask analyst) |
| Combine with local data | Augment | Switch Tools |

Each of these breakdowns represents a moment where the dashboard's theory of change fails - where the implicit assumption that dashboard access enables self-service proves false. The competing theories of change that emerge from this analysis, and the empirical evidence on the sunlight hypothesis, are developed in the posts that follow.

References

Bentham, J. ([1790s] 2001). Writings on the Poor Laws, Vol. 1 (M. Quinn, Ed.). Clarendon Press.

Bentham, J. ([1802] 1931). The Theory of Legislation (C. K. Ogden, Ed.). Kegan Paul, Trench, Trubner & Co.

boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662-679.

Brandeis, L. D. (1914). Other People's Money and How the Bankers Use It. Frederick A. Stokes Company.

Drucker, J. (2011). Humanities approaches to graphical display. Digital Humanities Quarterly, 5(1).

Gedenryd, H. (1998). How designers work: Making sense of authentic cognitive activities. Lund University Cognitive Studies 75.

Hibbard, J. H., Stockard, J., & Tusler, M. (2003). Does publicizing hospital performance stimulate quality improvement efforts? Health Affairs, 22(2), 84-94.

Hood, C. (2006). Transparency in historical perspective. In C. Hood & D. Heald (Eds.), Transparency: The key to better governance? (pp. 3-23). Oxford University Press.

Kandel, S., Paepcke, A., Hellerstein, J., & Heer, J. (2012). Enterprise data analysis and visualization: An interview study. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2917-2926.

Muller, M., Lange, I., Wang, D., Piorkowski, D., Tsay, J., Liao, Q. V., Dugan, C., & Erickson, T. (2019). How data science workers work with data: Discovery, capture, curation, design, creation. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Paper 126). ACM.

Reddy, M. J. (1979). The conduit metaphor: A case of frame conflict in our language about language. In A. Ortony (Ed.), Metaphor and thought (pp. 284-324). Cambridge University Press.

Shekelle, P. G., Lim, Y.-W., Mattke, S., & Damberg, C. (2008). Does public release of performance results improve quality of care? A systematic review. The Health Foundation.

Tory, M., Bartram, L., Fiore-Gartland, B., & Crisan, A. (2021). Finding their data voice: Practices and challenges of dashboard users. IEEE Computer Graphics and Applications, 41(6), 5-14.