The previous posts in this series have examined dashboards as policy instruments, explored how they constitute publics, and traced the competing theories of change that underpin NHS transparency work. This post turns from theory to structure: how do you actually organise a performance dashboard that must serve audiences ranging from concerned citizens to strategy analysts, without overwhelming the former or patronising the latter?
The answer we arrived at - progressive disclosure - is a familiar interaction design principle. But applying it to a politically charged, multi-audience NHS performance dashboard raises structural questions that go beyond standard UX practice. This post documents the information architecture of the NOF dashboard as it stands in February 2026, critiques the tab ordering within the Compare view, and presents findings from nine user research sessions that tested whether the progressive disclosure logic holds up in practice.
The design premise
The NOF dashboard is built on a "concerned citizen" starting point. The design philosophy recognises that dashboards are not neutral data displays but active policy instruments that structure how different stakeholders relate to NHS performance data. The information architecture follows a principle of progressive disclosure: the leftmost tabs present the broadest, most accessible views (suited to citizens and general audiences), while tabs further to the right expose increasingly detailed analytical capabilities (suited to analysts, researchers, and specialist users).
This approach draws on earlier user research persona validation work, which confirmed that NHS executives rarely interact with dashboards directly, instead relying on intermediaries (NOF Leads, analysts) to interpret data on their behalf. The personas cluster around four fundamental citizen questions:
- "Can I get care when I need it?" (Finding Care)
- "Will I be safe?" (Trust Quality)
- "Is my community getting healthier?" (Population Health)
- "Can the NHS keep going?" (System Sustainability)
The tab structure moves from broad orientation (Overview) through aggregate patterns (Summary) and ranked comparison (League Table) to detailed analytical reflection (Compare), with Reference material available throughout.
Provider type as scoping mechanism
Before any tab content, users select a provider type, which scopes all subsequent views:
| Provider Type | Description | Example Organisations |
|---|---|---|
| Acute | Hospital trusts providing acute services | University Hospitals, Royal Free, etc. |
| Ambulance | Emergency ambulance service providers | London Ambulance, West Midlands, etc. |
| Non-Acute | Community, mental health, and integrated providers | Oxleas, Berkshire Healthcare, etc. |
The default landing is Acute > Overview, reflecting that acute trust metrics (A&E waits, elective care, cancer treatment) carry the highest public salience and media coverage.
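As a minimal sketch of how this scoping might be modelled - the type and names here are ours, not the production code - the provider type becomes a parameter that every subsequent view receives:

```typescript
// Illustrative sketch only: names are ours, not taken from the dashboard's code.
type ProviderType = "acute" | "ambulance" | "non-acute";

interface ViewScope {
  providerType: ProviderType;
  region?: string; // optional NHS region filter
  icb?: string;    // optional Integrated Care Board filter
}

// Default landing state: Acute > Overview, per the design decision above.
const defaultScope: ViewScope = { providerType: "acute" };
const defaultTab = "overview";
```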
The five tabs
Overview: "Where?"
The Overview is the front door - the view designed for the concerned citizen persona. It provides a geographic, at-a-glance picture of provider performance across England, answering: "How is my local area doing?"
An interactive map of England with providers plotted geographically, colour-coded by their performance segment. A collapsible sidebar allows filtering by NHS region and Integrated Care Board (ICB). Provider summary cards display headline scores.
| Goal | Support Level | How |
|---|---|---|
| Summarise | Strong | Geographic overview of national performance at a glance |
| Lookup | Strong | Find a specific provider by location or name |
| Monitor | Moderate | Colour-coded segments signal which providers are in difficulty |
| Compare | Weak | Spatial proximity enables informal geographic comparison |
This is the broadest, least detailed view - the entry point. It prioritises orientation and wayfinding over analytical depth. Users who want more detail are guided rightward to Summary, League Table, or Compare.
Summary: "What pattern?"
The Summary tab moves from geographic orientation to aggregate statistical analysis. It answers: "What does the overall picture look like across regions and domains?"
Segment distribution tables showing how many providers fall into each performance band. Regional breakdowns with embedded maps. Domain-level averages (Access, Effectiveness, Finance, Safety, Workforce). Analysis of segment changes between quarters - which providers have improved or deteriorated.
| Goal | Support Level | How |
|---|---|---|
| Summarise | Strong | Aggregate statistics across all providers |
| Monitor | Strong | Segment change analysis highlights movement |
| Compare | Moderate | Regional and domain breakdowns enable structural comparison |
| Find Anomaly | Moderate | Segment changes flag unexpected movement |
One step deeper than Overview - shifts from individual provider locations to aggregate patterns and distributions. Still primarily a reading/scanning view rather than an interactive analytical tool.
League Table: "Who is where?"
The League Table provides a ranked list of all providers, ordered by their NOF score. It directly addresses the accountability function of the framework, answering: "Where does each provider rank relative to peers?"
A sortable table of all providers ranked by average score, with columns for segment classification, overall rank, and key metrics. Region filtering allows scoping to a specific geography. Rows are colour-coded by performance segment. Clicking a provider drills through to the detailed Provider page.
| Goal | Support Level | How |
|---|---|---|
| Compare | Strong | Direct ranked comparison across all providers |
| Monitor | Strong | Threshold-based RAG status on every row |
| Lookup | Strong | Find a specific provider's rank and scores |
| Find Anomaly | Moderate | Colour-coded rows highlight outliers |
| Explain | Weak | Shows position but not causation |
The contextual research conducted in the early phases of this work extensively discusses the risks of league tables: single-score reductionism, gaming incentives, collaboration penalties, and data quality inequity across provider types. The design draws on Bevan and Hood's (2006) historical analysis of targets and gaming in NHS performance measurement, and the NHS Confederation and NHS Providers' (2025) four tests for league tables. The current implementation represents a careful balance between the transparency value of ranking and the known risks of crude comparison.
A significant step-up in analytical specificity from Summary. The League Table surfaces individual provider performance in a competitive frame. It creates the natural question "why is this provider ranked here?" - which the Compare tab is designed to answer.
Compare: "Why and how?"
The Compare tab is the analytical heart of the dashboard, designed for detailed, reflective analysis of NOF data. It supports the question: "How and why does performance vary, and what contextual factors might explain the differences?"
A unified comparison interface with multiple selectable views:
| View | What It Compares | Typical Use |
|---|---|---|
| By Metric | All providers on a single metric (bar chart) | "How does my trust compare on A&E 4-hour?" |
| Over Time | Quarter-over-quarter trends for selected providers | "Is performance improving or deteriorating?" |
| By Domain | Provider scores across all five performance domains | "Where are the strengths and weaknesses?" |
| Multiple Providers | Selected set of providers compared across metrics | "How do these three trusts compare side-by-side?" |
| Within Region | All providers within a selected region | "How do trusts in the North East compare?" |
| Within ICB | All providers within a selected ICB | "How do trusts in my local system compare?" |
| Between Regions | Two or more regions compared | "Regional inequality analysis" |
| Between ICBs | Two or more ICBs compared | "System-level comparison" |
| Goal | Support Level | How |
|---|---|---|
| Compare | Very Strong | Multiple comparison dimensions and views |
| Explain | Strong | Contextual views (domain, time, region) support causal reasoning |
| Summarise | Moderate | Each view provides focused summaries |
| Monitor | Moderate | Time-based views track trajectory |
| Find Anomaly | Moderate | Cross-metric comparison reveals inconsistencies |
| Audit | Moderate | Multiple views allow triangulation of performance claims |
This is the deepest level of analytical engagement available through the main navigation. The Compare tab assumes the user has already oriented themselves (Overview), understood the aggregate picture (Summary), and identified specific questions from the League Table. It provides the tools to interrogate those questions in detail.
The multiple sub-views within Compare represent a further layer of progressive disclosure within the tab itself - users select the analytical lens most relevant to their question rather than being overwhelmed by all comparison dimensions simultaneously.
Reference: "How is this measured?"
The Reference tab provides methodological transparency and contextual documentation, answering: "How is this data collected, scored, and what do the terms mean?"
Three sub-tabs: About the Framework (explanation of the NOF, its regulatory purpose, and how scores are calculated), Glossary (searchable definitions of terms), and Metric Metadata (detailed methodology for each metric - data sources, calculation methods, scoring thresholds).
Available at any point but positioned rightmost as supporting material rather than primary analysis. It serves a "reference shelf" function - always accessible but not the starting point for most users.
The progressive disclosure journey
Each tab assumes progressively more domain knowledge and analytical intent. The "concerned citizen" begins at Overview and may never leave it. The strategy analyst lives in Compare. The Reference tab serves everyone, but primarily those who need to defend or challenge the numbers.
Inside the Compare tab
The Compare tab uses the NHS Design System tabs pattern to present eight distinct analytical views. These are rendered as a horizontal tab strip within the page, with each view loading a dedicated panel component. The tab ordering is intended to follow its own internal progressive disclosure logic - from the most user-driven, bespoke comparisons on the left through to structural, system-level comparisons on the right - though, as the critique later in this post shows, that framing is shakier than it first appears.
Multiple Providers (leftmost tab)
Allows users to hand-pick individual providers from across any region or ICB to build a custom, bespoke comparison. Uses a searchable dropdown to add providers, then displays their metric scores side by side in a colour-coded heatmap table.
This is the most user-directed view. It assumes the user already knows which providers they want to compare - perhaps neighbouring trusts, organisations with similar characteristics, or a peer group they've identified from the League Table. Placing it first honours user agency: "I know what I want to compare, let me get on with it".
Selected providers are persisted in the URL query parameters, meaning the comparison can be bookmarked or shared with colleagues. This directly supports the Tory et al. (2021) "Circulate" goal - a user can construct a comparison, copy the URL, and email it to a board member or regional director.
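A minimal sketch of that mechanism, assuming a comma-separated `providers` query parameter (the real parameter name and provider-code format aren't documented here):

```typescript
// Hypothetical sketch: persist the selected providers in the URL so the
// comparison can be bookmarked or shared. Parameter name is an assumption.
function writeSelectionToUrl(providerCodes: string[]): void {
  const params = new URLSearchParams(window.location.search);
  params.set("providers", providerCodes.join(","));
  // Update the address bar without triggering a page reload.
  history.replaceState(null, "", `${window.location.pathname}?${params}`);
}

function readSelectionFromUrl(): string[] {
  const params = new URLSearchParams(window.location.search);
  return params.get("providers")?.split(",").filter(Boolean) ?? [];
}
```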
Over Time
Tracks how a provider's scores and segment classification have changed between quarterly assessment periods. Highlights which specific metrics drove any segment change.
Temporal comparison is the natural next question after "how do these providers compare right now?" - namely, "is it getting better or worse?" This view shifts the analytical frame from cross-sectional (comparing peers at a point in time) to longitudinal (tracking trajectory).
The quarter-over-quarter comparison table includes a mechanism for selecting which comparison quarter to use, and explicitly shows the delta (change) alongside absolute scores. This supports the management information function that NHS executives rely on for board reporting.
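The delta logic itself is simple; a sketch follows, with the data shape assumed for illustration:

```typescript
// Sketch of the quarter-over-quarter delta shown alongside absolute scores.
// The QuarterScore shape is an assumption, not the dashboard's data model.
interface QuarterScore {
  quarter: string; // e.g. "2025-Q3"
  score: number;
}

function quarterDelta(
  scoreHistory: QuarterScore[],
  currentQuarter: string,
  comparisonQuarter: string, // user-selected comparison quarter
): { current: number; comparison: number; delta: number } | undefined {
  const current = scoreHistory.find((s) => s.quarter === currentQuarter);
  const comparison = scoreHistory.find((s) => s.quarter === comparisonQuarter);
  if (!current || !comparison) return undefined;
  return {
    current: current.score,
    comparison: comparison.score,
    delta: current.score - comparison.score,
  };
}
```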
By Domain
Compares providers across the five NOF performance domains - Access to Services, Effectiveness and Experience, Finance and Productivity, Patient Safety, and People and Workforce. Reveals the shape of a provider's performance profile rather than a single aggregate score.
After comparing specific providers and tracking their trajectory, the natural analytical deepening is to ask "where specifically are the strengths and weaknesses?" The domain view answers this by decomposing the aggregate score into its structural components.
The domain breakdown directly addresses one of the most significant critiques of league tables - that a single composite score can mask important variation. A trust ranked mid-table overall might be excellent on patient safety but struggling on finance. This view makes that visible.
By Metric
Compares all providers against a single selected metric, showing the national distribution as a bar chart and highlighting where each provider sits relative to peers. Effectively answers: "on this specific measure, how does the full field look?"
This is the most granular level of comparison - drilling from domain-level patterns down to individual metric performance. The metric selector allows users to cycle through all available metrics. The bar chart format makes outliers immediately visible - both high and low performers stand out. This supports the framework's quartile-based segmentation approach, where relative position matters as much as absolute score.
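Roughly, quartile segmentation works as sketched below. This is a deliberate simplification - the real methodology involves per-metric scoring, weighting, and overrides that this ignores:

```typescript
// Simplified sketch of quartile-based segmentation: rank providers by score
// (lower is better in the NOF scheme) and assign segments 1-4 by quartile of
// relative position. Ignores weighting, overrides, and tie-handling.
function assignQuartileSegments(scores: Map<string, number>): Map<string, number> {
  const ranked = [...scores.entries()].sort((a, b) => a[1] - b[1]);
  const segments = new Map<string, number>();
  ranked.forEach(([provider], index) => {
    const quartile = Math.floor((index / ranked.length) * 4) + 1; // 1 (best) to 4
    segments.set(provider, quartile);
  });
  return segments;
}
```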
Within Region
Filters to providers within a single NHS England region and compares their performance side by side. Reduces the comparison set from the national picture to a regional peer group. The shift from metric-level analysis to geographic scoping represents a different analytical lens - moving from "what" to "where". Providers operate within regional systems and are often compared within those contexts by regional directors and local media.
Within ICB
Filters to providers within a single Integrated Care Board, providing the most local system-level comparison. ICBs are the operational unit for NHS service planning, so this view shows how providers within a single local system compare. A further geographic drill-down from regional to local system level.
Between Regions
Compares aggregate performance across NHS England regions to identify geographic patterns and structural inequalities. Shifts the unit of analysis from individual providers to regional systems. The final two views represent the most structural, system-level comparisons. They move beyond individual provider performance to ask questions about geographic equity and system-level variation.
Between ICBs (rightmost tab)
Compares aggregate performance across Integrated Care Boards. The most granular system-level structural comparison available. This is the most specialist view, of primary interest to national policy analysts examining whether integrated care is reducing variation. It represents the furthest point on the progressive disclosure spectrum within Compare.
The tab ordering problem
The eight Compare sub-views are currently ordered as: Multiple Providers, Over Time, By Domain, By Metric, Within Region, Within ICB, Between Regions, Between ICBs. An initial reading might suggest a "most user-directed → most structural" continuum, but this framing doesn't survive close scrutiny. There are at least three problems with it.
The continuum conflates two distinct dimensions. "User-directed" (the user chooses what to compare) and "structural" (the comparison is pre-defined by system geography) are not opposite ends of the same scale. By Metric, for instance, is highly user-directed (the user picks which metric) but also highly structural (the comparison set is the entire national field). Over Time is user-directed (the user picks the provider and quarters) but longitudinal rather than structural, so it fits neither pole. The single axis doesn't capture the actual variation.
The middle of the order is incoherent. By Domain and By Metric sit between the temporal view (Over Time) and the geographic views (Within Region, Within ICB), but there is no clear logic for why domain decomposition should precede metric drill-down, or why either should precede geographic scoping. A user asking "how does my region compare?" has no reason to pass through By Domain first.
Within and Between are different analytical operations that happen to share a geographic unit. Within Region (providers compared inside a region) and Between Regions (regions compared against each other) serve fundamentally different audiences and answer different questions. Placing them adjacently suggests they are a pair, which is helpful - but grouping all four geographic views together obscures the fact that "within" views are provider-level comparisons (like Multiple Providers, but with a geographic filter), while "between" views are aggregate-level comparisons (a different unit of analysis entirely).
Alternative groupings
Analysing the eight views more carefully, they differ along three independent dimensions:
| Dimension | Options |
|---|---|
| Unit of analysis | Individual providers vs. geographic aggregates (regions, ICBs) |
| Comparison scope | User-selected subset vs. pre-defined grouping vs. full national set |
| Analytical frame | Cross-sectional (point in time) vs. longitudinal (over time) vs. structural (decomposition) |
Mapping each view against these dimensions:
| View | Unit of Analysis | Comparison Scope | Analytical Frame |
|---|---|---|---|
| Multiple Providers | Individual providers | User-selected subset | Cross-sectional |
| Over Time | Individual providers | User-selected subset | Longitudinal |
| By Domain | Individual providers | Full national set | Structural decomposition |
| By Metric | Individual providers | Full national set | Cross-sectional |
| Within Region | Individual providers | Pre-defined geographic group | Cross-sectional |
| Within ICB | Individual providers | Pre-defined geographic group | Cross-sectional |
| Between Regions | Geographic aggregates | Full national set | Cross-sectional |
| Between ICBs | Geographic aggregates | Full national set | Cross-sectional |
This reveals a cleaner grouping based on unit of analysis and comparison scope.
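Written out as data (the field names and string values are ours), the classification makes each of the grouping options below a mechanical group-by over one field:

```typescript
// The three dimensions from the table above, encoded so that each grouping
// option is a group-by over one field. Names are ours, for illustration.
type Unit = "provider" | "geographic-aggregate";
type Scope = "user-selected" | "pre-defined-geography" | "national";
type Frame = "cross-sectional" | "longitudinal" | "structural";

const viewDimensions: Record<string, { unit: Unit; scope: Scope; frame: Frame }> = {
  "Multiple Providers": { unit: "provider", scope: "user-selected", frame: "cross-sectional" },
  "Over Time": { unit: "provider", scope: "user-selected", frame: "longitudinal" },
  "By Domain": { unit: "provider", scope: "national", frame: "structural" },
  "By Metric": { unit: "provider", scope: "national", frame: "cross-sectional" },
  "Within Region": { unit: "provider", scope: "pre-defined-geography", frame: "cross-sectional" },
  "Within ICB": { unit: "provider", scope: "pre-defined-geography", frame: "cross-sectional" },
  "Between Regions": { unit: "geographic-aggregate", scope: "national", frame: "cross-sectional" },
  "Between ICBs": { unit: "geographic-aggregate", scope: "national", frame: "cross-sectional" },
};
```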
Option A: Group by Unit of Analysis
This is the cleanest structural distinction. The six provider-level views all answer variants of "how does provider X compare to provider Y?" The two system-level views ask "how does region/ICB A compare to region/ICB B?" - a fundamentally different question with a different unit of analysis.
Between Regions and Between ICBs could live in a visually separated group (e.g., a divider within the tab strip, or a second row of tabs labelled "System-level comparisons"). This would signal to users that these views answer a structurally different question.
Option B: Group by Comparison Scope
This grouping foregrounds the user's mental model: "Am I choosing what to compare, filtering by place, or looking at everything?" However, it creates an awkward overlap - Between Regions could logically sit in either "Geographic" or "National", depending on whether you think of it as "geographic scoping" or "full national comparison of aggregates".
Option C: Group by User Question
This grouping aligns most naturally with how users frame their questions. It has the advantage of placing Over Time as its own distinct category (it is the only longitudinal view, and it feels out of place next to the cross-sectional comparisons). The geographic group then clusters all place-based analysis together.
The "Within" views occupy an ambiguous middle ground - they are provider-level comparisons (like Multiple Providers or By Metric) but with a geographic filter applied. They could equally sit with the provider-level views as "provider comparisons, filtered by place" or with the system-level views as "geographic analysis". Option C resolves this by grouping all geography-related views together, which better matches the user's intent: "I want to understand how place affects performance".
Recommendation
Option A (Unit of Analysis) is the most analytically rigorous grouping and would be the easiest to implement in the UI - a visual separator or labelled section divider between the six provider-level tabs and the two system-level tabs.
Option C (User Question) is the most user-centred grouping and better reflects how the personas actually frame their analytical questions. It would require a more significant UI change - either nested tab groups or a two-level selection (first choose the question type, then the specific view).
A pragmatic middle ground might be to keep the current flat tab strip but introduce visual grouping cues - subtle dividers or section labels within the tab bar. One possible grouping is sketched below; the group labels are ours and untested:
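```typescript
// Illustrative only: the existing eight tabs in their current order, split
// into three divider-separated groups. Group labels are ours and untested.
const compareTabStrip = [
  { group: "Compare providers",
    tabs: ["Multiple Providers", "Over Time", "By Domain", "By Metric"] },
  { group: "Compare within an area",
    tabs: ["Within Region", "Within ICB"] },
  { group: "Compare areas against each other",
    tabs: ["Between Regions", "Between ICBs"] },
];
```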
This preserves the single-level navigation (no nested tabs to manage) while signalling to users that the eight views cluster into three meaningful groups. The dividers also make the tab strip more scannable - eight undifferentiated tabs is a lot for users to parse without grouping cues.
Drill-through pages
Two additional page types sit outside the main navigation but play important roles in the progressive disclosure structure.
Provider Detail - accessed by clicking a provider in the League Table or using the provider search. Shows a single provider's complete performance profile across all domains and metrics. Includes a provider selector for switching between organisations. The primary audience is trust executives, patients researching a specific hospital, and regulatory teams conducting provider-level review.
Timeseries Views - historical performance data in chart and table formats. Accessible via URL but not prominently surfaced in navigation, consistent with the progressive disclosure principle that detailed temporal analysis is a specialist need.
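Purely as a hypothetical sketch of the URL-addressability described above (these paths are invented, not the actual scheme):

```typescript
// Hypothetical route patterns for the drill-through pages. The actual URL
// scheme is not documented here; these paths are invented for illustration.
const drillThroughRoutes = {
  providerDetail: "/providers/:providerCode",        // reached from League Table or search
  timeseries: "/providers/:providerCode/timeseries", // URL-accessible, not in main navigation
};
```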
Personas and their primary tabs
| Persona | Role | Primary Tab(s) | Time Spent | Key Question |
|---|---|---|---|---|
| Sarah, 42 | Patient seeking care | Overview, League Table | 10–15 min | "Is my local hospital doing OK?" |
| James, 58 | Family member of inpatient | Overview, League Table | 5–10 min | "Is this hospital safe?" |
| Maya, 35 | Community health advocate | Compare, Summary | 30–60 min | "Are health gaps closing in my area?" |
| Angela, 52 | Chief Operating Officer | (Via intermediary) League Table | 5 min | "Which trusts need intervention?" |
| David, 48 | Regional Director | Summary, Compare | 60 min | "How does my region compare to peers?" |
| Priya, 31 | Strategy Analyst | Compare, Reference | 2–8 hours | "What explains this performance pattern?" |
| Robert, 55 | Trust CEO | Provider Detail, League Table | 15–30 min | "How do I present our performance to the board?" |
What user testing revealed
Nine user research sessions were conducted throughout February 2026, covering a deliberately wide range of stakeholders: ICB analysts, NOF policy leads, members of the public, think tank researchers, and a senior clinical leader.
| Session type | Participant role(s) | Category |
|---|---|---|
| ICB analyst interview | Two ICB performance analysts | Internal NHS |
| NOF team demo | NOF analyst and NOF team lead | Internal NHS |
| NOF policy lead demo | NOF policy lead | Internal NHS |
| Public user testing | Member of the public (former systems tester) | Public |
| Think tank session | Policy researcher (King's Fund) | Think tank |
| Public user testing | Member of the public (retail manager, no NHS background) | Public |
| NOF team playback | NOF analyst and NOF team lead | Internal NHS |
| Think tank session | Digital/design manager and head of public affairs (Nuffield Trust) | Think tank |
| Clinical leader testing | Deputy Chief Nurse (acute trust) | Internal NHS |
Overview (Map)
The map view divided opinion sharply along audience lines.
Public users found the map intuitive and engaging. One zoomed in, located her provider by geography, and used the colour coding to form an immediate national impression: "everywhere's pretty average... don't go to the North-East, generally". Another, on an iPad, immediately tried to pinch-zoom to her local area. The Deputy Chief Nurse said he "quite liked it straight away" as a visualisation.
Analysts and think tanks were more sceptical. The ICB analysts called the map "totally useless" for their work, jumping straight to the Summary tab. The King's Fund used it only for geographic sample selection ("I need a case study site in the southwest"). The Nuffield Trust's digital lead said "I always feel a bit sceptical about maps like this" - the registered trust address doesn't map to how patients experience geography. Their public affairs lead suggested the map would become much more useful if users could filter it by specific domain or metric, essentially turning it into a thematic map rather than just a segment display.
The map serves the citizen wayfinding purpose well but should be treated as an entry point only. Analyst users bypass it.
Summary
The Summary tab received the most consistently positive feedback of any tab, particularly for the segment changes section. One public user found the segment change cards "very similar to a snapshot you get for a school - most people will be able to look at and go, right, cool". Another navigated it confidently despite having no NHS background. The King's Fund researcher said the segment changes section was "really useful" and immediately wanted to click through from the count ("I wanted to click on it - I want to see the 8 that have declined").
However, several significant concerns emerged. The King's Fund researcher described the national average as "a funny number" - it's an average of averages, and because the segmentation is quartile-based, it will always converge around 2.5 and never meaningfully improve. The NOF policy lead also expressed reservations about the average metric score display, suggesting we should either remove or reframe the headline average card. The King's Fund researcher also noted that the coloured rectangles in the segment distribution chart implied their width was proportional to the segment count, when it wasn't.
Both the NOF team and the NOF policy lead noted that domain-level averages at the national aggregate are not very informative - they become much more relevant at a trust level. One of the think tanks cautioned that quarter-on-quarter changes are "not necessarily a massive source of information" due to intrinsic volatility and seasonal effects; they suggested longer-term trend analysis would be more meaningful once more quarters of data accumulate. The NOF policy lead stressed that segment changes must be based on statistical significance, not merely on a segment number changing.
The Nuffield Trust's digital lead said the Summary page has "too much information to pass well" - his instinct would be to use words to describe what's important. This tension between quantitative display and narrative explanation recurred across sessions.
League Table
The League Table was the most instinctively understood tab across all user groups. Public users grasped ranking immediately - one said "this is your ranking" and navigated confidently; the other searched for her trust and quickly understood its position. The King's Fund used it as their primary tool for case study site selection - their researcher described literally doing Ctrl+F to find trusts and noting their segment in a spreadsheet.
The financial override information was consistently valued once users discovered it. One public user clicked through and found it "really interesting" that a provider's score would have placed them in Segment 1 without the deficit. However, no users discovered this unprompted - it required clicking a specific element.
Compare
The Compare tab generated the most varied and most detailed feedback, consistent with it being the most complex tab.
Multiple Providers was the single most requested feature across sessions. The ICB analysts wanted to compare two specific providers from different regions. One public user wanted to add more providers to her comparison. The King's Fund described a "comparing cameras" use case of selecting specific trusts. The Nuffield Trust's public affairs lead raised the Shelford Group analogy - trusts benchmark against self-selected peer groups, not geographic neighbours. The Deputy Chief Nurse confirmed this: "we probably compare ourselves to more Shelford organisations". The current implementation was praised when participants found it, but discoverability was poor - public users didn't notice the Compare sub-tabs at first.
By Metric - the NOF policy lead called this "probably one of the more important bits of functionality". One of the think tanks we spoke to agreed it was the most useful compare view for their work ("more common to want one metric across many trusts than one trust, many metrics"). The King's Fund wanted to be able to export the metric view directly to a spreadsheet for quintile analysis.
By Domain - the NOF team lead found the hover-over tooltips showing metric detail within each domain "really useful". However, the term "domain" itself was flagged as confusing - one public user asked "what does domain mean?" and the Nuffield Trust confirmed "the word domain in our work is not good - people don't know what that means".
Within Region / Within ICB - the NOF policy lead identified these as matching what regional support groups actually use: heat map comparisons of providers within their geography. Strong validation of these sub-views for the systems audience.
Between Regions / Between ICBs - the least tested views across sessions. The NOF team said regional-level distribution comparisons "might be nice to see" at the overall level but at individual metric level "it gets really complex". This supports the current positioning as the most specialist, rightmost tabs.
Over Time - the NOF team suggested adding a time series chart to individual metric views. The NOF team lead emphasised the value of showing raw value change over time (not just score change) because "the score could stay the same but your value has gone up a lot… it's masked by what everyone else is doing". A significant insight for the design of the Over Time view.
Tab count and cognitive load. The NOF team lead directly raised the concern: "there's quite a few tabs on there… do you have like, this tab is for these people and it solves this problem?" The NOF analyst noted that the progressive journey works - "you can start to walk through it and think, I've got everything I need now, or I haven't, I'm going to keep going" - but acknowledged it's "becoming more complicated". The designer reflected in the team playback that the tabs "started as four and turned into seven" and that "we maybe need to break that up into a separate page". This directly validates the tab ordering critique and supports the case for visual grouping cues.
Provider Detail
The "about this metric" expandable explanations received consistent praise across all sessions. One public user said they helped her understand "what the score is, what the percentage is, and why the score is one". The King's Fund researcher used the highest/lowest rated metrics section and found it "quite useful - that is essentially exactly what we would want to know".
The Deputy Chief Nurse had the most revealing response. He went straight to the patient safety domain but found the domain score and colour coding unclear: "I don't know what the green actually means… this isn't probably helping me". He understood his specific metrics (MRSA, E. coli) in absolute terms but couldn't map them to the NOF's relative scoring system. His reaction - "three is too many, so I'm concerned, but the dashboard is telling me I'm green" - encapsulates a fundamental tension between absolute standards (zero tolerance for certain infections) and relative quartile-based scoring.
Cross-cutting themes
1. Colour coding is the single most effective design decision
Every single user across all nine sessions praised the colour coding. One public user called it "easy to see and understand". Another said "the colour coordination helps a lot - I'm a very visual person". The Deputy Chief Nurse observed: "I probably don't know what the score means, but I know it's probably not very good just because it's red". This is the design feature doing the heaviest lifting for the public audience and should be protected and made consistent across all views.
2. The segmentation scheme is poorly understood
No public user correctly understood what "Segment 3" means on first encounter. One interpreted it as "three things that need improvement". The other was confused: "not knowing what segment means can be a bit difficult to understand". The Nuffield Trust's digital lead suggested labelling with categories people already understand (ABC, gold/silver/bronze). Multiple participants drew Ofsted analogies unprompted, suggesting that a categorisation scheme with descriptive labels (Outstanding, Good, Requires Improvement, Inadequate) would map more naturally to public understanding than numbered segments.
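For illustration only, a mapping from numbered segments to the Ofsted-style labels participants reached for - the wording is untested, and the direction (1 = best) follows the sessions described above:

```typescript
// Illustrative only: descriptive labels for the four numbered segments,
// borrowing Ofsted vocabulary as suggested in testing. Wording untested.
const segmentLabels: Record<number, string> = {
  1: "Outstanding",
  2: "Good",
  3: "Requires Improvement",
  4: "Inadequate",
};
```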
3. The financial override is confusing but highly valued once understood
Every user who encountered the financial override struggled initially but found it one of the most interesting pieces of information once explained. The King's Fund raised a deeper point: "there are value judgments placed on top of this, because the idea of the financial override is that finances matter more" - which from a patient perspective "probably doesn't matter" because "everyone knows the financial situation's bad". This suggests the override should be displayed prominently but framed carefully as context rather than as the defining performance feature.
4. "Domain" is jargon - rename it
Flagged independently by a public user ("what does domain mean in the trusts?"), the Nuffield Trust ("we find using the word domain in our work is not good"), and implicitly by other users who struggled with the domain filter. "Performance areas" or "performance themes" would be more accessible alternatives.
5. Score direction is counter-intuitive
The Nuffield Trust's digital lead noted that "lowest score is best" runs against cultural norms: "higher equals better - I feel there's more sort of culturally common". This is embedded in the NOF methodology and not within the design team's power to change, but it underscores the need for clear labelling and visual cues (colour coding) to compensate for the unintuitive scale.
6. Data export is the most requested technical feature
The King's Fund analysts wanted to extract data directly from views. The ICB analysts had built bespoke Excel packs to circumvent dashboard limitations. The NOF team playback identified export as a key need. Multiple users described wanting to take a snapshot and "plunk it straight into a PowerPoint". This should be a high-priority development item across all tabs.
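A minimal client-side sketch of what export could look like for any tabular view (the function and field names are ours):

```typescript
// Sketch: serialise any tabular view to CSV and trigger a browser download.
// Names are ours; the dashboard's actual export design is still open.
function exportToCsv(filename: string, rows: Array<Record<string, string | number>>): void {
  if (rows.length === 0) return;
  const headers = Object.keys(rows[0]);
  // Quote every cell and escape embedded double quotes.
  const escape = (v: string | number) => `"${String(v).replace(/"/g, '""')}"`;
  const lines = [
    headers.map(escape).join(","),
    ...rows.map((row) => headers.map((h) => escape(row[h])).join(",")),
  ];
  const blob = new Blob([lines.join("\n")], { type: "text/csv" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
  URL.revokeObjectURL(link.href);
}
```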
7. Patient choice is the stated policy aim, but the data doesn't support it
This theme emerged independently from the NOF policy lead (who remains "unconvinced that the public want this information"), the King's Fund (who noted that "the thing that sways the decision for most people" is waiting list length for a specific clinic), the Nuffield Trust (who pointed out patient choice "only really works for elective care"), and multiple public users who wanted department-level and specialty-level data that NOF doesn't contain. There is a significant gap between the political framing of NOF as a "patient choice tool" and what it actually enables.
This connects directly to the sunlight hypothesis analysis - the market pressure theory (patients choose high-performing providers) requires granularity of data that NOF doesn't provide. The mechanism through which these dashboards actually produce effects is more likely reputational than informational, as the policy instruments post explored.
8. NHS insiders experience data lag that undermines operational utility
The ICB analysts noted the data is already months old by the time it reaches NOF. The Deputy Chief Nurse's team reports weekly while NOF updates quarterly. One of the think tanks we spoke to observed that most "drivers of performance are quite long-term in nature" and quarter-on-quarter changes have limited analytical value. This reinforces the design decision to position the dashboard as a transparency and accountability tool rather than an operational management tool - the people who need real-time data have their own internal sources.
9. Think tanks use NOF as a temperature check, not an analytical tool
Both the King's Fund and Nuffield Trust described using NOF primarily for quick orientation - selecting case study sites, checking which segment a trust is in, scanning for obvious red flags. Their serious analytical work uses source data downloaded from elsewhere. This has implications for how much analytical depth we build into the public dashboard: the professional analytical audience may not need it here because they have other tools, while the public audience may not be able to use it.
10. The intermediary layer is real and should be explicitly designed for
The Deputy Chief Nurse's session crystallised this finding. He doesn't use the dashboards directly - his analyst team provides the data, and he provides the narrative. His use of NOF is entirely mediated through that relationship. Similarly, the King's Fund and Nuffield Trust act as intermediaries between the raw data and public understanding. The dashboard should therefore support the intermediary workflow: easy export, shareable URLs, screen-grab-friendly views, and clear enough visualisations that a non-analyst can brief an executive from a screenshot.
This intermediary dependence was anticipated in the publics theory - the dashboard doesn't inform a pre-existing public directly, but works through chains of interpretation where each intermediary layer re-frames the data for their audience.
Open questions
The current single interface serves both public transparency and professional accountability. The project documentation proposes a dual-dashboard model (public + internal), but the current implementation takes a unified approach; the progressive disclosure from left (citizen) to right (analyst) partially addresses the tension, though the sunlight hypothesis analysis suggests the two audiences may ultimately require architectural separation.
Bevan and Hood's (2006) work and the NHS Confederation and NHS Providers' (2025) four tests highlight risks of reductionism, gaming, and collaboration penalties inherent in league tables. The Compare tab exists partly to mitigate these risks by providing richer contextual analysis beyond simple ranking. The persona research confirms that executives use dashboards through intermediaries, meaning the dashboard design must serve both the intermediary (who needs analytical depth) and the executive they're briefing (who needs clear, simple narratives).
Mental health and community trusts have structurally different data infrastructure from acute trusts, making cross-type comparisons problematic - the provider type selector partially addresses this by scoping views to comparable organisations. There is also an ongoing tension between Statistical Process Control icons and traffic-light RAG indicators, with user testing not yet completed on comprehension.
References
- Bevan, G. & Hood, C. (2006). What's measured is what matters: Targets and gaming in the English public health care system. Public Administration, 84(3), 517–538.
- NHS Confederation & NHS Providers. (2025). Flawed league tables risk confusion and harm. Policy Brief.
- Tory, M., Bartram, L., Fiore-Gartland, B. & Crisan, A. (2021). Finding their data voice: Practices and challenges of dashboard users. IEEE Computer Graphics and Applications, 41(3), 22–30.