Setting useful metrics is difficult. This poster builds on the Principles for Metrics report prepared by the Task and Finish Group I chaired for the Knowledge for Healthcare Programme of Health Education England. It expands on what is meant by the four headline principles of Meaningful, Actionable, Reproducible and Comparable. Hitting the MARC is a terrible pun aimed at librarian types, but the principles apply well beyond library settings.
Do your metrics hit the MARC?
Alan Fricker
Head of NHS Partnership & Liaison
Aiming for better metrics?
We want to have interesting and convincing discussions with our stakeholders.
A carefully created metric can be at the heart of these.
The Principles for Metrics report examines practice around metrics from a
wide range of settings including NHS and academic libraries. A Quality Metrics
Template accompanied the report to help with the creation, recording and
sharing of metrics.
Examining the four principles advanced by the report helps us understand the
how and why of metrics. Do your metrics hit the MARC?
References
Metrics Task and Finish Group (June 2016) Principles for Metrics. Available at: http://tinyurl.com/PrinciplesForMetrics
Metrics Task and Finish Group (June 2016) Quality Metrics Template. Available at: http://tinyurl.com/MetricsTemplate
Meaning – who cares?
The metric needs to be something people (not just you) care about.
Meaningful metrics often combine more than one facet – e.g. usage by a particular staff group
They should be aligned to organisational objectives and readily understood by stakeholders.
Framing metrics as a target should be approached with caution. Is your target meaningful?
Talk about how you are performing and then discuss whether this is more or less than is needed.
Metrics need regular review against changing priorities to ensure they remain meaningful.
Action – can you make a difference?
For a metric to be useful it needs to be in an area you can influence.
If you cannot improve it then you don’t want to be held accountable for it.
A good metric should drive changes to behaviour and service development.
A metric is only an indicator; it should prompt further research to understand what it might be telling you.
No numbers without stories – no stories without numbers
Reproduce – can you measure it again?
A metric is a piece of research that should be repeatable.
This requires up front and ongoing transparency about methods.
You should see consistent results if you, or others, repeat the research in a similar period.
Data collection for a metric must not be excessively burdensome. If it takes two solid months to
crunch the data, then it probably is not reproducible.
You should use the best data available.
Compare – who with?
Metrics should allow you to see change over time.
Internal comparisons are most reliable as you can control more variables.
Be cautious and realistic if attempting to benchmark externally.
Even with reproducible metrics it is difficult to establish consistent data and avoid confounding.
For example – what is the impact of being in a Trust three times bigger? Or with three sites? Or
thirty? What kind of staffing model is in place? How is the service funded and delivered?