As promised, here are my slides from #HLG216 in Scarborough.
A few thoughts follow to build on the content in the slides, both for those who weren't around at the time and to make them a bit more helpful generally.
As my talk progressed I became very aware of how closely the metrics work links to my day-to-day role. We operate in a functional structure and I am part of a team responsible for Partnership & Liaison. For me this translates into wanting to have lots of meaningful conversations with people; metrics are a means to that end. I didn't mention it during my talk, but this year I produced Annual Action Plan style reports (a la York) for the NHS Trusts I work with. These proved much more engaging than my old annual report. They featured a small number of carefully picked metrics with explanations of what I thought they might mean.
The presentation builds through the various models and methods we considered as we researched the use of metrics. It was good to tap into the experience in the room about why our measures can be unconvincing, hard to share and obscure. I was really pleased to find by chance (picked up from the weeding trolley at work) a 1990 text by some of the greats, "Quality assurance in libraries: the healthcare sector", which strongly affirmed the areas people were focusing on and some of the approaches under way. Not much changes in the end.
We ended up with four overarching principles for good metrics:
Meaningful – the core of this is that the metric must be something people (other than you) care about. It should be aligned to organisational objectives and be readily understood by stakeholders. You need to talk to them about it! An extension of this is to remember that framing a metric as a target should be approached with caution: there is the potential to set targets for their own sake that will lack meaning for stakeholders. Tell them how you are performing and then discuss whether this is more or less than they need. Metrics need to be kept under regular review so that they reflect changing priorities and remain meaningful.
Actionable – for a metric to be useful it needs to be in an area we can influence. If we cannot make it move one way or the other then we don't want to be held accountable for it. A good metric will drive changes in behaviour and service development. We also need to remember that a metric is only an indicator, and we need to carry out appropriate research to back up what we suspect the figures are telling us.
No numbers without stories – no stories without numbers
Reproducible – this principle contains quite a lot. It starts from the position that tracking a metric is a piece of research. Accordingly, we need to be transparent about our methods, and we need to be so before we collect the data. We should use the best data available to us. Replication implies that two people examining the same thing at the same time should get consistent results. Finally, we want our metric data collection not to be excessively burdensome: if it takes two solid months to crunch the data then it probably isn't reproducible (or you would really have to get an awful lot from it).
Comparable – finally, we want metrics that allow us to see change over time. Often we will need to recognise that this comparison can only be internal. We may be able to benchmark externally, but we should be realistic: even if we are transparent it will remain difficult to establish consistent data, and there are frequently influencing factors that we may or may not be aware of. For example, what is the impact of being in a Trust three times bigger? Or with three sites? Or thirty? What kind of staffing model is in place? How is the service funded and delivered?
All this is a fair bit to keep in mind, so the Metrics task and finish group prepared a Quality Metrics Template. This is designed to support people in creating, documenting and sharing their metrics. The slides include a worked example, and others were distributed in the room for the final group work section, where people had a chance to start drafting some out or just discuss the principles.
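For those who like to see ideas made concrete: below is a purely illustrative sketch of how a template entry shaped by the four principles might be captured as a simple record. To be clear, the field names here are my own guesses for illustration, not the actual Quality Metrics Template from the slides.

```python
from dataclasses import dataclass

# Hypothetical sketch only: field names are illustrative guesses,
# not the fields of the actual Quality Metrics Template.
@dataclass
class MetricRecord:
    name: str
    audience: str         # Meaningful: who (other than us) cares about this?
    objective: str        # Meaningful: the organisational objective it aligns to
    actions: str          # Actionable: what we would change if the figure moved
    method: str           # Reproducible: how data is collected, documented up front
    collection_cost: str  # Reproducible: effort required, so it stays proportionate
    baseline: str         # Comparable: what we compare against over time

    def checklist(self) -> list:
        """Return the principles whose prompts have been left blank."""
        missing = []
        for label, filled in [
            ("Meaningful", self.audience and self.objective),
            ("Actionable", self.actions),
            ("Reproducible", self.method and self.collection_cost),
            ("Comparable", self.baseline),
        ]:
            if not filled:
                missing.append(label)
        return missing
```

The point of the `checklist` method is just to show how the principles could act as prompts when drafting a metric: any blank field flags the principle that still needs thinking through.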
In discussion, participants saw the potential to use completed templates as the basis of a refinement process seeking best-of-breed metrics around particular questions. Hopefully a tool will be available to collect them in the first instance, and then an appropriate group might be assembled. There was some concern that metrics might be imposed, but this strikes me as unlikely: the diversity of services and the needs of local stakeholders mean that one size will definitely not fit all. There was also discussion of the NHS national statistics return and the importance of considering these in the light of the principles.
I hope people will find the principles and template useful. It was great to talk to such an enthusiastic audience. More conversations please!