Stats and stories for impact

I really enjoyed participating in the latest UHMLG autumn forum (not least as the London Mathematical Society is a fun little venue with a surprise garden).

[Image: the LMS garden]

You can find all the slides from the day here.

The day started with an inspiring presentation by Kay Grieves from the University of Sunderland. What shone out was the importance of having a cohesive and strategic approach to engagement. Many of the things she presented nicely foreshadowed my own presentation (on making annual reports more useful), and the whole programme hung together well.

I liked the process Kay presented of moving from articulating and contextualising, through engagement, to sharing the narratives and insights gleaned. It is easy in engagement work to get pulled in all directions, and the careful focus on key strategic objectives and issues for the service is a lesson most of us could do with applying. The polish of the resulting campaigns was striking, and you could well imagine them getting people interested.

The talk included a worked example around journals, showing careful capture of qualitative and quantitative data so that there are stories alongside the numbers that can otherwise be all we have to go on. The outcome was a positive campaign combining skills development, academics and the whole library to help people understand the role of journals in learning at university level.

My own talk was tweaked from one I gave at EAHIL this year. It pulled together the work I have been doing on using a visual action plan format with the work on principles for good metrics prepared for Knowledge for Healthcare. Placing the metrics work in this context seems to have been an effective way of framing it. I received very positive feedback from the talk, with a number of people planning to take action to improve their own annual reports.


On reading – Libraries and Key Performance Indicators (2017) Appleton

One of the fun things I did last year was contribute a case study based on my work with the KfH Metrics Task and Finish Group to the book “Libraries and key performance indicators: a framework for practitioners” by Leo Appleton.

[Image: cover of the book]

I was really pleased to have the opportunity to share our work in this way and to get my name in print!

Prompted by reading a review of the book (in the December issue of the HLG Newsletter) and by an upcoming workshop I am preparing for health librarians in the North, I thought I would have a read myself.

It is a compact book at 150 or so pages including references. I think brevity has a lot to recommend it in a practical text, and this could be dipped into or read completely fairly quickly. It covers a lot of ground in a short space, including a useful review of past efforts at performance management in library services and the influence of current trends around user experience approaches. There are a number of examples from different library sectors, which helps widen the perspective.

There were areas where I would differ – for example, around the amount of confidence that can be placed in the various statistical return series. Changes are coming to the long-standing NHS statistics return, reflecting careful consideration of how useful a number of these measures are in practice – particularly given likely variation in how they are collected.

The chapters on methods provide good overviews, with references to follow up. The librarian tendency to count anything that moves has been exacerbated by the opportunities digital resources offer to do just that, and the book is good on tempering this enthusiasm. I would perhaps have liked more on how to manage a regular flow of qualitative data in such a way as to support KPIs – perhaps as one contribution to a bundle of performance indicators sitting under a single KPI?

The terminology is a bit of a muddle, and I found myself confused at times about what was being referred to. A definition of a KPI is provided but merits clearer flagging. While there were plenty of excellent warnings about potential pitfalls and dead ends, I wonder if more could have been done to highlight the positive ways forward. The various case studies were useful in giving some idea of how people have been able to advance this work.

It was a relief to read my case study in context and I think it makes a useful contribution to the book. The principles advanced in the NHS Metrics work are widely applicable and certainly supported by the wider research presented in the book.

Having declared my bias up front, I think this is a useful book and I hope people will read it!

Making Metrics that hit the MARC

I always enjoy preparing a poster for the #NHSHE2016 Conference organised by London Health Libraries (now with added KSS). There are always good prizes, and the chance to create something to make the office look less dull while sharing a piece of work. We were given the theme of Knowledge for Healthcare, which was straightforward enough as it encompasses pretty much anything you care to look at professionally these days.

My main direct involvement with KfH has been around metrics. The presentation I gave at HLG2016 in Scarborough brought home to me the need to make the materials we had produced in the Metrics Task and Finish Group more accessible. It was also clear that people were interested if things were put to them clearly, so a poster on metrics was the obvious outcome. I went with trying to hammer home the message about the four principles and what they mean in practice. Using MARC as an acrostic had the bonus of chucking in a feeble nerdy library pun.

The poster was well received. While it came only 6th out of 9 in the popular vote, this was a step forward on last year's metrics poster, which was a rare non-prizewinner. A few people told me in person how clear and helpful they had found it. I was really pleased to see a tweet afterwards sharing the poster with a group of other libraries after it had been raised at a network meeting. I am hoping that people will share with me examples of how they have used the metrics work.

Next steps are to create a version of this post for the KfH blog and move on with the plans to set up a national metrics collection tool.

#HLG2016 Metrics made practical

As promised, here are my slides from #HLG2016 in Scarborough.

A few thoughts follow to build on the content in the slides, both for those who weren't around at the time and to make them a bit more helpful generally.

As my talk progressed I became very aware of how closely the metrics work linked to my day-to-day work. We operate in a functional structure, and I am part of a team responsible for Partnership & Liaison. For me this translates into wanting to have lots of meaningful conversations with people. Metrics are a means to that end. I didn't mention it during my talk, but this year I produced Annual Action Plan style reports (a la York) for the NHS Trusts I work with. These proved much more engaging than my old annual report. They featured a small number of carefully picked metrics, with explanations of what I thought they might mean.

The presentation builds through the various models and methods we considered as we researched the use of metrics. It was good to tap into the experience in the room of why our measures can be unconvincing, hard to share and obscure. I was really pleased to find by chance (picked up from the weeding trolley at work) a 1990 text by some of the greats, “Quality assurance in libraries: the healthcare sector”, which strongly affirmed the areas people were focusing on and some of the approaches under way. Not much changes in the end.

We ended up with four overarching principles for good metrics:

Meaningful – the core of this is that the metric must be something people (other than you) care about. It should be aligned to organisational objectives and be readily understood by stakeholders. You need to talk to them about it! An extension of this is to remember that framing a metric as a target should be approached with caution. There is the potential to set targets for the sake of it that will lack meaning for stakeholders. Tell them how you are performing and then discuss whether this is more or less than they need. Metrics need to be kept under regular review to reflect changing priorities and remain meaningful.

Actionable – for a metric to be useful for us, it needs to be in an area we can influence. If we cannot make it move one way or the other, then we don't want to be held accountable for it. A good metric will drive changes of behaviour and service development. We also need to remember that the metric is only an indicator, and we need to carry out appropriate research to back up what we suspect the figures are telling us.

No numbers without stories – no stories without numbers

Reproducible – this principle contains quite a lot. It starts from the position that tracking a metric is a piece of research. Accordingly we need to be transparent about our methods, and we need to be so before we collect the data. We should use the best data available to us. Replication implies that we should get consistent results if two people examine the same thing at the same time. Finally, we also want our metric data collection not to be excessively burdensome. If it takes two solid months to crunch the data then it probably isn't reproducible (or you would really have to get an awful lot from it).

Comparable – finally, we want metrics that allow us to see change over time. Often we will need to recognise that this can only be done internally. We may be able to benchmark externally, but we should be realistic. Even if we are transparent it will remain difficult to establish consistent data, and there are frequently influencing factors that we may or may not be aware of. For example: what is the impact of being in a Trust three times bigger? Or with three sites? Or thirty? What kind of staffing model is in place? How is the service funded and delivered? (A rough sketch of the size problem follows below.)
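To make the size point concrete, here is a minimal sketch in Python using entirely made-up figures. The Trust names and numbers are hypothetical; the point is only that normalising a raw count (say, per 1,000 staff) changes the picture when comparing Trusts of very different sizes:

```python
# Hypothetical figures only: why raw counts mislead when benchmarking
# across Trusts of different sizes, and how normalising per 1,000 staff
# makes the comparison somewhat fairer.

trusts = {
    # name: (article requests per year, staff headcount)
    "Small Trust": (1_200, 2_000),
    "Large Trust": (2_700, 6_000),  # three times bigger
}

for name, (requests, headcount) in trusts.items():
    rate = requests / headcount * 1_000  # requests per 1,000 staff
    print(f"{name}: {requests} raw requests, {rate:.0f} per 1,000 staff")

# Raw counts suggest the Large Trust does more; per capita, the Small
# Trust is actually using the service more heavily (600 vs 450).
```

Even a simple normalisation like this only addresses one of the influencing factors above; sites, staffing model and funding can still make the comparison misleading.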

All this is a fair bit to keep in mind, so the Metrics Task and Finish Group prepared a Quality Metrics Template. This is designed to support people in creating, documenting and sharing their metrics. The slides include a worked example, and others were distributed in the room for the final group work section, where people had a chance to start drafting some out or just discuss the principles. A sketch of how such a template might be captured as a structured record follows below.
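By way of illustration only – the actual template is in the slides and I am not reproducing its fields here – one might imagine recording a metric against the four principles like this. The field names and the example metric are hypothetical, not the Knowledge for Healthcare template itself:

```python
from dataclasses import dataclass

@dataclass
class QualityMetric:
    """One way a metrics template might be captured as a record.

    The fields are illustrative guesses, mapping each of the four
    MARC principles onto something you would document up front.
    """
    name: str
    strategic_objective: str   # Meaningful: what stakeholders care about
    actions_it_can_drive: str  # Actionable: behaviour/service changes
    collection_method: str     # Reproducible: defined before collecting
    collection_effort: str     # Reproducible: the burden of gathering it
    comparison_baseline: str   # Comparable: internal trend or benchmark

# Hypothetical example entry, sketched for illustration only
literature_searches = QualityMetric(
    name="Literature searches per clinical team",
    strategic_objective="Evidence use in clinical decision-making",
    actions_it_can_drive="Target outreach at low-requesting teams",
    collection_method="Count from the request log, monthly",
    collection_effort="About 30 minutes per month",
    comparison_baseline="Same month last year, internally",
)
```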

In discussion, the potential was seen to use completed templates as the basis of a process of refinement, seeking best-of-breed metrics around particular questions. Hopefully a tool will be available to collect them in the first instance, and then an appropriate group might be assembled. There was some concern that metrics might be imposed, but this strikes me as unlikely. The diversity of services and the needs of local stakeholders mean that one size will definitely not fit all. There was also discussion of the NHS national statistics return and the importance of considering these in the light of the principles.

I hope people will find the principles and template useful. It was great to talk to such an enthusiastic audience. More conversations please!