HILJ CPD reading Volume 35 No 3 – Developing a generic tool to routinely measure the impact of health libraries

Welcome to the second experimental online reading group aimed at encouraging discussion of interesting articles in HILJ. The first attempt took place around Volume 35 No 2 on CILIP SocialLink (the link may require a CILIP login and may not take you to the right place). Unfortunately we found SocialLink did not really offer what was needed, so future editions will rove across anyone's blog that cares to host.

I raised the possibility of having a regular discussion on articles from HILJ at HLG2018, having muttered about it for some time, and as others expressed an interest (in particular Lisa Burscheidt, Morag Clarkson, Catherine Mclaren and Tom Roper), here we are.

As an HLG member you should have access to HILJ via this link: https://archive.cilip.org.uk/health-libraries-group/health-information-libraries-journal/access-health-information-libraries-journal-hilj though many have it in a Wiley bundle and that may be easier! The article this time is open access anyway, so it should be straightforward.

The idea is that an article will be selected from each issue to be discussed. The group have picked an article but there might be a vote in future or we may carry on picking a favourite by some other means (perhaps the host blogger gets to choose). The intention is to select articles with practical applications. We will offer some questions as prompts but the discussion can go where interest takes it.

The article selected this time is:

Developing a generic tool to routinely measure the impact of health libraries

Stephen Ayre, Alison Brettle, Dominic Gilroy, Douglas Knock, Rebecca Mitchelmore, Sophie Pattison, Susan Smith, Jenny Turner

Pages: 227-245 | First Published: 18 July 2018


Background: Health libraries contribute to many activities of a health care organisation. Impact assessment needs to capture that range of contributions.

Objectives: To develop and pilot a generic impact questionnaire that: (1) could be used routinely across all English NHS libraries; (2) built on previous impact surveys; and (3) was reliable and robust.

Methods: This collaborative project involved: (1) a literature search; (2) analysis of current best practice and a baseline survey of use of current tools and requirements; (3) drafting and piloting the questionnaire; and (4) analysis of the results, revision and plans for roll-out.

Results: The framework selected was the International Standard Methods and Procedures for Assessing the Impact of Libraries (ISO 16439). The baseline survey (n = 136 library managers) showed that existing tools were not used, and impact assessment was variable. The generic questionnaire developed used the Critical Incident Technique. Analysis of the findings (n = 214 health staff and students), plus comparisons with previous impact studies, indicated that the questionnaire should capture the impact for all types of health libraries.

Conclusions: The collaborative project successfully piloted a generic impact questionnaire that, subject to further validation, should apply to many types of health library and information services.

I picked this article as this has been a hot topic for some time now.  I expect many of us will have experience and views on the generic impact questionnaire so there should be useful discussion.  I have not read the article before selecting it!

Starter Questions –
What? What do you think of this article / the generic impact questionnaire / etc?
So what? Does this change your view of the tool?  What changes might we want to see with the tool?
Now what? Are you going to do anything with it?

The next edition of the HILJ CPD Reading experiment (name suggestions welcome! #HILJClub perhaps?) will appear when volume 35 no 4 appears and be hosted by Lisa Burscheidt over at That Black Book.

I look forward to the discussion! The comments box is further down this page than I realised, so do scroll down to reach it!

5 thoughts on "HILJ CPD reading Volume 35 No 3 – Developing a generic tool to routinely measure the impact of health libraries"

  1. I might as well get the ball rolling – people are hopefully busy reading the article and preparing their thoughts!

    I was glad to read this article on the development of the generic impact questionnaire. It was reassuring to see the thoroughness with which the authors had gone about their work.

    I found the article format slightly confusing – perhaps as this is something of a tale of work done with some testing along the way rather than being structured around testing a research hypothesis. There were things in the Background that likely belonged in the discussion for example.

    I would have liked to have seen more detail of the analysis of LQAF fully compliant returns under 1.3C. This representation of what is actually happening feels like an important part of the picture.

    I did get tangled up in the various survey tools, survey pilots and generic surveys (I think one is mislabeled in the article “sub head pg230?”).

    I think the pilot may have over-represented certain types of activity. For me this flagged a general weakness: the numbers feel as though they are being held up as rather accurate and capable of strong scrutiny, when there are a lot of variables that cannot be controlled for. An example is "less than half noted that the incident saved them time" – checking the figures, this was 45% of respondents. In most contexts that would sound pretty good to me, but it is compared unfavourably with other, higher percentages and research elsewhere.

    I am concerned that the ability to tick multiple boxes is likely to lead to overstatement of impacts. When this is combined with the difficulty, recognised by the authors, of getting people to focus on a single incident, I would expect exaggerated scores.

    So what? I liked the emphasis on moving from measuring activity to action. I am glad that the tool is available and I have used it. I am slightly concerned about the validity of the results. We used the tool quite widely across South London as part of a larger survey (I am aware this means we were not using it in a pure fashion) and a quick look at the results (ie I haven’t done the work to check) gave me the impression that results were very similar from one organisation to the next despite different service offers, settings and scales.

    Now what? We have embedded the tool in some of our channels and will continue to consider how to use it. I consider that qualitative approaches may have more to offer in terms of creating a compelling picture, but I am not above using a nifty headline figure. Speaking of figures – I liked Figure 1 (the logic model) a lot!

  2. What? As the lead on impact for my library service, I found this article gave some valuable insight into how the tool was created and especially what its limitations are. I thought the baseline survey really highlighted that there was a need for standardisation of how health libraries measure impact.

    I was confused about the “fully compliant submissions” from LQAF too and what role they played in the development of the tool.

    One central limitation that is pointed out in the article is that the tool isn’t really designed to measure potential negative impacts of information. No one likes to think that we can potentially have a negative impact, but it’s something we should think about. The tool also has the limitation of all surveys in that certain sections of it will be subjective, and that there is a difference between those who respond and those who don’t (those who respond are more likely the ones who feel that the library has been beneficial to them in some way, or those who have a “use it or lose it” mentality, ie think that filling out the survey and saying something nice will help make sure that they can keep their library service, etc.)

    So what? The article has given me some useful context for our use of the tool. I think a big issue is how to define the "critical incident" when it's not a training session or an evidence search, so it would be interesting to see how people use it in practice. We usually change the first question slightly to make it clear which service we're surveying for. And we send the survey out only once to people who requested a lot of articles all to do with the same topic, because asking them to quantify the impact of every single article they get from us, especially when it's a long list, is not practical. For current awareness services, it's even more difficult if not impossible to define a critical incident; I find the only thing you can reasonably do is to refer to eTOCs or certain bulletins they're subscribed to, and ask them to quantify the impact of those as a whole. The validity of the results that you get from that is likely shaky.

    I’m not sure I agree that the ability to tick multiple boxes is going to overstate impacts. I think libraries and the services they offer are complex organisms and impacts might be felt in more than one area. The alternative would be along the lines of asking where people think the biggest impact/biggest difference was felt, and with the options being what they are, that would be comparing apples to oranges.

    Now what? We'll continue to use the tool, but I'm more aware of its limitations. I also think that it's important to use it in conjunction with the other impact tools. I think of it as a way to get people to tick some boxes and leave their details for case studies/interviews, which are an opportunity to get much more in-depth impact data.

    • Hi Lisa,

      Thanks for your points.

      I think the degree of variation in how we are both using this “standard generic” tool is worth noting. I have a concern about what happens when these things then get aggregated.
