Metrics matter: measuring learning impact in healthcare training

Written by Briony Frost, Learning & Development Specialist on Friday 9th July 2021

Training is mission-critical for medical affairs professionals to maintain their credibility as partners in the healthcare industry and to contribute to improving patient experience and outcomes. But how do we measure impact and effectiveness to check our training programmes work?

Why measure training impact?

Whether you organise, facilitate, or participate in training, you need to know that it’s worth the investment of time, money, and energy. As a learner, you want to see evidence that your commitment is being rewarded: that you are able to do your job better than before, maintain the quality of your services and progress in your profession, support your healthcare practitioner colleagues effectively, and through them improve standards of care and experience for patients. As a facilitator, you want to know that your training programmes are having the desired effect, that your learners can meet the intended outcomes that will enable them to develop and enhance their knowledge, skills, and confidence, and that you’ve provided a positive and productive learning experience. As the person responsible for instigating training for your staff, you want to know that you are maximising the benefit of that training for your team, that the budget and time spent on preparing the programme have paid off, and that you can report back to your stakeholders with measurable progress against learner key performance indicators (KPIs), company critical success factors (CSFs), and strategic priorities, and hopefully secure funds to meet future training needs. The big question is always: how can this be achieved?

What to measure, when, and how?

The above question can be broken down into smaller ones: what do we need to measure in terms of effectiveness and impact? When do we need to take those measures? And how do we measure them? Let’s tackle those questions in order.

What do we measure?

The most familiar model for measuring learning is still the one developed by Kirkpatrick in the 1950s and updated by Jack Phillips in the 1980s, although it is most effective when applied using Wiggins and McTighe’s Understanding by Design approach. The combined model provides five possible levels of evaluation:

  1. Return on expectation (RoE) – has the training met the stakeholders’ expectations?
  2. Participant reaction – have suitable conditions been created for learning to take place?
  3. Measure of learning – did learning actually take place?
  4. Job impact – did the training influence the learners’ on-the-job performance?
  5. Return on investment (RoI) – has the financial payoff, in terms of upskilling staff, improving professional performance, and meeting strategic measures and priorities, justified the expense of creating the training in the first place? (See the sketch below.)
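
Phillips’s stage 5 is usually expressed as net programme benefits divided by programme costs, multiplied by 100. Here is a minimal sketch of that calculation, with entirely hypothetical figures:

```python
def phillips_roi(programme_benefits: float, programme_costs: float) -> float:
    """Phillips RoI (%) = (net programme benefits / programme costs) * 100."""
    net_benefits = programme_benefits - programme_costs
    return (net_benefits / programme_costs) * 100

# Hypothetical figures: a programme costing 40,000 that yields 70,000 in
# monetised benefits (e.g. productivity gains, fewer errors) returns 75%.
print(f"RoI: {phillips_roi(70_000, 40_000):.0f}%")
```

In practice the hard part is not the arithmetic but monetising the benefits credibly, which is why the surrounding stages matter.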

A comprehensive evaluation of training effectiveness involves appraising impact at all five stages. However, it is equally possible to select the measures that are most relevant to your organisation’s priorities and interests. There are a variety of other training evaluation models available too, and it is possible to combine approaches to create a bespoke picture that is best suited to your organisation’s needs.

A word of caution is worth sharing here. While it is often tempting to skip stage 2 (participant reaction) because it does not necessarily correlate with stage 4 (job impact), enabling your learners to feed back on their experience not only provides valuable information for improving future training offerings, but also makes learners feel that their voices and engagement matter, which helps maintain motivation for ongoing professional development. Furthermore, remember that your learners are human: feeling that they belong and that their company cares about them is a key part of sustaining engagement. Learner participation in future training is likely to improve if you take their feedback into account and respond by making changes to enhance the experience – few people wish to sit through tedious or unpleasant training, whether or not it ultimately improves their performance!

When do we measure?

While most evaluation of training effectiveness takes place at the end, it’s vital to build a schedule of evaluation points that starts right at the beginning of programme production. We’ve talked before about the necessity of undertaking a needs analysis before launching into a training programme, and a robust evaluation of all five stages rides on that taking place. Everything that you assess will be done against the expectations of your stakeholders: your organisation’s strategic priorities and CSFs, your team’s KPIs, and your learners’ needs and training gaps. As part of this phase of programme development, you will also identify the objectives of your training, which are aligned to your stakeholders’ expectations and can be tested against end-of-programme evaluations to gauge how well the training met stage 1 (RoE).

There are two subsequent steps early on. One is to create, in response to or as part of your needs analysis, a series of quantitative benchmarking questions that your learners answer upfront and again at the end of training, to measure the distance travelled from their initial levels of knowledge, skills, and confidence. It is well worth integrating scope for qualitative answers here too, as you will obtain a more comprehensive picture of their progress and experience. The other is to construct clear, shared, and measurable learning outcomes from your objectives, which can be tested through specific knowledge and skills assessment points during and at the end of the programme. Confidence measures, which by their nature tend to follow more unpredictable patterns, are best arranged throughout the training to keep track of increases and declines. These can be aligned to the results of your graded assessments to see whether confidence and learning gain rise or fall together, identifying areas for further training development or revisions to the programme itself. These activities will help you to assess whether you have met learner expectations and whether learning gain has been achieved.
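To make the pre/post comparison concrete, here is a minimal sketch of the “distance travelled” calculation, assuming benchmarking answers are captured as 1–5 self-ratings per topic (all topic names and scores are hypothetical):

```python
# Hypothetical 1-5 self-ratings gathered before and after training.
pre  = {"adverse-event reporting": 2, "HCP engagement": 3, "data literacy": 2}
post = {"adverse-event reporting": 4, "HCP engagement": 4, "data literacy": 3}

# Distance travelled per topic: the post-training rating minus the baseline.
for topic in pre:
    gain = post[topic] - pre[topic]
    print(f"{topic}: {pre[topic]} -> {post[topic]} (distance travelled: {gain:+d})")
```

In a real programme the same questions would feed a survey platform rather than a script, but the principle is identical: the same questions, asked twice, differenced.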

Job impact and return on investment are measures of training impact that extend beyond the end point of the training programme. The former involves measuring behavioural change, which can take place through activities such as:

  • Additional follow-up surveys to reassess the original measures of learning gain and expectations
  • Self-reflections as part of professional development activities
  • Workplace and peer observations of job performance, with the relevant training programme taken into account
  • Reports and insights gathered from external colleagues, such as healthcare professionals (HCPs)

How do we measure?

Concrete quantitative measures are not always the most reliable way to gauge learning gain and impact: learning rarely takes place in a linear fashion, its impact may be delayed until it is directly called upon, and cultural and behavioural change must be tracked over time to offer reliable insights. Using a mixed-methods approach (combining quantitative and qualitative data) provides a broader and deeper analysis of your programme’s impact, enabling you to feed back more effectively to your stakeholders in response to their specific interests across the five possible areas of evaluation.
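As a simple illustration of what “mixed methods” can mean in practice, the sketch below pairs each learner’s quantitative gain with a theme coded from their free-text feedback (all learners, gains, and themes are hypothetical):

```python
# Hypothetical mixed-methods records: a quantitative gain per learner plus
# a theme coded from their qualitative feedback.
responses = [
    {"learner": "A", "gain": 2, "theme": "more confident with HCPs"},
    {"learner": "B", "gain": 0, "theme": "needs on-the-job practice"},
    {"learner": "C", "gain": 1, "theme": "more confident with HCPs"},
]

# The quantitative summary...
avg_gain = sum(r["gain"] for r in responses) / len(responses)

# ...alongside the qualitative picture: how often each theme recurs.
theme_counts = {}
for r in responses:
    theme_counts[r["theme"]] = theme_counts.get(r["theme"], 0) + 1

print(f"Average learning gain: {avg_gain:.1f}")
print("Recurring themes:", theme_counts)
```

The numbers tell you how much changed; the themes suggest why, and where a delayed or non-linear impact might be hiding.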

Whether you decide on a comprehensive appraisal of your training programme or select a handful of measures to tease out priority data, there are various levels of information that can be gleaned. Audience is crucial here, so think about who you will need to feed back to and what they will want to know, as the data gathered can often be cut and presented in a number of different ways to give your stakeholders the insights that are most valuable to them. Learning providers will often have digital tools and specialists that they can draw upon to help you gather, process, and display the information for different audiences. Learning management systems and survey platforms may also have built-in analytics that you can use to speed up the process. The key to obtaining usable data lies not only in the final breakdown but in the timing of data gathering and in the data sought in the first place, so get a survey specialist on board if you can!
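“Cutting the data a number of different ways” can be as simple as regrouping one set of evaluation records per audience. A hypothetical sketch, assuming each record carries a team, an evaluation level, and a normalised score:

```python
from collections import defaultdict

# Hypothetical evaluation records from a single training programme.
records = [
    {"team": "Medical Affairs", "level": "RoE",      "score": 0.8},
    {"team": "Medical Affairs", "level": "Learning", "score": 0.7},
    {"team": "Field",           "level": "RoE",      "score": 0.6},
    {"team": "Field",           "level": "Learning", "score": 0.9},
]

def average_by(records, key):
    """Average the scores, grouped by the given key."""
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record["score"])
    return {group: sum(scores) / len(scores) for group, scores in groups.items()}

# The same data, cut for two different audiences.
print("By team (for team leads):     ", average_by(records, "team"))
print("By level (for budget holders):", average_by(records, "level"))
```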

In terms of data presentation, you will often be guided by existing approaches within your organisation, and it can be advantageous to check, as part of your stakeholder questionnaires, how each group would like to receive their data and the level of information they require. Don’t forget your learners either. For your team, a good learning dashboard that enables them to track their progress and gains against the original learning outcomes, ideally situated within the context of their wider learning journey and professional development goals, can be advantageous for sustaining motivation to train. Seeking and responding to learner feedback on their training experience and learning is also a great way to keep your learners on board for the future.

Can we help?

OPEN Health’s L&D team brings together a wealth of diverse staff with unique skill sets and healthcare training and communications experience. We take pride in providing good-quality measures of training impact and effectiveness that take into account all your stakeholders’ needs. If you would like to hear more about how we can help you track the RoE, RoI, and more of your training programme, please get in touch.