Evaluation is crucial and the payoffs are high: solid data encourages stakeholders to invest in future training and ensures that training meets everyone's needs and expectations. Yet evaluation is often overlooked and undervalued, dismissed as qualitative, subjective, and difficult to engage people with, and rarely do enough time and resources go into its planning. In this final article of our series, we look at the Evaluation phase of the instructional design systems process, ADDIE, and how to digitally innovate for maximum payoff from your training program. We will discuss how to plan and execute meaningful training evaluation, including what and when to measure, and most importantly, what we can learn from it.
Measure the impact of training so everyone knows it’s worth it
Whether you organize, facilitate, or participate in training, you need to know that it will be worth the investment of time, money, and energy:
- As a learner, you want to see evidence that your commitment is being rewarded: that you can do your job better than before, maintain the quality of your services, progress in your profession, support your healthcare practitioner colleagues effectively, and, through them, improve standards of care and experience for patients
- As a facilitator, you want to know that your training programs are having the desired effect, that your learners can meet the intended outcomes that will enable them to develop and enhance their knowledge, skills, and confidence, and that you’ve provided a positive and productive learning experience
- As a person responsible for instigating training for your staff, you want to know that you are maximizing the benefit of that training for your team, that the budget and time spent on getting the program ready have paid off, and that you can report back to investors and learners with measurable progress against learner key performance indicators, company critical success factors, and strategic priorities, and hopefully secure funds to meet future training needs
The big question is, how can this be achieved? To create an evaluation that answers the questions stakeholders might have, we need to know what to measure to show effectiveness and impact, when to take those measures, and how to measure them. Let's tackle those points in order.
What do we measure when evaluating training?
To measure whether the learning experiences were meaningful, evaluation needs to span the entire training program, gathering several types of information that fall under five key measures (stages), best summarized using the Kirkpatrick/Phillips model of evaluation:
1) Return on expectation (RoE) – has training met the stakeholders’ expectations?
2) Participant reaction – have suitable conditions been created for learning to take place?
3) Measure of learning – did learning actually take place?
4) Job impact – has on-the-job performance been improved by the learning?
5) Return on investment (RoI) – has the upskilling of staff, improved job performance, strategic priorities, etc., justified the costs of investment in the training?
Assessments and evaluations combined are a true measure of the program
We can gather data for any of the five measures using either assessments, such as knowledge checks, multiple-choice questions, or scenario-based tests, or evaluations, such as seeking feedback on the quality and impact of the program. Designing evaluations and assessments to answer the questions behind the five key measures, and positioning them at different stages of the training program, is a way to gather meaningful information. Let's consider how this can look in practice (Figure 1).