Written by Briony Frost, Learning & Development Specialist, and Dominika Bijos, Senior Learning Designer, on Saturday 7th August 2021
Evaluation is crucial and the payoffs are high: solid data encourages stakeholders to invest in future training and helps ensure that training meets everyone’s needs and expectations. Yet evaluation is often overlooked and undervalued, dismissed as qualitative, subjective, and difficult to get people to engage with, and rarely do enough time and resource go into its planning. In this final article of our series, we look at the Evaluation phase of the instructional systems design process, ADDIE, and how to digitally innovate for maximum payoff from your training program. We discuss how to plan and execute meaningful training evaluation, including what and when to measure and, most importantly, what we can learn from it.
Measure the impact of training so everyone knows it’s worth it
Whether you organize, facilitate, or participate in training, you need to know that it will be worth the investment of time, money, and energy.
The big question is: how can this be achieved? To create an evaluation that answers the questions stakeholders might have, we need to know what to measure to show effectiveness and impact, when to take those measures, and how to measure them. Let’s tackle those points in order.
What do we measure when evaluating training?
To measure whether the learning experiences were meaningful, evaluation needs to span the entire training program, gathering several types of information that fall under five key measures (stages), best summarized using the Kirkpatrick/Phillips model of evaluation:
1) Return on expectation (RoE) – has training met the stakeholders’ expectations?
2) Participant reaction – have suitable conditions been created for learning to take place?
3) Measure of learning – did learning actually take place?
4) Job impact – has on-the-job performance been improved by the learning?
5) Return on investment (RoI) – has the value delivered (upskilled staff, improved job performance, progress against strategic priorities, and so on) justified the cost of investing in the training? (See the sketch after this list for the underlying arithmetic.)
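To make measure 5 concrete, here is a minimal sketch of the arithmetic commonly used in the Phillips methodology: RoI is the net program benefit divided by the program cost, expressed as a percentage. All figures below are invented purely for illustration.

```python
# Illustrative only: Phillips-style RoI arithmetic with invented figures.
program_costs = 40_000      # hypothetical: design, delivery, and learner time
program_benefits = 55_000   # hypothetical: monetized gains, e.g. fewer errors

# RoI (%) = net program benefits / program costs * 100
net_benefits = program_benefits - program_costs
roi_percent = net_benefits / program_costs * 100

print(f"RoI: {roi_percent:.1f}%")   # -> RoI: 37.5%
```

The hard part in practice is not this division but monetizing the benefits credibly, which is why RoI is planned for from the needs analysis onwards rather than bolted on at the end.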
Assessments and evaluations combined are a true measure of the program
We can gather data for any of the five measures using either assessments, such as knowledge checks, multiple-choice tests, or scenario-based tests, or evaluations, such as seeking feedback on the quality and impact of the program. Designing evaluations and assessments to answer the questions behind the five key measures, and positioning them at different stages of the training program, is how we gather meaningful information. Let’s consider how this can look in practice (Figure 1).
Let’s say we start with a program-level evaluation survey. It addresses return on expectation (RoE), the measure of learning, and job impact, and contributes information we can use to calculate return on investment (RoI). In this survey, learners are asked to rate both their knowledge and confidence before and after each module (or the program as a whole) to track perceived learning gain throughout the program.
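As a minimal sketch of how such pre/post ratings might be turned into a perceived learning gain per module, the snippet below uses invented data, scales, and field names; a real program would draw these from your survey tool.

```python
# Hypothetical pre/post self-ratings on a 1-5 scale, aggregated per module.
ratings = [
    # (module, pre_knowledge, post_knowledge, pre_confidence, post_confidence)
    ("Module 1", 2, 4, 2, 4),
    ("Module 1", 3, 4, 2, 5),
    ("Module 2", 1, 3, 1, 3),
]

def mean(values):
    return sum(values) / len(values)

for module in sorted({r[0] for r in ratings}):
    rows = [r for r in ratings if r[0] == module]
    knowledge_gain = mean([r[2] - r[1] for r in rows])
    confidence_gain = mean([r[4] - r[3] for r in rows])
    print(f"{module}: knowledge gain {knowledge_gain:+.1f}, "
          f"confidence gain {confidence_gain:+.1f}")
```

Tracking knowledge and confidence separately matters because the two can diverge, a point we return to below.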
Next, we might include a learner feedback survey, which specifically targets learners’ reactions to the program. This separate set of questions asks learners to reflect on their experience of participating: whether they found the program effective, engaging, enjoyable, too short or too long, or too detailed, and where it could be improved.
Finally, module and program assessment questions are a direct measure of learning and contribute to our understanding of job impact and RoI. The same assessment can be repeated after a specified period of time to see how much knowledge has been retained. The results can also be checked against what learners thought they could do when answering the pre-program evaluation survey, to see whether they self-evaluated accurately and whether they can now meet the standard of learning expected of them.
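That comparison can be sketched as follows; the learner data and scales are hypothetical, and a real evaluation would pull scores from your LMS or assessment platform.

```python
# Hypothetical sketch: compare each learner's pre-program self-rating
# (scaled to 0-100) with their end-of-program and follow-up assessment
# scores, to gauge self-evaluation accuracy and knowledge retention.
learners = {
    # name: (self_rating_pct, final_score_pct, follow_up_score_pct)
    "A": (80, 65, 60),
    "B": (40, 75, 70),
}

for name, (self_rating, final, follow_up) in learners.items():
    accuracy_gap = final - self_rating    # positive = learner underestimated themselves
    retention = follow_up / final * 100   # % of final score retained at follow-up
    print(f"Learner {name}: self-evaluation gap {accuracy_gap:+d} points, "
          f"retention {retention:.0f}%")
```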
Assessment questions are usually incorporated into training because compulsory training within pharma requires a ‘pass’ level. Often, however, the opportunity to combine assessment with training evaluation is missed, and with it insights into mismatched expectations among stakeholders or misalignment with the original needs of the learners.
Choose key measures, but listen to your learners
A comprehensive evaluation of training effectiveness involves appraising impact at all five stages. However, it is equally possible to select the measures that are most relevant to your organization’s priorities and interests. There are a variety of other training evaluation models available too, and it is possible to combine approaches to create a bespoke picture that is best suited to your organization’s needs.
A word of caution is worth sharing here: although it is often tempting to skip measuring participants’ reactions, since reaction does not necessarily correlate with job impact, enabling your learners to feed back on their experience provides valuable information for improving future training offerings. It also makes learners feel that their voices and engagement in the training matter, which can help maintain motivation for ongoing professional development.
Furthermore, to sustain engagement in the post-COVID hybrid learning reality, we need to remember that learners are human: they need to feel that they belong and that their company cares about them. It is crucial to resolve the technical and practical issues that arise in the first few iterations of hybrid events, to set good standards for how these events run; remote workers, for example, may fear being disadvantaged by not attending in person. Learner participation in future training is likely to improve if you consider their feedback and respond by making changes that enhance the experience. Few people wish to sit through tedious or unpleasant training, face-to-face or virtually, whether or not it ultimately improves their performance!
When? Plan evaluation from the start of your training program
Although Evaluation is the last step in the ADDIE framework, it is something we plan for during the analysis and design phases, continue throughout the development and implementation phases, and extend past implementation and completion of training. This integration of evaluation throughout the entire training program allows us to gather several types of information that fall under the five key measures we’ve already discussed.
Firstly, during the needs analysis, start planning for evaluation of RoE. Create quantitative benchmarking questions that your learners answer upfront and again at the end of the training, to measure the distance travelled from their initial levels of knowledge, skills, and confidence (as in the pre/post sketch shown earlier). Including qualitative questions here will give a more comprehensive picture of learners’ progress and experience.
Secondly, during the design stage, construct clear, shared, and measurable learning outcomes from your objectives, which can be tested through specific knowledge and skills assessment points during and at the end of the program.
During development of the materials, include confidence measures throughout the training to track increases and declines. Confidence can be aligned with the results of graded assessments to see whether confidence and learning gain rise or fall together (a small sketch of this check follows), identifying areas for further training development or revisions to the program itself. These activities will help you to assess whether you have met learner expectations and whether learning gain has been achieved.
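One simple way to check this alignment, sketched below with invented data, is to correlate self-reported confidence with graded scores at the same checkpoints. This uses statistics.correlation, available from Python 3.10.

```python
# Sketch (invented data): do self-reported confidence and graded assessment
# scores rise and fall together? Pearson correlation gives a quick signal.
from statistics import correlation  # requires Python 3.10+

confidence = [2, 3, 3, 4, 5, 4]         # 1-5 self-ratings per checkpoint
scores = [55, 60, 58, 72, 85, 70]       # graded assessment %, same checkpoints

r = correlation(confidence, scores)
print(f"confidence/score correlation: r = {r:.2f}")
# r near +1 suggests confidence tracks learning gain; a low or negative r
# flags areas where learners may be over- or under-confident.
```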
Finally, job impact and RoI are measures that extend beyond the end point of the training program. Job impact is reflected in behavioral change, which can be assessed through follow-up activities once learners are back on the job.
How do we measure? Tracking the impact requires design and tools
To be able to use the data to gain insights and present them to relevant stakeholders, you need both deliberate evaluation design and the right tools to capture, track, and report your measures.
The “so what?” of your evaluation effort
All of this is ultimately designed to answer one question: is the training worth the investment of time, money, and energy?
Evaluation done right can show you and your stakeholders that the training effort has, or has not, paid off, all supported by data. Before you plan your next training endeavor, consider investing in evaluation. It’s worth it.
OPEN Health’s L&D team brings together diverse staff with unique skill sets and a wealth of healthcare training and communications experience. We take pride in providing high-quality measures of training impact and effectiveness that take all your stakeholders’ needs into account. If you would like to hear more about how we can help you track the RoE, RoI, and more of your training program, please get in touch.
Please contact Jess Ingram, SVP Learning & Development