This blog post is about evaluating training – particularly e-Learning. It may cover some familiar ground for old hands, but should be of interest to those with less experience in the field.

The article is particularly pertinent to those who have a library (aka catalogue) of e-Learning courses – either acquired or inherited (you came into the role and they were already there). In such a situation it would be of great value, both commercially and professionally, to evaluate the effectiveness of the courses within that library.

While it has been contested over the last couple of decades, I will be using Kirkpatrick’s evaluation model as the reference point for this article. Why use a contested model? Well, all evaluation models are contested – models across most industries tend to be – and yet they still prove useful.

For the benefit of a recap, the model is shown below:

| Level | Description |
| --- | --- |
| 1. Reaction | Degree to which learners have reacted favourably to the training |
| 2. Learning | Degree to which learners have acquired the intended knowledge, skills and attitudes from the training |
| 3. Behaviour | Degree to which learners are applying what they’ve learnt on the job (i.e. ‘learning transfer’) |
| 4. Results | Degree to which intended business outcomes are being met as a result of the training |

As a learning and development (L&D) practitioner, the notion of evaluating the business benefit of training generates either excitement or pain. Ever since the late Donald Kirkpatrick devised his four levels of training evaluation – and subsequent models and enhancements popped up – we’ve been trying very hard to move beyond happy sheets (i.e. end of course feedback and tests) to something more credible – confirmation that our learning initiatives are delivering meaningful value to the business.

Not all learning initiatives are mired in this issue; many are signed off, funded and conducted in a way that allows an L&D function to directly measure the real impact in the workplace, and the value gained from improvements in behaviour and competence. These initiatives tend to respond to a very clear training need derived from a significant business outcome – one that, left unchecked, is having (or would have) an effect on business performance. They are also generally those where evaluation parameters were defined up front.

But what about those of us without the luxury of a specific business goal to align to when measuring the effectiveness of the e-Learning we provide, usually as part of a library? How do we evaluate the extent to which such training improves staff performance and delivers a tangible benefit to the business when it targets broader, less specific staff development needs?

For simplicity we’ll look at e-Learning courses that may also be termed ‘packages’ or ‘modules’, and consider how we can evaluate their effectiveness. These are short courses with no live online or blended component: a learner works through the course, usually by themselves, with all interactivity and features contained within the course itself.

Many organisations will invest in e-Learning for a number of reasons. Whatever those reasons, the basic aspects of measurement and evaluation could be:

- Cost reduction per capita of the e-Learning compared to face-to-face training, factoring in all associated overheads
- Attendance, particularly self-enrolment data
- End of course feedback
- Completion and test scores

If we look at each of these, what could they tell us?

Cost reduction per capita: cost per learner of the e-Learning vs. the face-to-face equivalent, when all costs are accounted for. The difference will likely be significant and would demonstrate good financial stewardship. This is of particular value where a library of e-Learning courses is concerned.
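To make that concrete, here is a minimal sketch of the calculation in Python. All of the cost figures, overhead categories and learner numbers below are hypothetical placeholders, to be replaced with your organisation’s own data.

```python
# A minimal sketch of a per-capita cost comparison between an e-Learning
# library and equivalent face-to-face delivery. All figures below are
# hypothetical placeholders.

def cost_per_capita(total_cost: float, learners: int) -> float:
    """Total cost of delivery divided by the number of learners reached."""
    return total_cost / learners

# e-Learning overheads: licence fees, LMS hosting, admin time (hypothetical)
elearning_total = 12_000 + 3_000 + 1_500
# Face-to-face overheads: trainer fees, venue hire, travel, time off the job
face_to_face_total = 20_000 + 5_000 + 8_000 + 15_000

learners = 400  # staff reached by each route (hypothetical)

elearning_cpc = cost_per_capita(elearning_total, learners)
f2f_cpc = cost_per_capita(face_to_face_total, learners)

print(f"e-Learning cost per learner:   £{elearning_cpc:,.2f}")
print(f"Face-to-face cost per learner: £{f2f_cpc:,.2f}")
print(f"Saving per learner:            £{f2f_cpc - elearning_cpc:,.2f}")
```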

Attendance: not the most impactful metric, but if staff are not automatically enrolled on the modules in your e-Learning library, self-enrolment data can help you identify trends, additional learning needs and which courses need to be promoted internally.
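As an illustration, here is a minimal sketch of how you might surface self-enrolment trends from an LMS export. The record format and course titles are hypothetical; most LMSs will let you export something similar.

```python
# A minimal sketch: counting self-enrolments per course from a hypothetical
# LMS export, to see which library courses staff are choosing for themselves.
from collections import Counter

# Hypothetical export rows: (course title, enrolment type)
enrolments = [
    ("Time Management", "self"),
    ("Excel Essentials", "self"),
    ("Excel Essentials", "self"),
    ("Data Protection", "auto"),      # auto-enrolled, excluded below
    ("Presentation Skills", "self"),
]

self_enrolments = Counter(course for course, kind in enrolments if kind == "self")

# Most popular self-chosen courses first - a rough signal of learner demand.
for course, count in self_enrolments.most_common():
    print(f"{course}: {count}")
```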

End of course feedback: this falls into level 1 (Reaction) of Kirkpatrick’s model. While it is generally true that ‘happy sheets’ are of more value to the L&D team than to the wider business, there are some useful things you can do with your feedback questions. The most logical is to frame your questions around the outcomes for level 3 (Behaviour). Although you may not yet be able to verify whether staff have applied their learning, you can certainly ask them how they intend to apply it and in what sort of timeframe.

Completion and test scores: these broadly fall under level 2 (Learning). The obvious way to assess learning is to test against the content being delivered, and with e-Learning courses a pass is usually tied to successful course completion as well. Again, this is probably of more value to the L&D team than to the business (with the exception of compliance training), because the business is primarily interested in how learning is being applied in the workplace to meet a particular business need.
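And a minimal sketch of the kind of level 2 summary you might produce from completion and score records (again, the record format here is hypothetical):

```python
# A minimal sketch: completion rate and average test score for one course,
# computed from hypothetical learner records.
records = [
    {"learner": "A", "completed": True,  "score": 85},
    {"learner": "B", "completed": True,  "score": 72},
    {"learner": "C", "completed": False, "score": None},  # not yet finished
    {"learner": "D", "completed": True,  "score": 91},
]

completed = [r for r in records if r["completed"]]
completion_rate = len(completed) / len(records)
average_score = sum(r["score"] for r in completed) / len(completed)

print(f"Completion rate: {completion_rate:.0%}")           # e.g. 75%
print(f"Average score (completers): {average_score:.1f}")  # e.g. 82.7
```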

So where does this leave us? Well, we can do some cost justification on our library by comparing it to equivalent face-to-face training. We can also measure the success of our training against levels 1 and 2 of Kirkpatrick’s model, and sow some of the seeds for getting to level 3.

So nothing earth-shattering or totally awesome yet; but keep an eye on the Filtered blog for the next post in the series, where we’ll explore some ways to get level 3 and possibly level 4 data. We’ll also look at some useful maths and statistics techniques that can help us with this. And we’ll follow up with some conversations with L&D professionals about their experiences and successes with e-Learning evaluation.

**Filtered are exhibiting at the World of Learning Conference and Exhibition on 30th September and 1st October 2014 at Birmingham’s NEC. Visit them at stand C50!**

About the author: Nick Fernando is Head of Content at Filtered, an online learning provider of business training to companies of all sizes. Each course begins with a diagnostic test that assesses what the individual needs to know, and then tailors the course to their training requirements.

Nick has more than 13 years of experience in learning and technology, having previously worked in private, public, further and higher education, not-for-profit and commercial organisations. He is inspired by the diverse capabilities and potential of people, and his continual goal is to use technology and educational practice to support and enhance the learning experience and improve performance.

To find out more, visit Filtered.com, email [email protected], find us on Facebook, follow @FilteredCourses or connect with us on LinkedIn.