A new style of evaluating training programs: the Learning-Transfer Evaluation Model
Written by Tamara Mesman | May 08, 2018
How do you evaluate a training program? Kirkpatrick's model has been used frequently since the 1960s. However, this model is outdated, states Dr. Will Thalheimer: it results in a checklist approach to evaluation and thereby misses the true purpose of evaluating. For the modern world he designed the Learning-Transfer Evaluation Model (LTEM), which holds that training has only been successful when the participant applies the learned material in their behavior. Thalheimer identifies eight levels at which organizations evaluate their training programs, half of which are inadequate. Discover at what level you evaluate your training!
1. Attendance
The trainee registers for, starts and completes the training. An evaluation like this is useless, Thalheimer argues, because attendance does not mean that the trainee has actually learned anything. Then again, if a trainee has been absent, one can be sure that the training was not successful.
2. Activity
The trainee takes part in the training activities, which can be measured at three sublevels: attention, interest and participation. However, all three are insufficient as a level of evaluation: trainees can pay attention, show interest and participate, but that still doesn’t mean that they have actually learned anything.
3. Learner perceptions
Many organizations do not get beyond level 3. The trainees themselves are questioned about the training, for example about satisfaction: what did they think of it? Definitely not sufficient to validate learning success, states Thalheimer. Questions that measure understanding and motivation to apply the material can say something about the results, but they are still only proxies. Exercises that are experienced as realistic can help the trainee remember the learned material for longer, but it is even better to measure the remembered material itself.
4. Knowledge
Level 4 focuses on measuring to what extent the trainee has learned facts and concepts, a popular method of evaluation in e-learning. Whether these questions are answered directly after learning or a few days later, remembering the terminology does not mean that the participant performs better, Thalheimer argues.
5. Decision-making competence
Is the participant able to make the right decisions in relevant, realistic scenarios? Measuring this during or right after the learning process does help, but the trainee can still forget the underlying knowledge or skill. So measure it at least a few days later, Thalheimer says; then you really know whether someone is able to make decisions based on what they learned.
6. Task competence
Performing a task combines decision-making with an action by the trainee. If they do this during or right after the training, it is a reliable measure, but you don’t yet know whether it is a lasting skill. So put a few days in between, Thalheimer advises. What does this mean for training? Repetition exercises help! For example, make sure that all trainees can apply the learned material again in realistic exercises a week later.
7. Transfer
According to Thalheimer, it is better to determine with objective measures whether the participant applies the learned material to accomplish tasks successfully, also known as transfer. This applies both to ‘supported transfer’, when the trainee needs help to apply the learned material, and to full transfer. In the first case a manager, for example, urges the employees to apply the material; in the second case the employee shows the new behavior on their own initiative. This level is measured at the workplace: does the trainee eventually perform the trained tasks better?
8. Effects of the transfer
The last level goes beyond the task itself: what is the effect of the transfer on a) participants, b) colleagues, family and friends, c) the organization, d) the community, e) society and f) the environment? Assessing the causal positive and negative impact of the transfer requires a rather rigorous method, Thalheimer notes. Is productivity actually higher after the learning process? Are the learned skills really positive for society, or do they backfire, for instance when managers learn to negotiate so well that their staff ends up being paid less? To what extent can an increased NPS (Net Promoter Score) be attributed to more customer-focused employees? Finding an answer to these questions is not quite as simple as A-B-C.
The model is quite detailed, which enables a more specific evaluation than Kirkpatrick’s model. Thalheimer has also written a report on the model. It is clear that level 7 is a challenge for many organizations, let alone level 8. TrainTool evaluates at level 6 with its clients, but not always yet at level 7. It is nice when a manager sees a difference in the workplace, but that is not an objective measure for evaluation. In some projects, the influence of training on organizational metrics like turnover or customer satisfaction is also measured, which already touches on level 8. At which level do you evaluate your training?
Download the case study and discover why an international organization chose e-training!