How to measure training, so it doesn’t get cut


Learning and development programs are too often the first thing to go when a budget is cut. Part of the reason is that training outcomes are poorly measured – so HR can’t show how training affects the bottom line.

With a new financial year on the horizon, many senior leaders charged with managing the bottom line come into conflict with those charged with learning and development. High turnover, reduced sales, and downward pricing pressure are all put forward as reasons to slash the training budget.

Learning and development is an easy target because evaluating the returns is difficult. A recent study by Deloitte & Touche found that companies that “agreed” or “strongly agreed” that they reduce training investments during budget cuts had a market-to-book ratio (which compares a company’s share price to its tangible assets) 13 per cent lower than those that did not.

Training evaluation thus presents an interesting managerial dilemma. Our gut feeling and the data tell us that training helps organisations retain skilled, productive, and committed employees. However, the lack of good, easy-to-use evaluation tools often drives us to abandon or postpone training and development activities when times are tough.

Current training evaluation systems employed by many organisations are mostly based on Kirkpatrick’s four-level training evaluation model, which uses reaction, learning, behaviour, and results as evaluation criteria. However, a study by the American Society for Training and Development found that while 95 per cent of organisations conduct a post-event survey (reaction), a mere 37 per cent attempted to verify learning among employees (learning), 13 per cent followed through to ensure that people correctly deployed their learnt skills (behaviour), and only 3 per cent were able to demonstrate a positive result (results).

This highlights an underlying problem: training evaluation is too often narrowly concerned with ‘Did you like the training?’.

Instead, more critical questions need to be asked and answered: ‘Where, to what extent, why and how has training worked?’. To answer these questions, HR managers must become liaisons, delegators, facilitators, and monitors of training evaluation, and the go-to people for all managers in an organisation.

Liaising

How does this work? Imagine a scenario where the organisation is evaluating the outcomes of a formal training course on conflict management.

First, HR managers liaise with functional, strategy, and/or executive management to align training evaluation to organisational changes and goals. Training evaluation must be based on real goals, identifying “soft” and “hard” changes and at which organisational levels those changes are intended.

Soft conflict evaluation changes can be participant-related. For example, they could include KPIs requiring line managers to learn more about their own communication and management styles, or about techniques and processes to control and reduce conflict. ‘Hard’ conflict-related KPIs, by contrast, include reductions in:

  • conflict meetings;
  • conflict emails;
  • sick days;
  • absenteeism;
  • turnover;
  • and error rates;

as well as improvements in retention rates and resource outputs at different organisational levels.

This lets HR managers identify where training has worked. In other words, clearly defined, planned, and committed changes set out by the organisation result in accurate and useful training evaluation.
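To make these ‘hard’ KPIs concrete, here is a minimal, illustrative Python sketch of how pre- and post-training figures could be compared. The KPI names and numbers are hypothetical placeholders, not data from the article, and any real implementation would depend on how the organisation records these metrics.

```python
# Illustrative sketch only: comparing hypothetical 'hard' conflict KPIs
# measured before and after a conflict-management course.

BASELINE = {  # measured before the training
    "conflict_meetings_per_month": 12,
    "conflict_emails_per_month": 85,
    "sick_days_per_quarter": 40,
    "error_rate_pct": 4.2,
}

POST_TRAINING = {  # measured after an agreed follow-up period
    "conflict_meetings_per_month": 7,
    "conflict_emails_per_month": 60,
    "sick_days_per_quarter": 33,
    "error_rate_pct": 3.1,
}

def kpi_changes(before: dict, after: dict) -> dict:
    """Return the percentage change for each KPI (negative = reduction)."""
    return {
        name: round(100 * (after[name] - before[name]) / before[name], 1)
        for name in before
    }

for kpi, change in kpi_changes(BASELINE, POST_TRAINING).items():
    print(f"{kpi}: {change:+.1f}%")
```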

Delegating

Second, HR managers are critical delegators who consider the range of audiences for training evaluation and ensure that the relevant people in the organisation get access to training results. HR managers should always aim to include a broad range of audiences, as this allows evaluation from different perspectives and against different interests.

In our conflict training scenario, HR delegates who sits at the evaluation table. Relevant audiences for conflict training include HR managers, the participants themselves, line managers, department heads, executive leadership and legal witnesses (if requested). This creates a feedback-rich, open, and transparent understanding of the outcomes and of the extent to which training has worked.

Put another way, a broad audience for training evaluation creates a better understanding of training needs, measures, and outcomes.

Facilitating

HR managers are also critical facilitators of training evaluation. Evaluation interests differ across organisational levels, and HR must choose suitable evaluation criteria to identify training returns at each level against those interests. Possible evaluation criteria include training module evaluations, knowledge acquisition, and time- and resource-based criteria.

In our conflict scenario, HR can capture processes and routines related to conflict by identifying possible reductions in conflict meetings and conflict-related emails among staff. Furthermore, both time- and resource-related factors can be measured, including production targets, recall rates and changes in customer satisfaction. By ensuring training evaluation criteria are suitable, comparable, and aligned to each level, HR learns how training has worked.
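One way to keep criteria comparable across levels is simply to record, for each level, which criteria were agreed and what change was committed to. The sketch below is a hypothetical illustration – the levels, criteria and targets are assumptions for the example, not prescriptions from the article.

```python
# Illustrative sketch only: a per-level evaluation plan for a hypothetical
# conflict-management course, recorded so every audience can see which
# criteria apply at their level.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str            # e.g. "conflict meetings per month"
    kind: str            # "soft" or "hard"
    planned_change: str  # the planned, committed change at this level

EVALUATION_PLAN = {
    "participant": [
        Criterion("self-rated awareness of own communication style", "soft", "improve"),
        Criterion("module knowledge-check score", "soft", "increase"),
    ],
    "team": [
        Criterion("conflict meetings per month", "hard", "reduce"),
        Criterion("conflict-related emails per month", "hard", "reduce"),
    ],
    "department": [
        Criterion("error rate", "hard", "reduce"),
        Criterion("customer satisfaction score", "hard", "increase"),
    ],
}

for level, criteria in EVALUATION_PLAN.items():
    print(level)
    for c in criteria:
        print(f"  [{c.kind}] {c.name} -> {c.planned_change}")
```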

In summary, training evaluation criteria at each organisational level must be relevant and aligned to planned and committed changes on that level.

Monitoring

With this in mind, HR managers become the monitors of training evaluation, capturing financial and non-financial returns at different organisational levels as and when they occur. Training evaluation therefore needs to be non-linear and flexible, yet the evaluation processes must be methodical and the criteria standardised to collect accurate data for comparison.

In our conflict scenario, HR can monitor any identified changes to KPIs as they occur. For example, identifying a reduction in error rates, conflict emails, and conflict meetings allows HR to determine whether employees applied better communication skills and conflict management strategies, turning dysfunctional conflict into constructive discussion. This lets HR managers identify why training has worked.
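As an illustration of this kind of monitoring, the sketch below logs a single hypothetical ‘hard’ KPI – conflict-related emails per month – as the figures come in, and checks whether the planned reduction is holding after the training date. The data, dates and threshold are invented for the example.

```python
# Illustrative sketch only: monitoring a hypothetical KPI over time and
# flagging whether the committed reduction has been achieved.

from statistics import mean

# Monthly counts of conflict-related emails, logged as they occur.
observations = {
    "2024-01": 85, "2024-02": 82, "2024-03": 88,   # before training
    "2024-04": 70, "2024-05": 61, "2024-06": 58,   # after training
}

def sustained_reduction(series: dict, cutoff: str, threshold_pct: float = 10.0) -> bool:
    """True if the post-cutoff average is at least threshold_pct below the pre-cutoff average."""
    before = [v for month, v in series.items() if month < cutoff]
    after = [v for month, v in series.items() if month >= cutoff]
    drop = 100 * (mean(before) - mean(after)) / mean(before)
    return drop >= threshold_pct

print(sustained_reduction(observations, cutoff="2024-04"))  # True for this sample data
```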

In other words, linking ‘hard’ training evaluation outcomes to ‘soft’ evaluation factors creates a complete evaluation and determines the ‘real value’ of training.

Sten Langmann is a lecturer at Edith Cowan University’s School of Business and Law, and Stefan Thomas is a lecturer at Curtin University.


Make the most of your training budget and save $200 with AHRI’s EOFY offer for corporate in-house training and toolkits if you book by 30 June. Terms and conditions apply.
