How to measure training effectiveness
Divergent opinions about the effectiveness of federal training call for performance measurement, Steve Kelman writes.
Training, always an appealing target for cuts in tight budget times, has come under a further cloud in the wake of the government's conference hysteria, since there is sometimes an overlap between the two. Many believe that training is a crucial part of obtaining good employee performance, but others argue that a lot of the training provided by government agencies, or by vendors under contract to the government, is rote and unengaging.
This disagreement cries out for performance measurement, both to develop information about which training providers are doing a better or worse job and to provide a series of natural experiments that could improve training offerings. If some ways of training on similar topics produce better results than others, we need to learn what distinguishes success from failure so we can spread good practice.
Having said that, we haven't gotten very far in developing performance measures for training programs, beyond the student satisfaction ratings many of these courses use. (Those are not useless, but they are hardly dispositive.) We have talked about this somewhat at the Kennedy School with regard to our executive education programs. We have toyed with the idea of conducting before-and-after interviews with the direct supervisors of a sample of our participants, to see whether, and how, their job performance improves after exposure to our programs. I am embarrassed to say, however, that this effort has so far stayed at the discussion stage, and we are still limiting ourselves to the classic approach of asking students to rate the professors and the program.
Because of this, some efforts under way at the always magnificent Veterans Administration Acquisition Academy are noteworthy. (Full disclosure: I am on the advisory board of the Academy, an unpaid position, and learned about this effort at an advisory board meeting.) The Academy has introduced a low-cost way of getting feedback from the supervisors of participants in its training course for VA program managers. In addition to surveying participants about improvements in knowledge and knowledge utilization (results that are almost certainly biased well upward), it has conducted a simple survey of supervisors with questions such as whether there were "positive and noticeable changes in their staff members' project management behavior" and whether the supervisor believed that "cost, schedule, and/or performance improved as a result of training." While these responses are also probably subject to exaggeration (especially since a supervisor might feel that a program that was a bust would reflect badly on them for sending the participant there), they are likely more accurate than participant self-reports, and the specific questions do address outcomes of the training.
Incidentally, for what it's worth, on average 71 percent of supervisors reported that participant behavior improved, and 74 percent said cost, schedule and/or performance had improved. Even if those figures are double the real numbers, getting improvements from a third of training participants isn't bad. In addition, I hope that over time the Acquisition Academy will build a large enough database of supervisor reports that it can analyze differences among programs and faculty members in producing performance improvements.
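That kind of analysis would be straightforward once supervisor responses are collected in a consistent format. Below is a minimal sketch, in Python, of how reported improvement rates could be tallied by program and faculty member; it does not reflect the Academy's actual data or systems, and the file name, column names and yes/no coding are assumptions made purely for illustration.

```python
# A minimal sketch, not the Academy's actual system: the file name, column
# names and yes/no coding are all hypothetical. It assumes a CSV of
# supervisor responses with one row per participant, recording the program,
# the faculty member, and answers to the two survey questions quoted above.
import csv
from collections import defaultdict

def improvement_rates(path):
    """Tally the share of supervisors reporting improvement, by program and faculty member."""
    counts = defaultdict(lambda: {"responses": 0, "behavior_yes": 0, "outcome_yes": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["program"], row["faculty_member"])  # hypothetical column names
            counts[key]["responses"] += 1
            if row["behavior_improved"].strip().lower() == "yes":
                counts[key]["behavior_yes"] += 1   # "positive and noticeable changes"
            if row["outcomes_improved"].strip().lower() == "yes":
                counts[key]["outcome_yes"] += 1    # "cost, schedule, and/or performance"

    for (program, faculty), c in sorted(counts.items()):
        n = c["responses"]
        print(f"{program} / {faculty}: behavior improved {c['behavior_yes'] / n:.0%}, "
              f"outcomes improved {c['outcome_yes'] / n:.0%} (n={n})")

improvement_rates("supervisor_survey.csv")  # hypothetical file
```

Even a tally this simple would surface the kind of program-to-program and instructor-to-instructor differences worth investigating, though with small cohorts the percentages should be read cautiously.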
All in all, the VA Acquisition Academy has developed an important innovation for improving the value the government gets from training programs. There are a lot of program management courses out there. How about the Acquisition Academy working with the Defense Acquisition University, the Chief Acquisition Officers Council and other relevant organizations to develop a standard set of simple questions to ask supervisors, so we can gather more data on the effectiveness of training programs in this area?