
Monday, March 21, 2011

The Politics and "Business" of learning, Part 1

I posted the bulk of this entry to the forum for one of the two courses I'm currently teaching.  The learners were sharing their observations and frustrations about politics and undue influence in supposedly objective evaluation frameworks.

So, mostly unedited, here is the first part for your perusal.


I must say that I am enjoying the discussions going on here, and I wanted to add a few thoughts based on some of the recent comments. These thoughts come from my own experiences working in a number of different learning environments. I offer them with the caveat that they're something of a blanket indictment; while I'm sure there are organizations that operate differently than those discussed here, what follows are my observations of a perceived norm across general corporate technical training vendors.


Both [name] and [name] spoke of the idea of wanting to be "liked" as a teacher/educator/instructor, and I don't think anyone would disagree that there's a small element of "ego" at work when you're given the responsibility to train others. However, what one cannot lose sight of is the organizational interest in just how much learners "like" you. In organizations where training is provided for profit, customer satisfaction is huge, and rightfully so. However, it has been my experience that because many of these organizations are engaged as "event fulfillment" providers rather than as strategic partners and stakeholders in someone's learning process, the commitment to learning is somewhat less than it would be if the learning were facilitated through an in-house resource. Training vendors, therefore, are mostly concerned with "bums in seats", preferably repeat ones. So high satisfaction scores on the end-of-course smiley sheet become the almighty metric for vendor, buyer, and trainer/educator.


This leaves the educator in a bit of a dilemma: do you do everything but stand on your head to chase a perfect evaluation score that tells you nothing about what you should be improving, or do you risk the wrath of those monitoring your scores by asking your learners to be genuine? Consider, too, whether the educator can really say that the participant actually "learned" enough to put new skills and ideas into practice.


(As a sidebar, consider a different environment like military training. Based on my own experiences on both sides of the equation, I know there were very few instructors that I "liked"; in fact, there were a number of them that I cordially detested...but I learned something from each of them. As an instructor, and later as an instructor coach/monitor, I knew that my role was not to be "liked", but to be an effective trainer/coach, to be a positive role model, and to inspire the people I was responsible for. In that environment, instructor "likes" aren't the metric of the day. Successful performance of the trainee definitely is.)


So when we look at the "business" of training, what it means in terms of evaluation practice is that evaluation and assessment rarely happen through a full cycle of any kind. Most of these folks are living at Level 1 (learner reaction) of the venerable Kirkpatrick model for evaluation, and are either unable to proceed deeper or unwilling to because of the business model. Ultimately, the learners are the ones who lose. Because there's such limited awareness of other frameworks, the Linus-blanket of the smiley sheet prevails, to the detriment of all.


One of the aims of this course is to show people that there's more to evaluation and assessment than sticking a survey form under a learner's nose and asking for an opinion, or giving them some multiple-choice test that doesn't really reflect what they need to know. This discussion should help hammer home the fact that putting an effective framework in place AND following through with it is what will give you the full picture of learner success and the direct impact on the organization.


For some additional reading, if you can get your hands on it, I would draw your attention to Mann & Robertson (1996) for a thought-provoking discussion on the evaluation of training initiatives. The survey cited there found that just over half of the US companies surveyed (52%) used trainee satisfaction as the key metric, 17% assessed transfer of knowledge to the job, 13% examined organizational change, and 13% didn't evaluate any element of their training initiatives.

Reference:

Mann, S., & Robertson, I. (1996). What should training evaluations evaluate? Journal of European Industrial Training, 20(9), 14-20.
