Energy sums up the difference between discussion and dialogue pretty well. From my perspective, the dialogue here is equally valuable, if not more so. Digression in an open format should be encouraged, but as any of us can notice, when both the discussion and the dialogue continue, it is probably better to break out the dialogue (for example, I think I may have confused Lucinda by introducing Drucker’s definitions into the discussion, hence the change-up).
As such, I will return to the discussion of training effectiveness. If we think we should break out the ‘definition discussion’, I will post a new thread to explore it further.
I would like to use Neelanshu’s closing comments here:
“The key to a successful training program lies in identifying training needs and finding competent trainers. Thus there should be some way of measuring how well the course contents match with the identified training needs and how "competent" the trainer is/(was?).”
I like them and identify with them. In days of old, I was prone to give a test at the end of a program to determine comprehension. Heck, it was what I knew, since this was done to me at the end of training exercises and all through school. In reality, however, I was only measuring how well folks memorized things. Memorized information is not knowledge, and classroom exercises are more education than training. Education and training are linked through knowledge: knowledge gained and knowledge applied.
The path is: information > knowledge (education/theory) > know-how (applied theory/knowledge)
(My projection follows.) Let’s look at Neelanshu’s comment: “The key to a successful training program lies in identifying training needs and finding competent trainers.” How are needs determined? Generally, through detected deficiencies; sometimes, through projected deficiencies. Either way, the established needs provide the framework for determining what level of mastery is required to drive improvement (corrective or preventive). This gives us a starting point for asking key questions of the training candidate. Course content should be developed to address the needs, and a trainer with the level of mastery needed to provide such training should be selected (this probably should precede content development). To be most effective in developing the required understanding and skills, course content should include both education and training. Exit feedback, the measurement of training effectiveness, is required, and it can be accomplished in a number of ways.
From my perspective, I do not like grading/ranking systems. I use a sampling method, shown to me by a close colleague, that serves me well. A pre-sample of questions (mixed open-ended and multiple-choice) determines an initial starting point. The course content is built around noted deficiencies and any new information covered by the pre-sample (the pre-sample questions are developed by an individual with mastery of the skills and knowledge required to perform as expected). The course is conducted to provide the knowledge base, and the candidate returns to work to further develop skills. At some point afterward, a post-sample is done to determine how effective the education/training was relative to the initial starting point. Without grades, one can still determine whether progress was made. Sometimes follow-up education/training is required to help the associate achieve the required proficiency. In addition, the trainee is asked for feedback in order to improve the training process itself.
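To make the idea concrete, here is a minimal sketch of that pre-/post-sample comparison. The question identifiers, answer key, and proficiency threshold are all hypothetical; the point is only that each trainee’s post-sample is compared against their own pre-sample baseline, not ranked against other trainees.

```python
def score_sample(answers, answer_key):
    """Return the fraction of questions answered acceptably."""
    correct = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    return correct / len(answer_key)

def progress_report(pre_answers, post_answers, answer_key, required=0.8):
    """Compare a trainee's own pre- and post-sample results (no grades,
    no ranking): did they improve, and is follow-up training needed?"""
    pre = score_sample(pre_answers, answer_key)
    post = score_sample(post_answers, answer_key)
    return {
        "improved": post > pre,            # progress relative to own baseline
        "needs_follow_up": post < required,  # schedule follow-up if short
        "gain": post - pre,
    }
```

Usage might look like `progress_report(pre, post, answer_key)`, where an improved-but-still-short result would trigger the follow-up education/training mentioned above.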
Returning to Raffy’s example of daily monitoring of the status of a given process: this better illustrates the effectiveness of planning for that process and the efficiency of executing that plan. It serves only as a possible indicator of the success of the training. For example, if the process returns to a stable state after being found out of control, and the remedial action was to bolster the training program, then one could say the training was effective by this measure (training was “doing the right thing”). However, as correctly pointed out in other posts above, there are many contributing variables in any given process. One cannot assume that training and ongoing training are the underlying reasons for improvement; one can only conclude that through isolation. Daily monitoring here serves as a potential trigger for training, and I like it because it is system driven. Through cause-and-effect analysis, you may determine that the likely cause was training, and that presents an opportunity.
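For illustration, the kind of daily monitoring described above could be sketched as a simple out-of-control check. This assumes basic 3-sigma Shewhart-style limits computed from the data itself (a real control chart would use established limits); points flagged here only open a cause-and-effect investigation, in which training is one candidate cause among many and must be isolated before taking credit for any improvement.

```python
from statistics import mean, stdev

def out_of_control_points(daily_values):
    """Return indices of daily readings beyond mean +/- 3 standard
    deviations, as candidates for cause-and-effect analysis."""
    center = mean(daily_values)
    sigma = stdev(daily_values)
    ucl = center + 3 * sigma  # upper control limit
    lcl = center - 3 * sigma  # lower control limit
    return [i for i, v in enumerate(daily_values) if v > ucl or v < lcl]
```

A flagged index says only that something shifted; whether training (or a material, machine, or method change) is the underlying cause is exactly the isolation question raised above.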
Regards,
Kevin