Wes Bucey
Standards are paying more attention to the training of employees and to documenting how well each employee is able to implement that training in his or her work assignments. This thread is in reply to Isabel Arroyo https://elsmar.com/elsmarqualityforum/showpost.php?p=78863&postcount=14
asking for a new thread exploring why employees should be rated only pass/fail, instead of on a scale, after training.
- The basic premise of "employee assessment" is to assure the organization is able to meet the requirements of its customers.
- If the employees don't meet an initial assessment of their ability to perform activities that meet those requirements, the organization trains them (in-house, externally, or on the job).
- The reason for the documentation is to assure the customers that the employees are, indeed, trained and then evaluated or assessed on their comprehension of, and capability to implement, the training to make products or perform services that meet the customers' needs.
In another thread (broken link removed), one of our Covers makes a valid point about training. This quote is just a portion of the complete post, which is part of a very interesting thread. Jennifer makes the point that it is the "eventual measures such as repeat sales or customer satisfaction data, which would be target judgments for results that connect the training to strategic goals."
Also check this thread on training effectiveness (broken link removed). Many of our Covers make excellent points throughout this thread.

Jennifer Kirley said:

I have accessed this information through an excellent book by Jac Fitz-enz titled "How to Measure Human Resources Management". The book is packed with sensible approaches to measuring human performance and the effectiveness of intervention efforts that can so vex HR and quality management.
Like Donald Kirkpatrick does in "Evaluating Training Programs" (another book I recommend), Jac Fitz-enz asserts that change can be measured both at individual levels and across groups. I would set up spreadsheets to tally, compute with formulae, and display results in graphs: knowledge change, behavior change, etc. Of course, the cost of training should be considered, and the training should be planned toward specific objectives: error reduction, cycle time reduction, or repeat sales for intensive customer service positions.
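A minimal sketch of that spreadsheet-style tally, assuming hypothetical pre- and post-training knowledge scores (the trainee names and numbers here are illustrative only, not from any post in this thread):

```python
# Tally knowledge change per trainee and across the group, mirroring
# the spreadsheet approach described above. All scores are hypothetical.
from statistics import mean

# Hypothetical pre- and post-training knowledge test scores (0-100).
scores = {
    "trainee_a": (62, 84),
    "trainee_b": (70, 79),
    "trainee_c": (55, 81),
}

# Knowledge change per individual.
changes = {name: post - pre for name, (pre, post) in scores.items()}
for name, delta in changes.items():
    print(f"{name}: {delta:+d} points")

# Change across the group -- the kind of figure a graph would display.
print(f"group mean change: {mean(changes.values()):+.1f} points")
```

The same layout works for behavior change or any other measure: one column of "before" values, one of "after," a computed difference, and a group summary.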
But we mustn't put the cart before the horse. Wes very correctly points out that there may be environmental influences on the testing. That is why some scientists argue that "social science" can never be considered true science: people have so many shifting influences on their behavior that dependent variables are difficult to measure, let alone prove.
So, trainers--and quality people looking into training for corrective action--should strive to eliminate environmental changes during this cycle as much as possible, or at least note the environmental differences between performance cycles that could be contributing to the measured "results." These could include supervisors or managers arriving, leaving, or being reassigned; other program changes, introductions, or eliminations that disrupt employee routines; even changes in employee benefits programs. Layoffs, new product/service introductions, and even changes in suppliers can also influence behavior results, so this can become quite tricky. I would try to time the training cycle to suit, and not let too much time elapse before taking measurements, lest the data be called into question.
I would certainly advocate taking in employee suggestions about the causes of errors, slowdowns, or lost customers/sales, and perhaps using a cause-and-effect analysis to approach the process. One should eliminate all practical material and environmental opportunities for variation before pursuing behavior intervention.
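One way to organize such suggestions before drawing a fishbone diagram is a simple tally by cause-and-effect category; a sketch, with hypothetical suggestions and categories:

```python
# Tally employee-suggested causes into fishbone-style categories
# (Method, Machine, Material, Environment, People) before pursuing
# behavior intervention. The suggestions below are hypothetical.
from collections import Counter

suggestions = [
    ("late material deliveries", "Material"),
    ("unclear work instructions", "Method"),
    ("unclear work instructions", "Method"),
    ("worn tooling", "Machine"),
    ("noisy work area", "Environment"),
]

by_category = Counter(category for _, category in suggestions)

# A Pareto-style listing: the biggest buckets deserve attention first,
# and only the residual people/behavior causes point toward training.
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```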
Once environmental and material causes have been ironed out and training is planned to pursue specific, goal-oriented changes in behavior that have real meaning to the organization, it makes sense to establish expectations for measuring success.
Both books note four basic evaluation levels:
(1) Trainee reaction (Smile sheet)
(2) Knowledge test (given after the program, it does not "prove" training effectiveness)
(3) Performance test (also given after the program, it does not "prove" the training had effects)
(4) Results test (before-and-after testing goes farthest to "prove" training effects and should be followed up some months later to assess long-term impact).
#1 is an important starting point. If the trainees react favorably to the training, they are more likely to use it. Furthermore, good educators are sensitive to both their students' performance and perceptions, and will alter training styles, materials or programs to "tune in" to their students.
I would naturally advocate #4, using the formulae I already listed; error reduction, problem-solving ability, and cycle time improvements could lead to eventual measures such as repeat sales or customer satisfaction data, which would be target judgments for results that connect the training to strategic goals.
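As an illustration of that level-4 before-and-after approach, a sketch that computes error-rate reduction just after training and again months later, per the follow-up noted in (4); all rates are hypothetical:

```python
# Sketch of a level-4 "results test": compare a performance measure
# before training, shortly after, and at a follow-up months later.
# All rates below are hypothetical.
def percent_reduction(before: float, after: float) -> float:
    """Percent drop from the baseline measurement."""
    return (before - after) / before * 100

baseline_error_rate = 0.080   # errors per unit before training
post_training_rate  = 0.052   # measured shortly after training
followup_rate       = 0.058   # measured some months later

print(f"initial reduction:   {percent_reduction(baseline_error_rate, post_training_rate):.1f}%")
print(f"long-term reduction: {percent_reduction(baseline_error_rate, followup_rate):.1f}%")
```

A gap between the initial and long-term figures is the kind of result that would prompt refresher training or a look at environmental changes between the two measurements.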
I hope this wasn't too long!
Jennifer
Also use the search engine here to search for:
"training effectiveness"
"management by objective" or more likely, "MBO"
"education"