Training evaluation (Competence vs. performance)

Wes Bucey

Prophet of Profit
This thread is in reply to Isabel Arroyo's post (https://elsmar.com/elsmarqualityforum/showpost.php?p=78863&postcount=14)
asking for a new thread exploring why employees should be rated only pass/fail, rather than on a scale, after training.
Standards are paying more attention to employee training and to documenting the evaluation process: how well the employee is able to implement that training in his or her work assignments.
  1. The basic premise of "employee assessment" is to assure the organization is able to meet the requirements of its customers.
  2. If the employees don't meet an initial assessment of their ability to perform activities that meet the requirements, the organization trains them (in-house, externally, or on the job).
  3. The reason for the documentation is to assure customers that the employees are, indeed, trained and evaluated (or assessed) on their comprehension of, and capability to implement, the training to make products or perform services that meet the needs of the customer.
The Standards do NOT have a requirement to GRADE employees on a scale when making these assessments. Grading employees beyond "competent" or "not competent" sets up a value system that introduces issues beyond the scope of a Quality Management System, and customers have no business or right being aware of, or having a voice in, those issues. A Business Management System "may" have a reason to grade employees on a scale, but most who agree with Deming abhor such grading of individual performance because of the many variables that are not under the employee's control (see Deming's "Red Bead Experiment").

In another thread (broken link removed), one of our Covers makes a valid point about training. This quote is just a portion of the complete post which is part of a very interesting thread. Jennifer makes the point that it is the "eventual measures such as repeat sales or customer satisfaction data, which would be target judgments for results that connect the training to strategic goals."

Jennifer Kirley said:
I have accessed this information through an excellent book by Jac Fitz-enz titled "How to Measure Human Resources Management". The book is packed with sensible approaches to measuring human performance and effectiveness of intervention efforts that can so vex HR and quality management.

Like Donald Kirkpatrick does in "Evaluating Training Programs" (another book I recommend), Jac Fitz-enz asserts that change can be measured both at individual levels and across groups. I would set up spreadsheets to tally, compute with formulae, and display results in graphs: knowledge change, behavior change, etc. Of course the cost of training should be considered, and the training should be planned toward specific objectives: error reduction, cycle time reduction, repeat sales for intensive customer service positions.
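The spreadsheet tally Jennifer describes could be sketched in a few lines of code: per-trainee knowledge change from before-and-after test scores, plus a group average. The trainee names and scores below are purely hypothetical illustration data, not from either book.

```python
# Per-trainee knowledge change from before/after test scores,
# plus a group average -- the kind of tally a spreadsheet would hold.
# All names and scores here are hypothetical.

before = {"A": 62, "B": 55, "C": 70}   # pre-training test scores (%)
after = {"A": 81, "B": 74, "C": 85}    # post-training test scores (%)

# Change per trainee (individual level) and group mean (across the group)
change = {name: after[name] - before[name] for name in before}
group_avg_change = sum(change.values()) / len(change)

for name, delta in sorted(change.items()):
    print(f"Trainee {name}: {before[name]}% -> {after[name]}% (change {delta:+d})")
print(f"Group average change: {group_avg_change:+.1f} points")
```

The same two-column structure extends to behavior or results metrics; only the data source changes.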

But we mustn’t put the cart before the horse. Wes very correctly points out that there may be environmental influences on the testing. That is why some scientists argue that "social science" can never be considered true science; people have so many shifting influences on behavior that dependent variables are difficult to measure and prove.

So, trainers (and quality people looking into training for corrective action) should strive to eliminate environmental changes during this cycle as much as possible, or at least note the environmental differences between performance cycles that could be contributing to the measured "results". These could include supervisors or managers arriving, leaving, or being reassigned; other program changes, introductions, or eliminations that may disrupt employee routines; even changes in employee benefits programs. Layoffs, new product or service introductions, and even changes in suppliers can also influence behavior results, so this can become quite tricky. I would try to time the training cycle to suit, and not let too much time elapse before taking measurements, lest the data be called into question.

I would certainly advocate taking in employee suggestions for the causes of errors, slowdowns, or lost customers/sales and maybe using a cause-and-effect analysis to approach the process. One should eliminate all practical material and environmental opportunities for variation before pursuing behavior intervention.

Once environmental and material causes have been ironed out and training is planned to pursue specific, goal-oriented changes in behavior that have real meaning to the organization, it makes sense to establish expectations for measuring success.

Both books note four basic evaluation levels:

(1) Trainee reaction (Smile sheet)
(2) Knowledge test (given after the program, it does not "prove" training effectiveness)
(3) Performance test (also given after the program, it does not "prove" the training had effects)
(4) Results test (before-and-after testing goes farthest to "prove" training effects and should be followed up some months later to assess long-term impact)

#1 is an important starting point. If the trainees react favorably to the training, they are more likely to use it. Furthermore, good educators are sensitive to both their students' performance and perceptions, and will alter training styles, materials or programs to "tune in" to their students.

I would naturally advocate #4, using the formulae I already listed, but error reduction, problem-solving ability and cycle time improvements could lead to eventual measures such as repeat sales or customer satisfaction data, which would be target judgments for results that connect the training to strategic goals.
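The four evaluation levels above could be tracked as a simple record per training event; level 4 is the before-and-after (and follow-up) comparison Jennifer advocates. The field names and values below are hypothetical, not taken from either book.

```python
# One possible record of the four evaluation levels for a training event.
# Field names and the example values are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingEvaluation:
    reaction: int                  # level 1: "smile sheet" rating, e.g. 1-5
    knowledge_score: float         # level 2: post-program knowledge test (%)
    performance_pass: bool         # level 3: post-program performance test
    result_before: float           # level 4: results metric before training
    result_after: float            # level 4: same metric after training
    result_followup: Optional[float] = None  # level 4: re-measured months later

    def result_change(self) -> float:
        """Before-and-after change; negative is an improvement for error rates."""
        return self.result_after - self.result_before

# Example: error rate (%) dropped from 6.2 to 3.1 after training
ev = TrainingEvaluation(reaction=4, knowledge_score=88.0,
                        performance_pass=True,
                        result_before=6.2, result_after=3.1)
print(f"Results change: {ev.result_change():+.1f}")
```

Leaving the follow-up field optional mirrors the books' advice: the long-term measurement comes months later, after the initial record is made.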

I hope this wasn’t too long!

Jennifer
Also check this thread on training effectiveness (broken link removed). Many of our Covers make excellent points throughout that thread.

Also use the search engine here to search
"training effectiveness"
"management by objective" or more likely, "MBO"
"education"
 
I have three concerns with the Pass/Fail premise, although its basis as meeting a requirement is perfectly sound:

1. Grading developing skills is not tidy, evenhanded or fair across the board. Can the system assign a "Pass" grade to a score, say, 75% or 80%?

My point is that the objective is not meeting the requirement, but developing the talent. We don't want to shut them down if they do not do well. We want them to want to try again.

2. There are a handful of learning types and infinite shades of gray within them. Does the training system permit a progress notation that identifies specific challenges to meeting the target? Does the system note opportunities to meet these challenges and improve learning performance as an alternative to giving up on the employee and trying for a new one? E.g., looking for the "perfect" employee: just add company information and stir vigorously.

3. I currently work with Special Ed students in middle and high schools. I have noted with interest that their numbers are vast, and they are certain to land in your workplace at some point. Does the training grading system encourage their development or does it slap the label FAILURE when they don't pass the test? Or, does the system respond with an alternate method, look for simple missing learning puzzle pieces (like dyslexia or illiteracy) or otherwise respond to the newly found opportunity?

This runs in tandem with #1 and #2. Shutting down good-hearted people may not be in our interest, but it is easy to do; so easy, in fact, that teachers are now taught (probably not everywhere) not to use red pencils or pens when grading papers.

I guess I'm being matronly, but without reassuring details I'm worried that training systems will be held to simplistic forms for auditability's sake and not for the sake of the customer, or the organization's long-term health. I can assure you that the numbers of challenged learners are growing, not shrinking. They don't wear signs that say "Challenged" and they can be among our most loyal employees. I submit they are worth investment.

Moreover, it's not your uncle's work force anymore. These upcoming people need different approaches than they may be experiencing in our very competitive and profit-focused environment. The price of their discouragement is low productivity and high employee turnover, two factors that are profusely bleeding profits from companies of all types.
 
You make strong points, Jennifer. Could we benefit from looking at the process in a slightly different manner?

When you talk about number value scores for the evaluation, aren't you really suggesting we DO set a value on the employees according to a number ranking?

Perhaps I'm the one who is being naive. Let's take a simple process like training employees to read a micrometer to check an outside dimension of some object. I can envision a system which says:
  1. We have Standard pieces shaped like the products the employee will measure on the job.
  2. The dimensions ARE STANDARD (perhaps confirmed with a CMM, etc.).
  3. Upon completion of training, trainees take a practical exam: they measure the dimensions of the Standard pieces and enter them on a check sheet.
  4. Trainees who enter readings within a pre-determined variability pass.
  5. Trainees whose readings are off are retested under supervision to learn why they missed (poor eyesight? mis-calibrated instrument? dyslexia when entering results? a tendency to use the instrument like a vise to reshape the product?).
  6. Trainees who pass are cleared to work.
  7. The remainder either get remedial training or reassignment to tasks they are qualified to do.
(Not everyone is capable of doing every activity: train me for 100 years and I'll still never be able to "dunk" a basketball into an official-height basket!)

In my mind, I can see a way to make every task competence test pass/fail just like an inspection of parts. My customer doesn't ask me to line the passing parts up according to which are closest to the nominal dimension, only that they ALL fall within the specs. If the candidate falls between the upper and lower spec limits, he passes. If not, rework or scrap. Scrapping may not be the most humane, but the fact is some organizations are not geared to remedial education and, just like a sports team, "cut" the nonperforming players.
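Wes's pass/fail check can be sketched directly: a trainee passes if every reading of the Standard piece falls within the pre-determined tolerance around the certified dimension, with no ranking of who came closest to nominal. The dimensions and tolerances below are hypothetical, chosen only to illustrate steps 3-7 above.

```python
# Pass/fail competence check for the micrometer exam (steps 3-7 above).
# The certified dimension and tolerance are hypothetical examples.

NOMINAL = 25.400   # certified dimension of the Standard piece (mm)
TOL = 0.005        # pre-determined allowed variability (mm)

def passes(readings):
    """True only if ALL readings fall within NOMINAL +/- TOL.
    Pass/fail, like parts inspection: no ranking by closeness to nominal."""
    return all(abs(r - NOMINAL) <= TOL for r in readings)

# Hypothetical check-sheet entries from two trainees
trainees = {
    "trainee_1": [25.401, 25.398, 25.403],
    "trainee_2": [25.407, 25.399, 25.402],  # first reading is out of tolerance
}

for name, readings in trainees.items():
    verdict = "PASS: cleared to work" if passes(readings) else "retest under supervision"
    print(f"{name}: {verdict}")
```

A failing result routes the trainee to the supervised retest of step 5, not to a graded score; the system never needs to know whether a passing reading was 0.001 mm or 0.004 mm from nominal.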

The curse, of course, is that the training programs may not be adequate, nor the testing truly predictive of the competence of the trainee. Grist for another thread, I think.
 
You made very good points here too, Wes.

I like your stepped approach to training design. It is sensible and its plan encourages value-adding examination of the technique, not just satisfaction of requirements, for the organization.

I don't like ranking employees. No, no, no... I agree with Dr. Deming: it can encourage animosity among the troops, or a stoic "that's the way it is" feeling that is not team-performance-centered. GE (and many other major companies) routinely weed out their bottom performers (10 percent, as I understand it) but do not describe how they arrive at their culling lists. Very often the evaluation process is subjective, or its very design is deeply flawed. Human opportunities are thus lost; potentially enormous opportunity costs result.

For example, two of the three employers I worked with that did evaluations did not tell the employees what the evaluation criteria were until the evaluations happened. That is, we did not know what we were being evaluated against until the fateful day. One of these culled according to "poor performance among peers." In my view, the evaluations were worse than time wasters: they were morale crushers.

It's also true that many companies cannot invest in their employees...but can we be sure of this? The cost of employee turnover is enormous. Cost of poor quality is said to be between 15 and 40 percent of sales in services. Clearly, getting it wrong in people is just as problematic as with processes.

It's also true that not everyone can be trained for the job. Without question, people are sometimes mismatched to their jobs. This is largely preventable through a more careful hiring and promotion process. We should not overlook potential: we should be on the lookout for talent and ambition rather than fixating on finding members possessing a ready-made skill set. We should test, survey, or otherwise scrutinize our candidates for their spiritual fit, but keep in mind that someone internal might make a fine fit and the candidate could fill a lesser or different position. Making mistakes, or subscribing to a rigid hierarchy and hiring practice, is very limiting, risky, and can be quite costly.

We are hiring people to make/deliver our products and services, not buying machines. People can't be calibrated, really. Training is an inexact process, but the alternative is a lack of readiness.
 