Statistically Comparing Technician Techniques

Steve Prevette

Deming Disciple
Leader
Super Moderator
You may not like the idea of more work, but I'd really suggest you also run your check on the other technician. You'd basically be testing whether the difference between your check and the questionable technician is significantly different from the difference between your check and the "good" technician.
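A minimal sketch of that comparison, assuming variables data; the arrays of per-part discrepancies (your check reading minus each technician's reading) are purely hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical per-part discrepancies: your check reading minus the
# technician's reading, for the parts each technician measured.
diff_suspect = np.array([0.012, -0.008, 0.015, 0.020, 0.011, 0.018])
diff_good = np.array([0.002, -0.003, 0.001, 0.004, -0.002, 0.003])

# Welch two-sample t-test: do the two sets of discrepancies differ on average?
t_stat, p_value = stats.ttest_ind(diff_suspect, diff_good, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value suggests the suspect technician's readings deviate from
# your checks by more than the other technician's readings do.
```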

You may also want to throw some known bad parts into this "test" to see if both can detect bad parts.

It's possible you could also look at past data from the two technicians and use p-chart control charts or binomial tests to check whether their failure rates for "bad parts" are statistically the same or different. If we assume all parts have an equal chance of being bad, then both inspectors should have the same failure rate for bad parts, within statistical uncertainty.
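As a sketch of the binomial comparison (the p-chart approach would plot the same rates on a control chart), Fisher's exact test is one standard way to compare two failure rates; the counts below are hypothetical stand-ins for real inspection records:

```python
from scipy import stats

# Hypothetical historical counts - replace with real inspection records.
fails_a, total_a = 18, 400   # technician A: 18 rejects out of 400 parts
fails_b, total_b = 35, 420   # technician B: 35 rejects out of 420 parts

# 2x2 contingency table: [rejects, accepts] for each technician.
table = [[fails_a, total_a - fails_a],
         [fails_b, total_b - fails_b]]

# Fisher's exact test compares the two failure proportions.
odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# A small p-value suggests the two failure rates differ by more than
# chance alone would explain.
```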
 

Bev D

Heretical Statistician
Leader
Super Moderator
if you have been redoing her measurements, how often have you found a different result? how often has it changed the pass/fail disposition of the material?
 

Jim Wynne

Leader
Admin
You may also want to throw some known bad parts into this "test" to see if both can detect bad parts.

This isn't necessary unless the measurements produce only attribute results. The expectation for a variables study (standard GR&R) is to use parts that reflect the range of variation in the process.
 

Yew Jin

The quick comparison technique we may use is to study the average and standard deviation of the two sets of data, with the same sample given to both technicians for a few trials.

We may have a desired location (target) that we need to meet. Compare the two sample averages: are they significantly different, and do they meet the target? (A two-sample t-test may be used if the data are normally distributed.)

We may look into the data spread as well: are the spreads significantly different, and which is wider? (An F-test may be used if the data are normally distributed.)

This is a quick way to see who is biased and who is imprecise.
An average that is far from the desired target indicates more bias, while a high standard deviation means he/she is not precise when measuring the same unit over different trials.
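A minimal sketch of both checks, assuming each technician measured the same unit several times; the data arrays and target are illustrative only, and Levene's test is included as a more robust alternative to the F-test when normality is doubtful:

```python
import numpy as np
from scipy import stats

# Hypothetical repeat measurements of the same unit by each technician.
tech_1 = np.array([10.02, 10.05, 9.98, 10.03, 10.01, 10.04])
tech_2 = np.array([10.15, 9.90, 10.22, 9.85, 10.18, 9.95])
target = 10.00

# Bias: how far is each average from the target?
print(f"bias 1 = {tech_1.mean() - target:+.3f}, bias 2 = {tech_2.mean() - target:+.3f}")

# Two-sample t-test on the averages (assumes roughly normal data).
t_stat, p_t = stats.ttest_ind(tech_1, tech_2)
print(f"t-test: t = {t_stat:.3f}, p = {p_t:.4f}")

# F-test on the spreads: ratio of sample variances against the F distribution.
f_stat = tech_1.var(ddof=1) / tech_2.var(ddof=1)
df1, df2 = len(tech_1) - 1, len(tech_2) - 1
p_f = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))
print(f"F-test: F = {f_stat:.3f}, p = {p_f:.4f}")

# Levene's test is less sensitive to non-normality than the F-test.
w_stat, p_lev = stats.levene(tech_1, tech_2)
print(f"Levene: W = {w_stat:.3f}, p = {p_lev:.4f}")
```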

In most cases the measurement method is the major source of variation. We may standardize the method of data collection or measuring so that appraiser-to-appraiser variation is reduced. Document it properly so that everyone follows the one method or technique to measure.
 

Stijloor

Leader
Super Moderator
Re: Comparing technician techniques - statistically

ahem - until there is data showing that 1. this operator's results are statistically & practically different from the other(s) and 2. that her measurements are the least accurate, you - and the OP's management - have no basis for assuming there is anything wrong with her.

this requires an MSA

A measuring device in the hands of an incompetent operator will have a negative impact on the MSA results. If a skills assessment shows that there is a competency issue, you don't need to perform an MSA to demonstrate that "something is wrong with her." In any statistical process (SPC, MSA, etc.) one is expected to first minimize unnecessary variation. Once all known sources of variation have been eliminated to the extent possible, further statistical analysis will demonstrate how "good" the system is.

Stijloor.
 

Statistical Steven

Statistician
Leader
Super Moderator
I might take a very simple approach. If you and the suspect technician measured the same parts on the same instrument, then you can just do a paired t-test to see if there is a statistically significant difference. You will also need to determine a practical difference, in case you find a statistical difference that is not meaningful.
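A minimal sketch of that paired comparison, assuming you and the technician each measured the same hypothetical parts once, in the same order, on the same instrument:

```python
import numpy as np
from scipy import stats

# Hypothetical readings of the same parts, in the same order.
mine = np.array([5.01, 4.98, 5.03, 5.00, 4.97, 5.02, 4.99, 5.04])
hers = np.array([5.06, 5.01, 5.09, 5.03, 5.02, 5.08, 5.05, 5.10])

# Paired t-test: works on the per-part differences, so part-to-part
# variation cancels out and only the operator effect remains.
t_stat, p_value = stats.ttest_rel(mine, hers)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Practical significance: compare the mean difference to a limit that
# matters for the product, not just to the p-value.
mean_diff = (hers - mine).mean()
print(f"mean difference = {mean_diff:+.4f}")
```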
 

Bev D

Heretical Statistician
Leader
Super Moderator
Re: Comparing technician techniques - statistically

A measuring device in the hands of an incompetent operator will have a negative impact on the MSA results. If a skills assessment shows that there is a competency issue, you don't need to perform an MSA to demonstrate that "something is wrong with her." In any statistical process (SPC, MSA, etc.) one is expected to first minimize unnecessary variation. Once all known sources of variation have been eliminated to the extent possible, further statistical analysis will demonstrate how "good" the system is.

Stijloor.

yes, an "incompetent" operator will yield poor results. It will show - as proof - in the data. on the other hand, I inferred from the OP's choice of words that the idea that this operator is 'different' has no supporting data - just opinion. The OP stated that his repeat measurement "was a waste of his time". that led me to infer that he never found a different result - otherwise, if she was making 'wrong enough' measurements, the OP would have found different answers and his questions to us would be far different: he would have asked about making her better or about performance management. instead he asks how to get the data to compare her performance to himself and/or the other technician.

I think we too often jump to "incompetent operators" based on no data but a lot of opinion...the OP's post appeared to me to be going in that direction. whatever happened to "innocent until proven guilty"? As a manager I would hate to claim that an employee was incompetent without proof (and HR would kill me). additionally, without baseline data, how would we know that the training worked?
 