Repetition (sample size) for measuring equipment accuracy determination


puck1263

All,
I've got a piece of measuring equipment that I'm questioning the accuracy of (bias to standard). Our internal lab sent me a report showing bias and linearity through regression (r2 of .9999999). They used 3 standards of various values. Seems good.
However, I asked how many repititions of each measurement they toook and they said one per sample. This does not seem right to me. Shouldn't there be a deterimined sub-sample size based upon desired confidence intervals?
I'm a quality engineer and used to production sampling, but am new to calibration side.
Any advice would be appreciated.
 

Hershal

Metrologist-Auditor
Trusted Information Resource
If they used three different standards but only one run of each, then they may not have sufficient information for a good uncertainty budget. I would start by getting the uncertainty budget and examining the Type A contribution carefully.

Usually a minimum of 3 to 10 readings is required to develop a Type A estimate, plus whatever Type B contributions may exist. More readings will help develop a better Type A.
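As a minimal sketch of what the lab would do with repeated readings: the Type A standard uncertainty is the sample standard deviation of the readings divided by the square root of the number of readings. The gauge-block values below are hypothetical, just to show the arithmetic.

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A standard uncertainty of the mean:
    sample standard deviation (n-1 denominator) / sqrt(n)."""
    n = len(readings)
    if n < 2:
        raise ValueError("Need at least 2 readings for a Type A estimate")
    s = statistics.stdev(readings)  # experimental standard deviation
    return s / math.sqrt(n)

# Hypothetical repeated readings of a nominal 10.000 mm standard
readings = [10.0012, 10.0009, 10.0011, 10.0010, 10.0013,
            10.0008, 10.0012, 10.0011, 10.0009, 10.0010]
mean = statistics.mean(readings)
u_a = type_a_uncertainty(readings)
```

With only one reading per standard, `stdev` cannot even be computed, which is exactly why a single run gives you no Type A component to put in the budget.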

And uncertainty is a requirement for traceability, so no uncertainty means no traceability.

Also, without the uncertainty, the stated accuracy may be misleading if the measurements are near the edge of the range.

Hope this helps.
 