Proficiency Testing - ISO 17025 and Laboratory Scope

Ken K

Proficiency Testing

Laboratories that wish to become accredited, and to maintain that accreditation, are responsible for participating in a proficiency testing, interlaboratory comparison, or round robin testing program.

This is one of the requirements for becoming ISO 17025 accredited.

My questions are:

Do you only need to do this for the tests that are listed in your scope of accreditation?

What if none of the tests listed in your scope are included in any available proficiency testing program?

Would interlaboratory comparison between two labs in our organization be acceptable, if in compliance with Guide 43?


Also, it is recommended that each laboratory participate in at least 25% of their scope each year.

Is this required or just a recommendation? How does your lab handle this?


Questions...questions...and more questions...
 
Ryan Wilde

Re: Proficiency Testing

Ken K said:

This is one of the requirements for becoming ISO 17025 accredited.

My questions are:

Do you only need to do this for the tests that are listed in your scope of accreditation?

Yes. An accrediting body cannot require (or really even ask) you to participate in proficiency tests outside of your scope of accreditation.

What if none of the tests listed in your scope are included in any available proficiency testing program?

This is actually a big problem right now, especially in testing. For the first few years you can generally get by with documenting your search for available proficiency tests and citing that none were available. Eventually you will wind up having to design and run your own. We actually had to order an artifact, have each tech measure it, send it to NIST to get their measurements, and then I correlated the data to determine the adequacy of our system.
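For the data-correlation step, one statistic commonly used in Guide 43-style comparisons (this is my sketch of a typical approach, not necessarily Ryan's exact method) is the normalized error, En = (lab − ref) / √(U_lab² + U_ref²), where |En| ≤ 1 is generally considered satisfactory. All numbers below are hypothetical:

```python
import math

def e_n(lab_value, lab_U, ref_value, ref_U):
    """Normalized error (En) comparing a lab result to a reference lab.
    U values are expanded uncertainties (k=2); |En| <= 1 is generally
    considered a satisfactory result."""
    return (lab_value - ref_value) / math.sqrt(lab_U**2 + ref_U**2)

# Hypothetical numbers: each tech's measurement of the artifact vs. the
# reference (e.g. NIST) value, in mm.
ref_value, ref_U = 25.4000, 0.0004
tech_results = [(25.4003, 0.0012), (25.3990, 0.0012), (25.4011, 0.0012)]

for value, U in tech_results:
    score = e_n(value, U, ref_value, ref_U)
    verdict = "OK" if abs(score) <= 1 else "investigate"
    print(f"{value:.4f} mm -> En = {score:+.2f} ({verdict})")
```

An |En| greater than 1 means the disagreement exceeds what the combined uncertainties can explain, which is exactly the "adequacy of our system" question.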

Would interlaboratory comparison between two labs in our organization be acceptable, if in compliance with Guide 43?

Yes and no. You still have to have a national lab or equivalent in the mix somewhere, because it is more than likely that both of your labs use identical methods.

Also, it is recommended that each laboratory participate in at least 25% of their scope each year.

Is this required or just a recommendation? How does your lab handle this?

It is a requirement of at least two accrediting bodies that I know of specifically, and from what I've heard, the EU is either already doing it or about to do it. The requirement by the US accrediting bodies is that each discipline in the scope must have a proficiency test performed at a minimum of every four years. We handled it with 25% per year, with the exception of the very tight measurements, which we did yearly. It's different where I work now in that we have so many labs that we run our own, using NIST as the source, and our Primary Standards Lab as the pivot lab.
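The 25%-per-year rotation Ryan describes is easy to bookkeep. Here is a minimal sketch (my own illustration of the idea, not an accreditation requirement) that spreads a scope over a four-year cycle while testing the "very tight" items every year; the scope items are invented:

```python
def pt_plan(scope_items, tight_items, start_year, cycle=4):
    """Assign each scope item to one year in the cycle (~25% per year);
    tight items are repeated in every year of the cycle."""
    plan = {start_year + i: list(tight_items) for i in range(cycle)}
    rotating = [item for item in scope_items if item not in tight_items]
    for i, item in enumerate(rotating):
        plan[start_year + i % cycle].append(item)
    return plan

# Hypothetical scope: generic line items, per Ryan's advice below.
scope = ["Length using CMM", "Mass", "Torque", "DC Voltage",
         "Temperature", "Pressure", "Hardness", "Roughness"]
plan = pt_plan(scope, tight_items=["DC Voltage"], start_year=2024)
for year, items in sorted(plan.items()):
    print(year, "->", ", ".join(items))
```

Every discipline then gets a proficiency test at least once within the four-year window, satisfying the minimum Ryan mentions.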

I can give you a few tricks, though. First, keep your scope of accreditation non-specific. If you are measuring length of seven different types of widgets using a CMM, then your scope should only mention "Length using CMM", not the widget, or family of widgets, or composition of widgets. Take the best readings you can with your CMM and report them on your scope as your Best Measurement Uncertainty. From there you can derive your actual uncertainties for each type of widget, and that is what the accrediting body actually wants you to do. Rarely is my Best Measurement Uncertainty the uncertainty that actually goes on a calibration certificate.

General scope line items, such as "Length using CMM", are much easier to find proficiency tests for than "Length of foam rubber using CMM".

Keep the questions coming, these are good ones!

Ryan
 
Ken K

Best Measurement Uncertainty/Capability: The best measurement capability (BMC), as expressed as the Best Measurement Uncertainty in the Proposed Scope, (always referring to a particular quantity, viz. the measurand) is defined as the smallest uncertainty of measurement that a laboratory can achieve within its scope of accreditation, when performing more or less routine calibrations of nearly ideal measurement standards intended to define, realize, conserve or reproduce a unit of that quantity or one or more of its values, or when performing more or less routine calibrations of nearly ideal measuring instruments designed for the measurement of that quantity. They shall be supported or confirmed by experimental evidence.
:confused: :confused: :confused:

Thanks for the last reply Ryan.

The above quote has kept me confined to a dark room while mumbling incoherently since reading it. Do they not want you to understand this stuff or what?


Let's say I have a Mettler balance which I use to weigh raw materials. The range of the balance is 0.01 – 2000.00 g. The material has a basis weight of 1410 g/m² with a tolerance of ±10% (1269 – 1551 g/m²), specified on the customer's print. The balance is calibrated using grade 3 brass weights. The test method says to weigh 3 individual parts to the nearest 0.1 g.

Take the best readings you can with your CMM, report it on your scope as best measurement uncertainty.

If I'm following you right, my best measurement uncertainty would be 0.01 g. Is that correct or am I all wet here? :mad:
 
Ryan Wilde

Ken K said:

:confused: :confused: :confused:
The above quote has kept me confined to a dark room while mumbling incoherently since reading it. Do they not want you to understand this stuff or what?

Actually, that makes sense to me, but I've been doing this accredited thing for several years now. BMC is basically using your best standards with the best device that you could ever calibrate using those standards.


Let's say I have a Mettler balance which I use to weigh raw materials. The range of the balance is 0.01 – 2000.00 g. The material has a basis weight of 1410 g/m² with a tolerance of ±10% (1269 – 1551 g/m²), specified on the customer's print. The balance is calibrated using grade 3 brass weights. The test method says to weigh 3 individual parts to the nearest 0.1 g.


If I'm following you right, my best measurement uncertainty would be 0.01 g. Is that correct or am I all wet here? :mad:

Material weight uncertainty, as a BMC: The uncertainty of your scale must be calculated, and that would be your best measurement uncertainty (see below).

Scale calibration uncertainty, as a BMC: You are REALLY going to hate this answer. Your Best Measurement Uncertainty for calibrating your scale is "Class 3 weights". The best measurement you can make is defined by the weight set. The uncertainty of calibrating your scale, however, would not be included in the BMC for calibrating scales (scales are something of an exception).

The uncertainty of calibrating YOUR SCALE (not BMC) would be a formula that you would have to work out. Something like:

Ue = 2 × SQRT(Us² + Ur² + Ug² + Ud²)

Where:

Us = standard uncertainty of the weights, as a single standard deviation (the uncertainty on your weight cert divided by 2, assuming the cert states k = 2)

Ur = resolution/repeatability uncertainty (if the scale repeatability is less than the resolution of the scale, use the resolution figure). To show repeatability, take lots of measurements and find the standard deviation; if it is < 0.01 g, use 0.01/SQRT(3).

Ug = uncertainty due to local gravity errors (I'm not in the lab these days or I could give you a pretty good figure for this, and at the resolution you are working at it really doesn't matter, but I'm including it in case somebody reading this has a µg balance, because then it DOES matter).

Ud = uncertainty of the air displacement (buoyancy) of your weights (same as above; it makes little to no difference at 0.01 g).

From this uncertainty, you simply combine (RSS) the accuracy spec of your scale, and voilà, the uncertainty of using your scale is complete. There you go, that's the best I have on a Monday at a customer site.
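To make that budget concrete, here is a minimal sketch in Python. All inputs are hypothetical placeholders (a 0.02 g weight cert at k = 2, invented repeat readings), not values from Ken's setup, and Ug and Ud are set to zero per the note that they are negligible at 0.01 g resolution:

```python
import math
import statistics

def scale_uncertainty(cert_U, cert_k, repeat_readings, resolution):
    """Combine the components above by RSS, then expand with k=2."""
    u_s = cert_U / cert_k                  # Us: standard uncertainty of the weights
    s = statistics.stdev(repeat_readings)  # observed repeatability
    # Ur: if repeatability is below the scale's resolution, use the
    # resolution figure with a rectangular distribution (divide by sqrt(3)).
    u_r = s if s >= resolution else resolution / math.sqrt(3)
    u_g = 0.0                              # Ug: local gravity, negligible at 0.01 g
    u_d = 0.0                              # Ud: air buoyancy, negligible at 0.01 g
    return 2 * math.sqrt(u_s**2 + u_r**2 + u_g**2 + u_d**2)

# Hypothetical inputs: 0.02 g expanded uncertainty (k=2) on the weight
# cert, and ten repeat readings of the same part, in grams.
readings = [1409.98, 1410.00, 1410.01, 1409.99, 1410.00,
            1410.00, 1410.01, 1409.99, 1410.00, 1410.00]
Ue = scale_uncertainty(cert_U=0.02, cert_k=2,
                       repeat_readings=readings, resolution=0.01)
print(f"Expanded uncertainty of using the scale: {Ue:.3f} g (k=2)")
```

The scale's accuracy spec would still need to be RSS'd in afterward, as described above, to get the uncertainty of actually using the scale.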

Ryan
 
Ken K

Ryan, I'm gonna turn the light on and read your reply when I get to work tomorrow.

If I stop mumbling I know I'll understand what you said.

I've read five different articles about measurement uncertainty and it's starting to make sense (a little bit). Practice makes perfect, but first you must understand the language.

Thanks for your replies. Excellent for a Monday.

Ken
 