# Measurement Risk Management - 4:1 Test Accuracy Ratio Guardband


#### Mike Czech

Before we establish a measurement requirement, we must first establish the quality requirement for the product or process. Example: a car door must open and close 1 million times without falling off its hinges, and it must stay where we put it, not bang us in the shins, or swing open into traffic. We analyze these requirements and turn them into a design requirement. As part of this design requirement, we end up with performance limits or tolerances on components. These in turn must be measured and verified for a successful product or process output, i.e., defects per thousand, million, etc.

The measurement equipment used to verify the product or process performance limits/tolerances must meet its own performance requirement over a given time interval (accuracy and End-of-Period in-tolerance probability).
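To make the "given time interval" part concrete, here is a minimal sketch of how an End-of-Period (EOP) in-tolerance probability can drive a calibration interval. It assumes a simple exponential reliability model, R(t) = exp(-λt) — one of the classic interval-analysis models, chosen here for illustration and not something stated in the post:

```python
import math

def exponential_interval(current_interval, observed_eop, target_eop):
    """Adjust a calibration interval under an exponential reliability
    model R(t) = exp(-lambda * t).

    current_interval -- the interval (e.g. months) at which the
                        equipment is currently calibrated
    observed_eop     -- fraction of units found in tolerance at the
                        end of that interval
    target_eop       -- the EOP in-tolerance probability we require
    """
    # Fit the decay rate from the observed end-of-period reliability.
    lam = -math.log(observed_eop) / current_interval
    # Solve exp(-lam * t) = target_eop for t.
    return -math.log(target_eop) / lam

# If only 90% of units are in tolerance after 12 months but we need
# 95%, the model says the interval must shrink to roughly 6 months.
print(exponential_interval(12.0, observed_eop=0.90, target_eop=0.95))
```

Under this model, a higher EOP target always shortens the interval; real interval-analysis programs use more elaborate models, but the trade-off is the same.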

There is risk associated with the measurement being made, and this risk is passed on to the product or process in the form of scrap costs, rework costs, and consumer rejection of the product.

Measurement risk can be defined as false accept consumer risk (the percent probability that the equipment is not within its performance limits), rather than the nebulous 4:1 accuracy ratio.
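The difference between the two framings can be seen numerically. The Monte Carlo sketch below estimates false-accept consumer risk directly, assuming (my assumptions, not the post's) a normally distributed product characteristic, a symmetric tolerance, and an instrument whose 95% accuracy spec is the tolerance divided by the Test Uncertainty Ratio:

```python
import random

def false_accept_risk(tur, process_sigma=1.0, tol=2.0, n=200_000, seed=1):
    """Monte Carlo estimate of false-accept (consumer) risk.

    Product values ~ N(0, process_sigma); the instrument's 95%
    accuracy spec is tol/tur, so its error sigma is (tol/tur)/1.96.
    A false accept occurs when the true value is outside +/-tol but
    the measured value falls inside.
    """
    rng = random.Random(seed)
    meas_sigma = (tol / tur) / 1.96
    false_accepts = 0
    for _ in range(n):
        true = rng.gauss(0.0, process_sigma)
        measured = true + rng.gauss(0.0, meas_sigma)
        if abs(true) > tol and abs(measured) <= tol:
            false_accepts += 1
    return false_accepts / n

# A 4:1 ratio yields noticeably less consumer risk than 1:1,
# but the actual risk number depends on the process distribution.
print(false_accept_risk(tur=4))
print(false_accept_risk(tur=1))
```

This is the point of the question: the 4:1 ratio only bounds the risk indirectly, whereas a stated false-accept probability is something a customer can actually put a number on.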

How do you ask the customer (user) to define the measurement risk they can tolerate?

------------------

#### Jerry Eldred

##### Forum Moderator
I'm going to take a stab at answering your question. It's a rather complex issue, and I'm not altogether certain I can give you a satisfactory answer.

I'll start with the 4:1 Test Accuracy Ratio. That is an important consideration. It is normal to assign a percent probability that test equipment will remain in tolerance; a typical figure is 95%. Without having the statistical data in front of me (nor the time today to do the calculations), the 4:1 test accuracy ratio (normally between the measurement standard and the unit to be calibrated, for each parameter, whether single or multiple parameters) gives a statistical increase in confidence. That is, the 4:1 ratio is a guardband, so that even if the measurement standard was out of tolerance, the unit being calibrated should not have been detrimentally affected. Regarding the relationship between test equipment and units being tested, the acceptable risk can be defined by an acceptable Cpk value. I don't have the time to do an SPC (Statistical Process Control) class online, but if you dig out some books on that, you'll readily find that you can develop a relationship between the repeatability/reproducibility of your measurements and the acceptable risk.
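For readers who haven't worked with Cpk before, here is a minimal sketch of the standard index (the measurement values, spec limits, and sample data below are illustrative assumptions, not from the thread):

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability index: the distance from the sample mean to
    the nearer specification limit, in units of 3 standard deviations.
    Larger is better; a common acceptance threshold cited in the thread
    is about 1.66.
    """
    mean = statistics.fmean(values)
    sigma = statistics.stdev(values)  # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical gauge readings against spec limits of 9.6 / 10.4
readings = [9.9, 10.0, 10.1, 10.0, 9.95, 10.05]
print(cpk(readings, lsl=9.6, usl=10.4))
```

Applied to measurement rather than production data (as in a gauge R&R study), the same ratio ties the spread of the measurement system to the tolerance it is asked to verify, which is the link to acceptable risk described above.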

That answer even confused me a little. My suggestion is to get a copy of the QS9000 MSA handbook and establish acceptable risk that way. I have seen a common minimum Cpk value of about 1.66. I don't think there is one simple answer to acceptable risk; it depends on what industry is involved. In medicine, or space shuttles, or Boeing 767s, or anything where lives could be lost, there is a different answer than in less critical industries.

Don't know if that is an answer, but it's a try.

------------------