Gage R&R for Pressure Transducers - What GR&R procedure / formula (standard) is best?


avalon

Greetings,
Question: We have a test machine with 13 pressure transducers. Eleven are 0-300 psi, two are 0-1000 psi.
The customer has requested we do a GR&R of them prior to running parts through the machine.
What we normally do is use a pressure calibrator to apply known pressures to the 0-300 psi transducers, one at 0 psi and the other at, say, 250 psi, and take a reading from the analog display. This is done 10 times for each device. The same is done for the 0-1000 psi transducers at 0 and 800(?) psi.
My question is: what GR&R procedure/formula (standard) is best used here, and how should the results be interpreted? On the one hand it would seem a simple Gage R study would be sufficient, yet we need to make sure the analog display is accurate. Since a Gage R study only requires a tolerance, it seems it would not satisfy the need to ensure the analog display is reading accurately. Sorry if I'm using incorrect terms.
Thanks in advance for your help.
 
You may be confusing a gage calibration (which would compare a known standard - such as you describe - to the result as reported through the analog display) with a Gage R&R.
The Gage R&R tests the repeatability of the tester's ability to measure actual product (true value unknown) across the range of variation that your processes produce. This is NOT accuracy. It is repeatability - can the tester get 'roughly' the same value each time it measures a real part? Reproducibility is the ability of the tester to 'repeat' even with different operators. In the case of 'active' testers where the operator 'simply' loads the part and pushes 'go', reproducibility typically depends on the ability of the operator to 'load' the part. Reproducibility may not be trivial and should always be tested, but it only has meaning if the repeatability of the tester is 'good'.

The definition of 'good' repeatability is relative to either the tolerance spread OR the variation of the product. I prefer to use the variation of the product...

You may want to clarify with your customer what they are specifically looking for. Calibration is typically a given, and a Gage R&R is often requested in addition to calibration.
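
If it helps to see the arithmetic behind those two denominators, here is a minimal sketch. The standard deviations, the tolerance, and the 6-sigma spread convention are all assumptions on my part - plug in whatever your own crossed study and the AIAG MSA manual give you:

```python
import math

def grr_percentages(sd_repeatability, sd_reproducibility, sd_part, tolerance):
    """Express gage R&R two ways: against the tolerance spread and
    against the total observed variation (gage plus part-to-part).

    The standard deviations are assumed to come from a crossed GR&R
    study (ANOVA or average-and-range method); the 6-sigma spread is
    the customary AIAG convention.
    """
    sd_grr = math.hypot(sd_repeatability, sd_reproducibility)  # combine R and R
    sd_total = math.hypot(sd_grr, sd_part)                      # add part variation
    return {
        "%GRR (of tolerance)": 100 * (6 * sd_grr) / tolerance,
        "%GRR (of total variation)": 100 * sd_grr / sd_total,
    }

# Hypothetical numbers, in psi: repeatability SD, reproducibility SD,
# part-to-part SD, and a tolerance width of 20 psi
print(grr_percentages(0.8, 0.3, 4.0, tolerance=20.0))
```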
 
Bev D said:
You may be confusing a gage calibration (which would compare a known standard - such as you describe - to the result as reported through the analog display) with a Gage R&R.
The Gage R&R tests the repeatability of the tester's ability to measure actual product (true value unknown) across the range of variation that your processes produce. This is NOT accuracy. It is repeatability - can the tester get 'roughly' the same value each time it measures a real part? Reproducibility is the ability of the tester to 'repeat' even with different operators. In the case of 'active' testers where the operator 'simply' loads the part and pushes 'go', reproducibility typically depends on the ability of the operator to 'load' the part. Reproducibility may not be trivial and should always be tested, but it only has meaning if the repeatability of the tester is 'good'.

The definition of 'good' repeatability is relative to either the tolerance spread OR the variation of the product. I prefer to use the variation of the product...

You may want to clarify with your customer what they are specifically looking for. Calibration is typically a given, and a Gage R&R is often requested in addition to calibration.


There is no operator influence. The applied pressure is constant.

What is needed is to verify that the analog readout/transducer on the machine is within acceptable limits, i.e., that it is accurate and repeatable. The customer wants each transducer to be evaluated separately prior to machine operation. They all use the same readout. The data collected is as I described.

Example: 250 psi is applied. The readout on the machine averages 252 psi (for argument's sake). Repeatability may be very good, but the accuracy may not be. I need to know what method would determine BOTH of these results. The tolerance has yet to be determined. I hope that clarifies my inquiry a bit more.
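
To make the question concrete, here is a minimal sketch of the kind of summary I have in mind from the ten readings at one applied pressure. All numbers are made up, and since our tolerance is not set yet, the Cg/Cgk part is only a placeholder:

```python
import statistics

def bias_and_repeatability(readings, reference, tolerance=None):
    """Summarize repeated readings of a known standard.

    readings  - readings taken at one applied (reference) pressure
    reference - the known pressure applied by the calibrator
    tolerance - full tolerance width (USL - LSL); hypothetical here,
                only needed for the Cg/Cgk capability ratios
    """
    mean = statistics.mean(readings)
    bias = mean - reference                      # accuracy component
    repeatability = statistics.stdev(readings)   # precision component
    result = {"mean": mean, "bias": bias, "repeatability_sd": repeatability}
    if tolerance is not None:
        # "Type 1" style gage capability ratios; 20% of the tolerance
        # against a 6-sigma spread is one customary convention
        result["Cg"] = (0.2 * tolerance) / (6 * repeatability)
        result["Cgk"] = (0.1 * tolerance - abs(bias)) / (3 * repeatability)
    return result

# Made-up readings around an applied 250 psi, with an assumed 20 psi tolerance
readings = [252.1, 251.8, 252.3, 252.0, 251.9, 252.2, 252.1, 251.7, 252.0, 252.2]
print(bias_and_repeatability(readings, reference=250.0, tolerance=20.0))
```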
 
How do you know?

avalon said:
What is needed is to verify that the analog readout/transducer on the machine is within acceptable limits, i.e., that it is accurate and repeatable. The customer wants each transducer to be evaluated separately prior to machine operation. They all use the same readout. The data collected is as I described.

Example: 250 psi is applied. The readout on the machine averages 252 psi (for argument's sake). Repeatability may be very good, but the accuracy may not be. I need to know what method would determine BOTH of these results. The tolerance has yet to be determined. I hope that clarifies my inquiry a bit more.
Avalon,

First, be aware that pressure measurements are not my main area of expertise, but I may be able to add something useful. I want to start with a couple of questions:
  • Are the pressure transducers periodically removed and checked against a traceable measurement standard - that is, are they calibrated?
  • Since the display appears to be independent of the transducers, the same question applies - is it calibrated?
If the transducers and display are calibrated, that gives you independent verification that their readings are probably within their manufacturer's specifications. The transducer manufacturer should have a performance specification for the response of the transducer. The display manufacturer should have a specification for its response to an electrical signal input. The combination of the two (one transducer at a time) will give you a baseline for evaluating the performance of the measurement system.

It appears that your customer wants you to perform a process measurement system validation or verification before starting the job. You said that you can apply, for example, 250 psi to a transducer.
  • How do you "know" that exactly the correct pressure is applied to the transducer?
Ideally, there is an independent method of measuring the pressure that is more accurate than the transducers you are using, and which has excellent repeatability. For example, a good situation might be that at 250 psi your transducer is known to be within +/-2 psi of the true value and you can independently measure the pressure to +/-0.5 psi with something else. I would imagine that this is not the case, though.
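
As a quick sanity check on that kind of setup, one common rule of thumb is a 4:1 test uncertainty ratio between the device under test and the reference. A minimal sketch using the hypothetical numbers above:

```python
# Hypothetical numbers from the example above (psi)
unit_under_test_limit = 2.0   # transducer believed within +/-2 psi at 250 psi
reference_uncertainty = 0.5   # independent measurement good to +/-0.5 psi

# Test uncertainty ratio; 4:1 is a common rule of thumb for letting
# the reference stand in for the "true" pressure
tur = unit_under_test_limit / reference_uncertainty
print(f"TUR = {tur:.1f}:1")   # 4.0:1 in this example
```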

But suppose you can adjust the pressure applied to the transducer to make the display read 250.0 psi. Using the specification or calibration data from the transducer and display, you can then make some calculations and get a range of probable values. Using the numbers above and assuming a display accuracy of +/-1 psi -- and without going into the depths of uncertainty analysis -- you could make a fairly conservative statement that the pressure is believed to be 250 +/- 3 psi (add the two numbers; this is a worst-case value). A more detailed analysis might come up with a "better" number, but it may not differ greatly from this.
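
If it helps, here is that arithmetic spelled out, comparing the worst-case sum with a root-sum-square combination (the +/-1 psi display accuracy is still an assumption):

```python
import math

# Hypothetical component limits from the example above (psi)
transducer_limit = 2.0   # transducer believed within +/-2 psi at 250 psi
display_limit = 1.0      # assumed display accuracy of +/-1 psi

worst_case = transducer_limit + display_limit      # simple linear sum
rss = math.hypot(transducer_limit, display_limit)  # root-sum-square combination

print(f"worst case: +/-{worst_case:.1f} psi")  # +/-3.0 psi, as stated above
print(f"RSS:        +/-{rss:.1f} psi")         # about +/-2.2 psi
```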

The next important step is to compare that range to the requirements of the customer and the process: is it acceptable? If it is, you have a happy customer. If it is not, you have an opportunity to improve the process, the measurement system, or both.

Another step would be to track the values over time, to gain knowledge about repeatability, reproducibility, and random variation in the process and its measurements. But that's another project.
 