Study Variation and Discrimination Ratio in Gage R&R ANOVA

  • Thread starter Brad Gover - 2010

Brad Gover - 2010

Hi all,

I have a question on how to apply and interpret study variation and the discrimination ratio. Some people at my company set up an MSA with made-up sample ranges. For example, the MSA of an analytical balance was set up with weight samples spanning the range of the gauge's measurement capability (0.001 grams to 200 grams). The measurement results from two operators and three trials were given to me for analysis. I calculated a study variation of 0.00% and a discrimination ratio of 148,000, with a part-to-part standard deviation of 12.48 and a total GR&R standard deviation of 0.0001. All that makes sense to me.

The problem I have is that the samples do not have anything to do with any process here. Others claim that we're not looking at the process, only the gauge. My concern is that not having samples that span the variation of some process (rather than just measuring a range of weights) makes the study variation and DR meaningless, yet they want to say the MSA is acceptable based on the outcomes of the study. Should the samples always be pulled from the process, or can you arbitrarily make up samples to analyze the gage only? How should the report's conclusion on study variation and DR address this?

Thanks for the input.
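For readers who want to reproduce the arithmetic, here is a minimal sketch in Python using the standard deviations quoted above. Note that DR definitions differ between references (Wheeler's EMP formula is used here), which likely explains why the result will not match the 148,000 figure exactly; treat this as an illustration of the formulas, not the poster's exact software output.

```python
import math

# Standard deviations quoted in the post above
sd_part = 12.48    # part-to-part standard deviation
sd_grr  = 0.0001   # total gage R&R standard deviation

# Total variation combines the components in quadrature
sd_total = math.sqrt(sd_part**2 + sd_grr**2)

# %Study Variation: the measurement system's share of the total spread
pct_study_var = 100.0 * sd_grr / sd_total

# Discrimination Ratio, Wheeler-style: DR = sqrt(2*(sd_part/sd_grr)^2 + 1).
# Other references use related but different formulas, so the exact
# value depends on which one the software applies.
dr = math.sqrt(2 * (sd_part / sd_grr) ** 2 + 1)

print(f"%StudyVar = {pct_study_var:.4f}%")  # rounds to 0.00% at two decimals
print(f"DR = {dr:,.0f}")
```

With a part spread five orders of magnitude larger than the gage error, any of the common DR formulas will return a number in the six figures, which is why the study looks spectacular regardless of whether the samples mean anything.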
 

Richard Pike

Should the samples always be pulled from the process or can you arbitrarily make up samples to analyze the gage only? How should the conclusion with the report on Study variation and DR address this?

It is often not financially feasible to analyse all products and all gauges.

Therefore, although technically the gauge should be checked against the specific product characteristic, it is also commonplace to check the gauge across its range and apply those results to several products.

This is of course the principle of applying one test to several products or product characteristics.

A lot will depend on the results. If marginal, this may indicate that a specific test against the product characteristic is required. If very good, then even if the results are a little off the mark, it will not make any difference: decisions about the product will still be valid.

After all is said and done, the only reason we want to know gauge error is to ensure that the decisions made as a result of the measurements are sound.

I justify "assigning" gauge results to other product characteristics with the example of an engine block: 500 characteristics, potentially 3 gauges per characteristic, and potentially 3 appraisers gives 500 x 3 x 3 = 4,500 tests. Clearly "assigning" values is the only option. Hope that is on the mark and helps.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Should the samples always be pulled from the process or can you arbitrarily make up samples to analyze the gage only? How should the conclusion with the report on Study variation and DR address this?

Brad - There are two considerations here:
  1. The realistic assessment of the measurement error itself
    MSA is supposed to assess the entire measurement system, not just the gauge; calibration is an assessment of the gauge alone. Two important components of the system are how the operator and the gauge work with the part and any fixturing (which is typically part dependent). Angles of access and within-part variation come into play. If generic parts are used, we cannot assess this part of the system. Sometimes these two components are the largest contributors to the measurement error; sometimes they are not. Relatively simple parts measured with relatively simple gauges will typically not be affected by this, while a part with relatively complex geometry measured with a fixture on a CMM can be greatly affected.
  2. Proper assessment of DR and study variation
    If you accept that the part and fixturing are not major contributors, then you could calculate an appropriate DR and 'study variation' by using the actual (historical) observed variation for each feature of interest instead of the variation of the 'dummy' parts. It is not appropriate, ethical, or valuable to use the range of the 'dummy' parts to calculate DR or study variation. Not only is it a lot of work that adds no informative value to your own organization, it violates the intent of the MSA manual(s) and quite possibly Customer requirements. IF you are submitting the DR and Study Variation results using the dummy parts' total variation as part of a PPAP, then I think there are some ethical questions that come into play.
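Bev D's substitution (keep the measurement-error estimate from the study, but replace the dummy-part spread with the historical process variation for each feature) can be sketched as follows. The process standard deviation here is a hypothetical placeholder for illustration, not a value from the thread, and the Wheeler-style DR formula is one of several in use:

```python
import math

sd_grr = 0.0001      # measurement-system SD from the balance study
sd_process = 0.05    # HYPOTHETICAL historical part-to-part SD for one feature

# Recompute the metrics against the feature's real variation rather
# than the spread of the made-up weight samples
sd_total = math.sqrt(sd_process**2 + sd_grr**2)
pct_study_var = 100.0 * sd_grr / sd_total
dr = math.sqrt(2 * (sd_process / sd_grr) ** 2 + 1)  # Wheeler-style DR

print(f"%StudyVar = {pct_study_var:.2f}%")
print(f"DR = {dr:,.0f}")
```

The point of the exercise: with realistic process variation the DR drops from six figures to a few hundred here, still excellent, but now it says something about the balance's ability to discriminate parts from this process rather than across its whole 200-gram range.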
 

Richard Pike

IF you are submitting the DR and Study Variation results using the dummy parts total variation as part of a PPAP, then I think there are some ethical questions that come into play.
Ethics and MSA - that's a new one to me. As long as the method is disclosed, I don't see any ethical issue. I fully understand the originator's viewpoint - although I don't understand the internal departmental conflict. The MSA Process should surely have been agreed from the start.

Personally I still maintain that if the Process Capability is "good" then the issue of not incorporating the Appraiser & Method Error is surely not an issue, unless the "control" of the Process is placed in doubt. Very interesting to hear another person's viewpoint though - thanks - and I hope the originator's dilemma is helped not hindered by this discussion.
 

Miner

Forum Moderator
Leader
Admin
For what purpose is this scale used (e.g., inspection, SPC, statistical studies)?
 

Bev D

Ethics and MSA - that's a new one to me. As long as the method is disclosed, I don't see any ethical issue.
It depends on the Customer's requirement. Typically, DRs and study variation are intended - and in many cases specifically called out - to be in relation to each characteristic's actual variation. If MSA results were submitted using the range of 'dummy parts' (which I interpret to be the case Brad is describing), then the report would be in violation of the customer requirement. Additionally, if only summary data were provided (the DR number, for example), it wouldn't be obvious that the actual process variation wasn't used. That would be a potentially unethical situation.

Personally I still maintain that if the Process Capability is "good" then the issue of not incorporating the Appraiser & Method Error is surely not an issue, unless the "control" of the Process is placed in doubt.
This is a very good point! I would recommend using that data as a substitute for the OP's organizational attempt at a 'generic' MSA.
 

MasterBB

Involved In Discussions
Personally I still maintain that if the Process Capability is "good" then the issue of not incorporating the Appraiser & Method Error is surely not an issue, unless the "control" of the Process is placed in doubt.

Richard,

Good point. I agree.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I agree with Bev D: one of the key points of using actual parts versus generic parts is that if there is any influence of the physical part on the measurement - such as a cantilevered force that a generic part would not show - then you are not doing an adequate job of analyzing the gaging system.

However, you are dealing with a scale or balance here, and I think you can meet the intent while adjusting the part specimens to provide a variation that your process has not yet given you. If you can add or remove weight (removal would be best) and still maintain the basic shape of the part, I would find that acceptable. If you were to use a square block as a generic part for an actual round part, you would miss the variation from the part rolling around and the operator perhaps not waiting for it to stabilize before taking the weight measurement. But shaving a little weight off of the actual part to provide the variation to check the gage seems reasonable to me.

"Generating" variation is not uncommon, especially for a new process where you are trying to verify the adequacy of the gage before all of the variability from lot variation, operator variation, machine wear, etc. has manifested itself. You are much better off generating variation than taking small variation, approving the gage, and walking away as the process goes well beyond the original sample set variation (which is even more common).
 