AIAG's MSA (Measurement Systems Analysis) 3rd Edition - K1, K2, K3 constants


YKT

I was reading the MSA 3rd Edition manual, and found that now the K1, K2 and K3 constants used during the long GRR method have been changed to follow the d2 table in Appendix C, page 195. I've few questions.

1. On page 115, Analysis of Results - Numerical, it says that K1 is 1/d2, where d2 depends on the number of trials (m) and the number of parts times the number of appraisers (g). For K2, though, d2 depends on the number of appraisers (m) with g = 1 (a single range), and for K3 it depends on the number of parts (m), again with g = 1 (a single range).

Now, I'm a bit confused about the usage of m and g here. How do we get g = 1 (a single range) in the cases of K2 and K3?
Maybe someone can enlighten me on how to use m and g correctly?


2. In Appendix C, page 195, Table d2: if g > 20, is it true that we use the d2 infinity value right at the bottom of the table (second-to-last row)?
What is the 'cd' value for?

3. Can anyone help me with the definition of 'degrees of freedom'? I've been to some websites and still couldn't grasp the idea. Anyone care to explain it in layman's terms?


Thanks a lot !!!!
 

Atul Khandekar

FWIW:

1. In the case of K2 (calculation of AV, the appraiser variation), we perform only ONE range calculation - the range of 3 appraiser averages (XaBar, XbBar, XcBar, assuming 3 operators) - so the number of subgroups is g = 1. From the table, with m = 3 operators, you get the K2 value of 0.5231 (= 1/1.91155).

Similarly, in the case of K3 (calculating PV, the part variation), there is only one subgroup of 10 part averages - one range calculation - and hence, for m = 10 parts and g = 1, you get K3 = 0.3145 (= 1/3.17905).

2. For an explanation of 'cd' values, please refer to this document from AIAG: Archived link

Hope this helps a bit.

3. DF: I have the same question. Can someone with better training in statistics explain it in layman's terms - especially how one gets fractional degrees of freedom, such as 10.8?
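The K2/K3 arithmetic above can be sketched in a few lines of Python. The d2* values are the ones quoted in this post (from Appendix C of MSA-3); everything else follows from K = 1/d2*:

```python
# K factors in MSA-3 are simply reciprocals of d2* from Appendix C.
# d2* values below are those quoted in this post:
#   m = 3 appraisers, g = 1  ->  d2* = 1.91155  (for K2 / AV)
#   m = 10 parts,     g = 1  ->  d2* = 3.17905  (for K3 / PV)

def k_factor(d2_star):
    """MSA-3 K factor: the reciprocal of d2*."""
    return 1.0 / d2_star

K2 = k_factor(1.91155)
K3 = k_factor(3.17905)

print(round(K2, 4))  # 0.5231
print(round(K3, 4))  # 0.3146 (the 0.3145 quoted above looks truncated)
```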
 

Al Dyer

Atul, you can always be counted on to put it short and sweet.

Although I know MSA pretty well, you have an innate talent.

I am still wary of these new constants (K1, K2, K3). Do the new ones mean that previous gage studies are suspect? Do you have any info on the real reasons for the change?

Thanks!!!
 

Atul Khandekar

The present constants K1,K2,K3 are just the old factors divided by 5.15. Previously, AIAG used to insist on a coverage factor of 95%, hence the multiplication by 5.15. Now the users have a choice of coverage factor to use, 5.15 or 6.

The following is an excerpt from AIAG's MSA FAQ:

The K factors used in the original MSA1 and MSA2 manuals included a 5.15 sigma multiplier that cancelled out of the final results. Since that multiplier essentially had no impact on the final results, it was decided to eliminate its presence in the formulas. [See also page vi in the front of the MSA3 manual.]
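A toy illustration of why the 5.15 multiplier "cancelled out of the final results" - the sigma values here are made up purely for demonstration:

```python
# Hypothetical sigma values, for illustration only.
sigma_grr = 0.02  # GRR standard deviation
sigma_tv = 0.10   # total variation standard deviation

# MSA1/MSA2 style: both numerator and denominator carried 5.15.
pct_grr_old = (5.15 * sigma_grr) / (5.15 * sigma_tv) * 100

# MSA3 style: the multiplier is dropped from the K factors.
pct_grr_new = sigma_grr / sigma_tv * 100

# The final %GRR is identical either way.
print(round(pct_grr_old, 6), round(pct_grr_new, 6))  # 20.0 20.0
```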
 

Atul Khandekar

A correction to my post above:
5.15*sigma corresponds to 99% spread, not 95%.
 

Marc

Re: MSA 3rd Edition - K1, K2, K3 constants

This is an old thread. As an FYI from the AIAG:

MSA FAQ List
1. What is the difference between "% contribution" and "% study" in terms of GRR performance?

% Contribution is determined by multiplying by 100 the proportion of the GRR variance to the total study variance. % Study is determined by multiplying by 100 the proportion of the GRR standard deviation to the total study standard deviation. Thus, a level of 20% study is equivalent to a level of 4% contribution (.2 x .2 = .04).

2. Why are the K1, K2, K3 factors for doing a GRR so much different now than they used to be?

The K factors used in the original MSA1 and MSA2 manuals included a 5.15 sigma multiplier that cancelled out of the final results. Since that multiplier essentially had no impact on the final results, it was decided to eliminate its presence in the formulas. [See also page vi in the front of the MSA3 manual.]

3. Is the level of "GRR %" acceptability intended to include bias, linearity, and stability?

No. GRR % is merely the percent of the total variation in the GRR study as determined by the GRR methodology. Analyzing for bias or linearity requires separate, independent analyses. Stability requires long-term studies.

4. If a GRR study meets the "correct" level of performance, does that mean the measurement system is totally acceptable?

Not necessarily. GRR only covers the amount of variation due to measurement error, and technically only covers the short-term results gained from one study. This study also may not include all the sources of variation that can affect the measurement system over time, such as environmental effects, lot to lot differences, etc.

GRR also covers only one characteristic, one of perhaps several characteristics in a total measurement system. Similarly, the Ppk or Cpk index covers only one characteristic of a part or process - a "good" Ppk or Cpk index does not necessarily mean the entire part or process is acceptable.

Also, GRR does not cover bias, linearity or stability issues since it does not generally study the measurement process over a long period of time.

5. Why did we drop the %Bias and %Linearity?

The reason we dropped the indices is that (1) there is no "correct" way to analyze them and (2) we want to focus on the understanding of the measurement system variability and sources of variation rather than on "acceptable" indices.

We went with the focus that the bias and linearity should be the statistical equivalent of zero -- consequently the confidence bounds and the test of hypothesis. If the bounds are large (i.e. the natural measurement system variability is large) then the bias can be statistically zero even though it may not be "emotionally" zero (i.e. a large percent in the old terms). However, because the variability is large, the system is unacceptable due to the other parameter evaluations and, furthermore, adjusting the bias (using this variation) can cause the bias to become worse even though the calculated index becomes better -- à la Deming's funnel experiment.

6. Where can I find gage acceptance criteria in the new MSA manual?

Gage acceptance criteria may be found on page 77 of MSA-3. This is not listed in the Index or Table of Contents. This will be added to the Index and the Table of Contents in the 2nd printing.

7. Although ndc (number of distinct categories) is discussed in the new MSA manual, I don't see it in the Index or Table of Contents.

ndc is covered on page 77 and page 117 of the MSA-3. This will be added to the Index and Table of Contents in the 2nd printing.

8. Is Stability, Bias, Linearity a requirement for QS-9000 and ISO/TS-16949?

MSA3 is a guideline, not a standard. You should do what is appropriate and makes sense for your particular measurement system. You should also comply with your customer's requirements. You may want to do one of each of these studies so that you can demonstrate to yourself, your customer and your auditor that you are capable of doing these studies when they make sense.

9. What is the effective date or compliance date of the new MSA manual?

On March 18, 2002, a letter was sent to all automotive suppliers from the Supplier Quality Requirements Task Force, which explains this. The MSA manual is a reference guideline. Beyond this, consult with your customer.

10. Why do some of the examples provided in the new MSA manual have "unacceptable" results rather than "acceptable"?

Most examples used in MSA3 represent real data from real situations. Real data such as these should make you think about what you are trying to accomplish and how a decision must be made. When everything is "OK", you probably don't learn as much.

11. Can I compare GRR% to tolerance rather than process (total variation)? If so, what is the rule of thumb for acceptability?

The answer to this depends on several things and there may be no "pat" answer.

The mathematics used in the standard GRR forms will generate "process variation", PV, and "total variation", TV. For those estimates to make sense in their usage here the process should be stable and the parts selected from what should represent the full spread that the process is expected to generate over long periods of time. If these pre-conditions do not yet exist (new process, changed process, etc.), then you may either compare the GRR to tolerance or to a projected process situation that has a desired target Ppk. In other words, under these conditions, substitute either the tolerance spread or the spread generated by your chosen Ppk level for TV when doing %GRR.

Above all, check with your customer for approval of whatever you do in this regard.
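A minimal sketch of that substitution, with entirely hypothetical numbers. It assumes the 6-sigma coverage factor when converting the tolerance width to a spread comparable to TV:

```python
# Hypothetical numbers for illustration only.
grr = 0.015       # GRR standard deviation
tv = 0.10         # total variation standard deviation (stable process)
tolerance = 0.90  # specification width (USL - LSL)

# Usual case: compare GRR to the total variation.
pct_grr_tv = grr / tv * 100

# When TV is not trustworthy (new or changed process), substitute the
# tolerance spread, converted here with a 6-sigma coverage factor.
pct_grr_tol = grr / (tolerance / 6) * 100

print(round(pct_grr_tv, 1), round(pct_grr_tol, 1))  # 15.0 10.0
```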

12. If I have a very capable process to the extent that my range chart shows low discrimination and my GRR % is high relative to the process, what can I do?

If your process is stable and capable, the spread of this acceptable process distribution includes your measurement error. There may be no need to study your measurement error from a purely "acceptability" viewpoint. If under these circumstances your discrimination is "unacceptable" (as shown by ndc values or by the GRR average chart not showing enough out of control points), you may still use your measurement system for control purposes even though it is not suitable for establishing capability results. As always, consult with your customer.

13. If my long-term stability chart for the measurement process shows control but a very small range and/or runs on the average chart, is this unacceptable?

You should not apply the same rules for a measurement stability chart as you would for a process control chart. If your measurement stability chart were "perfect" it would show one long flat run on the average, and endless zeroes on the range chart -- after all, you are measuring the same part over and over. If that part does not change (and the surrounding conditions are stable), then the ideal condition would be for the stability chart to show no difference from measurement to measurement. In a process control chart situation, the range chart is supposed to show at least 4 levels to show proper discrimination -- this rule does not apply to a measurement stability chart. Also, stratification rules probably do not apply to measurement stability charts. Points beyond the control limits should be investigated, as should trends. Such stability charts should be used with the spirit of investigating measurement system situations that appear to be unusual, or not stable (considering you are always measuring the same part), with ensuing corrections made and noted.

14. On p. 97 of MSA3 it says that the Range Method is acceptable. However, my QS auditor says that PPAP does not allow the Range Method. What do I do?

The Range Method IS statistically acceptable for short term, quick studies when perhaps zeroing in on changes to a measurement system. "Acceptable" is a tricky word -- one must ask, "Acceptable for what and to whom?" Sometimes these issues must be sorted out into customer issues and statistical issues, but always consult with your customer for the "right" answer.

15. Why are the short and long form attribute methods that were in MSA2 not in the new MSA3 manual?

In MSA3, look at Figure 29 on p. 126, or the graphic in the left margin of p. 125. For an attribute gage study to be beneficial, this gray zone must be defined. In order to do that, some parts from the gray zone must be present in the study and one must use valid statistical methods that define that zone. The previous short and long form had shortcomings. The procedure surrounding those methods did not allow for "indecision" -- results were required to show 100% agreement; a bad part must be called "bad" all the time and a good part must be called "good". There was no room for disagreement in the results. However, using a "perfect" check fixture, a part that is exactly on the specification limit will be called "good" 50% of the time and "bad" 50% of the time. In reality, this "exactly on specification limit" is the gray zone (wider than the specification limit) and it should be understood and defined in a successful measurement systems analysis. Doing an attribute study successfully also requires a relatively large number of parts and test-opportunities to make a decision. The previous short form did not really accomplish this. Keep in mind that if the short and/or long form methods are still acceptable to your customer, they may still be used.

16. I can't determine how to calculate the significant t value (two-tailed) in Table 3 on page 88 (value shown as 2.206) and Table 4 on page 90 (value shown as 1.993).

If you are using the MSA Third Edition, First Printing check the errata for the MSA 3. There are errors on pages 88, 89 and 90 that may mislead you.

For Table 3: Given that the df = 10.8, look in a t-table in a standard statistical reference book. Look for the value in the t-table for df=10 and df=11, in the t-sub-0.975 column (this column represents the two-sided values for 95% confidence). The value at 10 df = 2.2281 and the value for 11 df = 2.2010. You must interpolate between these values to get the answer for this problem.

Take the difference between 2.2281 and 2.2010; this equals 0.0271. Since the df we are after is 10.8, we interpolate by either adding 20% of this difference to the value for 11 df, or subtracting 80% of it from the value for 10 df. 20% of 0.0271 = 0.00542; adding that to the value for 11 df gives 2.2010 + 0.00542 = 2.206, rounded to 3 places as shown in MSA3. Alternatively, 80% of 0.0271 = 0.02168; subtracting that from the value for 10 df gives 2.2281 - 0.02168 = 2.206, the same value found by the other route.

If you follow the same procedure for Table 4, you should be able to interpolate the same way.
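The procedure described above is ordinary linear interpolation between two rows of the t-table; a short sketch using the Table 3 numbers:

```python
def interp_t(df, df_lo, t_lo, df_hi, t_hi):
    """Linearly interpolate a t-table between two whole-df rows."""
    frac = (df - df_lo) / (df_hi - df_lo)
    return t_lo + frac * (t_hi - t_lo)

# Table 3 example: df = 10.8, with t(10) = 2.2281 and t(11) = 2.2010.
t = interp_t(10.8, 10, 2.2281, 11, 2.2010)
print(round(t, 3))  # 2.206
```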

By the way, the df value of 10.8 in Table 3 is found in the table in Appendix C of MSA3 by using g = 1 and m = 15. The df value of 72.7 in Table 4 is found in the table in Appendix C of MSA3 by using g = 20 and m = 5. The corresponding d2* values are also shown in those same places of the table. By using the formulas shown on page 89, you should have no problem duplicating the values in Table 3 and Table 4.
 