"Number of Distinct Categories" or NDC Calculation in MSA Studies

leftoverture

Involved In Discussions
Hi Folks,

I wanted to start a discussion to see what others are thinking about the "number of distinct categories" or NDC calculation in MSA studies. The calculation is, per my interpretation of the MSA manual (see page 46), essentially an effort to improve on the old 10:1 rule (where gage discrimination should be at least 1/10 of the specification range).

So, AIAG seems to suggest that making gage discrimination 1/10 of process variation is an improvement over the old 1/10 of spec range rule. I suppose in some cases this may be true, but since the objective of most manufacturers is to reduce process variation, I would suggest that any calculation that carries the risk of penalizing your measuring system because the process variation is low is probably, to some degree, errant.

And this brings me to the subject of NDC. As a reminder, the formula for NDC is 1.41(PV/GRR), where PV is the part (process) variation and GRR is the gage repeatability and reproducibility. I'm sure many of you have experienced what I have: a measurement system with excellent discrimination that exceeds the old 10:1 rule but yields a low NDC anyway. And the reason the NDC is low is that the process variation is low.
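To make the arithmetic concrete, here is a minimal sketch of that NDC formula in Python. The numbers are hypothetical, chosen only to show how the same gage can "fail" or "pass" depending purely on part-to-part spread:

```python
# Sketch of the AIAG NDC calculation with hypothetical numbers.
# PV and GRR are the part-variation and gage R&R standard deviations
# from a completed study; 1.41 is approximately sqrt(2).

def ndc(pv: float, grr: float) -> int:
    """Number of distinct categories, truncated to an integer."""
    return int(1.41 * pv / grr)

# A gage with fine discrimination but very low part-to-part variation:
print(ndc(pv=0.010, grr=0.008))  # NDC = 1: the gage gets "penalized"

# The very same gage, with a wider part spread in the sample set:
print(ndc(pv=0.060, grr=0.008))  # NDC = 10: now it "passes"
```

Note that GRR is identical in both calls; only the parts changed, which is exactly the complaint above.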

I would further guess many of you, like me, have found that your actual R&R raw data has yielded 7, 8, 9, 10, or even more different outcomes, yet the NDC came out low anyway. Again, this is most likely because the process variation is very low. I have conversed with other quality professionals who have been just as frustrated by such NDC results.

Some of those folks have taken to inducing variation into their studies, i.e., machining parts to increase the variation in the sample set to achieve a better NDC to satisfy their customers. While that does work in many cases, what's the point? To satisfy the customer, or to validate a measuring system?

In the old days (I've been at this a while) we used to count (yes, count) the number of distinct outcomes in our study and if the count exceeded 5 we were pretty happy (assuming we met the old 10:1 rule first). And usually we could further validate our measuring system by looking at the X-bar and R charts (bad measurement systems reveal themselves pretty quickly).

I recently had a case study where I had a rather unique measurement for which only one device was available, and the process capability was something in excess of Cpk=5.00, yet the NDC was low because the process did not vary much. And because the person evaluating the study was probably trained to view NDC as an absolute rather than an indicator, they still wanted it improved. (How much money you got to spend?)

So, I would put forth that the real objective of MSA is to have an adequate measurement system; the MSA manual does, in fact, refer to NDC as an indicator not an absolute. So what is really needed is better understanding of what constitutes an adequate measurement system. (Because anything more than adequate could be wasted money.)

So, care to share your thoughts, opinions, experiences with NDC?

By the way, for those of you keeping score, in the AIAG MSA manual, 4th edition, the index lists NDC references as: page 47 (correct); page 80 (blank page); pages 125 & 131 (NDC not specifically mentioned); page 216 (correct); page 218 (not mentioned); and page 227 (Index cover page). Anyone working in Quality Control over there at the AIAG?
 

Statistical Steven

Statistician
Leader
Super Moderator
NDC is a meaningless metric for the reason you state, as well as other reasons. The "process variation" part is key: if you have parts with low variability, your measurement system can be capable but still have an NDC of 1. My biggest problem is that many organizations worry so much about testing and instrumentation costs that they will accept a lower-discrimination gage that is barely capable.

The real issue with MSA studies is that they often do not reflect the actual use of the method. If you do a 2-operator x 10-part x 3-repeat study for your MSA, but when you use the method for release of product it's one operator and one repeat, the MSA underestimates the true measurement variability. In other words, if you want a better NDC, you can measure the same part 3 times and average them for a single reportable value.
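The averaging idea above can be sketched numerically. This assumes repeatability dominates the gage R&R (reproducibility between operators would not shrink with repeats); the study values are hypothetical:

```python
import math

def ndc(pv: float, grr: float) -> int:
    """AIAG number of distinct categories, truncated to an integer."""
    return int(1.41 * pv / grr)

# Hypothetical study results for a single reading per part:
pv, grr_single = 0.030, 0.020
print(ndc(pv, grr_single))        # NDC = 2 for one reading

# If the reportable value is the mean of n repeats, the repeatability
# standard deviation shrinks by sqrt(n). (This sketch assumes the GRR
# is essentially all repeatability.)
n = 3
grr_avg = grr_single / math.sqrt(n)
print(ndc(pv, grr_avg))           # NDC = 3 for the averaged value
```

The catch, as noted, is that the study must then mirror production: the MSA should be run on averaged values only if production also reports averaged values.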
 

Bev D

Heretical Statistician
Leader
Super Moderator
Steven is correct, the NDC metric is mostly useless for many reasons. The biggest issue here is the delusional need for a 'bright line'. Unfortunately we are a culture that tries to score everything: if you have the highest score you win, your process is good if it is greater than this score and bad if it's less than this score (substitute Cpk, p values, NDC scores, RPNs, etc. for 'this score'). We no longer value thinking...

I wrote a brief paper (The Statistical Cracks in the Foundation of the Popular Gauge R&R Method) for publication within my organization that explains some of the weaknesses of the AIAG method for MSA as well as some alternative approaches. The paper is attached. There are also numerous references for you to perform your own research...

In brief, I plot the MSA results (only 2 repeats, as 3 repeats provide no better information on the repeatability variation than 2) on a Youden plot with the specification tolerances, and I look at the data and then think about what it means to my process...
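For anyone unfamiliar with the structure of such a plot, here is a minimal sketch of the data behind it (measurement values are hypothetical, and this is my reading of the approach, not Bev's exact method): each part contributes one point, with repeat 1 on x and repeat 2 on y. Points hugging the 45° agreement line indicate good repeatability; spread *along* that line is part-to-part variation.

```python
# (repeat_1, repeat_2) for each part measured twice -- hypothetical data
pairs = [
    (10.01, 10.02), (10.05, 10.04), (9.98, 9.99),
    (10.10, 10.11), (9.95, 9.94),
]

# Perpendicular distance of each point from the 45-degree line (y = x)
# gives a quick numeric look at repeatability before any plotting.
dists = [abs(y - x) / 2 ** 0.5 for x, y in pairs]
print(max(dists))
```

Adding the specification tolerances as a box on the same axes, as the paper describes, then lets you judge the measurement error against the tolerance by eye.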
 

Attachments

  • The Statistical Cracks in the Foundation of the Popular Gauge R and R.pdf
    776.9 KB · Views: 855

leftoverture

Involved In Discussions
OK, Bev, took me longer to get through your document than I expected, but I guess that's normal enough in the quality biz. Some questions:

1) You say you wrote the paper for publication within your organization. Not sure the scope there, but has your paper been professionally peer reviewed and published? (If not, it should be!) :applause:
2) I would like to try the Youden plot. Do you know of an Excel template that I might find somewhere? It seems to be hard to find.

I know there are detractors of the Youden plot out there, and I don't think I'll get my customers to accept it, but I see its benefit for truly understanding my measurement systems.

Following the same thinking regarding sample size, I have been experimenting with determining the true standard deviation of my process and using that in the NDC calculation, and it seems to help. Of course, I usually need the MSA study before the true process variation is known.

Yea...that sound you hear is my head hitting the wall...:frust:
 

Bev D

Heretical Statistician
Leader
Super Moderator
Try this MSA spreadsheet I posted some time ago.

You can find some other things I've posted easily at www.qualityforumonline.com in the resources tab under practical quality engineering resources. You will have to register but it's free. It is a 'little sister' forum to the cove...

I have found that some Customers in the automotive world will accept the Youden approach if you have the conversation with them...the key is getting people to think about what is really happening and not relying on one-dimensional statistical numbers.
 

leftoverture

Involved In Discussions
Thanks, Bev! The first link doesn't work, but I am already on Quality Forum (same forum name as here) and found your stuff there. Thanks for sharing all this. I think it will take me some time to digest all that but looks like some very good stuff you've posted!

-Tim
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
If my parts are so good, why is my ndc so bad?
Of course, it is because your parts are so good. The true reasoning for this goes back to the point of calculating the ndc.
Number of distinct categories (ndc) is a statistical calculation of the true resolution of a gage within the range of process variation. “Passing” ndc means that:
  • You can use that gage to tell the difference between parts in the process
  • The gage can therefore be used to “see” the process variation due to its adequate statistical resolution.
Gages have physical resolution – the distance between markings, for example. But the “true” resolution rolls in any error you have using the gage, as well as whether the gage resolution can see the variation of a specific process well enough.

NDC has typically been calculated using the variation of the parts “presented” to the study. So, the parts in the study are considered by the calculation to be representative of the variation that the part will see throughout the life of the process. Chances are there is no way you are going to find ten parts that represent that. You may be just starting up the process, and the parts are very close because historical variation of material lots, operators, machine wear, etc. have not occurred. This may cause the samples to be:
  • Too close to tell apart
  • Not representative of the process over time.
What do you do now?

In automotive, they offer the option of using “historical process variation” instead of the variation in the 10-part sample to represent the process variation over time. It simply makes more statistical sense, and is far more meaningful, to always use the historical variation rather than the result of 10 samples.

But what if you just started the process and do not have historical variation? This is more typical than not! What is the minimum expectation of the process? In automotive, it is to be “capable.” What is a good estimate of a capable process? If you estimate your process variation to be 75% of your tolerance, you can use that as an estimate of a Cpk = 1.33 process (assuming random variation, independent data, etc.). Most software is looking for the process standard deviation, or (0.75 × (USL − LSL))/6. This estimate is better than using the whole tolerance, which is appropriate only if the gage is used to sort product, not control the process.
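That estimate is easy to sketch in code. The spec limits and gage R&R value below are hypothetical, purely to show where the estimated standard deviation plugs into the ndc calculation in place of the 10-part sample spread:

```python
def estimated_process_sd(usl: float, lsl: float) -> float:
    """Estimated process standard deviation for a just-capable
    (Cpk = 1.33) process: its 6-sigma spread occupies 75% of the
    tolerance, so sigma = 0.75 * (USL - LSL) / 6."""
    return 0.75 * (usl - lsl) / 6

usl, lsl = 10.5, 9.5              # hypothetical spec limits
sd = estimated_process_sd(usl, lsl)
print(sd)                         # 0.125

# Feed this estimate in as the PV term of the ndc formula, instead of
# the variation of the parts that happened to be in the study:
grr = 0.03                        # hypothetical gage R&R std deviation
print(int(1.41 * sd / grr))       # ndc based on the capability estimate
```

Sanity check on the 75% figure: Cpk = (USL − LSL)/(2 × 3σ) with the process centered, so σ = 0.75 × tolerance/6 gives Cpk = 1/0.75 ≈ 1.33.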

Again, even though the software may allow you to enter actual historical process variation or an estimate of it, some software packages calculate ndc only from the parts in the study. That tends to be useless. My suggestion is to either encourage the software providers to supply it (because it makes the most statistical sense) or calculate it yourself to get a truer value of the ndc.

When you do this, the study parts can be anything, including parts that are out of specification. But if you include out-of-specification parts and base the ndc calculation on those parts, you are telling whoever receives that data that your process is expected to run incapable. That is not a good idea. Using historical or estimated historical variation resolves that issue.

Sure, putting in an out-of-spec part will help you “pass” the ndc. You could use a giant log as one of your samples, and your Gage R&R and ndc will look great. But…you are saying giant logs are a normal part of the process variation over time. I don’t think so.

Now I know I spoke here of things that are often derided...Cpk...capability indices... But if you want to do SPC, and your properly calculated ndc is less than...10....(not 5)....you are going to have some very rough data with insufficient statistical resolution to "see" what is going on in the process.
 