"Number of Distinct Categories" or NDC Calculation in MSA Studies

leftoverture

Involved In Discussions
Bob, I appreciate your thoughts. Good stuff there. The issue with NDC is that it is a metric that penalizes you when you have low variation, and it is for this reason I say it is a metric that is often not worth its weight. I have conducted studies on my extremely accurate scanning CMM and found low NDC results (1 or 2 even) just because the part variation is that low. And, simply put, I have no better measurement system available. If my variation is truly that low, then I probably don't even need to do SPC on that characteristic, so NDC would be essentially irrelevant anyway. But often customers won't think in those terms. I have heard of people who intentionally machine samples for gage R&R to induce variation into the study so they can elevate their NDC just to pass the customer requirements. That is silliness. It is "muda", the very waste we should be eliminating. I like the idea of inputting historic variation, if that data is available, but when all you have is your initial process study on a new project, and the data shows very low variation, NDC can be a waste of your time and cause you to incur unnecessary expense trying to fix a measurement system that may not really be broken.
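For reference, the AIAG ndc is just sqrt(2) times the ratio of part variation to GRR, so low part variation drives it down no matter how good the gage is. A minimal sketch (the numbers are illustrative, not from any real study):

```python
import math

def ndc(part_sd: float, grr_sd: float) -> int:
    """AIAG number of distinct categories: 1.41 * PV / GRR, truncated."""
    return int(math.sqrt(2) * part_sd / grr_sd)

# A very uniform process measured on an accurate CMM (illustrative values):
print(ndc(part_sd=0.002, grr_sd=0.0015))  # -> 1, despite a fine gage

# The same gage with ordinary part-to-part variation:
print(ndc(part_sd=0.01, grr_sd=0.0015))   # -> 9
```

The point the post makes falls straight out of the formula: shrink `part_sd` and ndc collapses even when `grr_sd` is excellent.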
 

Bev D

Heretical Statistician
Leader
Super Moderator
I no longer use NDC (or the older discrimination ratio). I think back in the day it was a noble idea and was helpful for some things. I have found that problem solving in particular, and SPC secondarily, are easier - or at least more straightforward - when we have a real and large NDC. But neither is impossible if you have a small NDC; we just have to be more creative in how we approach both goals.

This really points to the huge downside of black box standards and requirements and having people who do not understand the topic in charge of meeting requirements.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Well, yes...not knowing how to use a tool is the biggest problem - even in problem solving and SPC. And doing things to check a box is pathetic.

But...

When properly applied (and at this point I won't get into Dr. Wheeler's approach. It is statistically more accurate, but I believe you get adequate signals from the AIAG approach for most practitioners) there is some value.

The reason why we do most things in quality is to answer two questions:
1. How do you know?
2. How can you prove it?

In order to make your claim that parts are so similar that your process has no variation:
1. How do you know? You would have had to measure them with a device so accurate that its gage error did not mask the difference between the parts!
2. How can you prove it? Well, if you have another methodology that folds in intrinsic gage error (repeatability) and operator effect (reproducibility) to show that what you are seeing is statistically similar... now is the time to discuss it.

Even if your process doesn't show variation today ("If my variation is truly that low, then I probably don't even need to do SPC on that characteristic, so NDC would be essentially irrelevant anyway"):
1. How do you know it won't tomorrow? Are you claiming it is impossible for a special cause to occur? No wear or material variation will ever be significant? Nothing will break? Going to let your customer be the detection? Also, some processes will vary, but very slowly, such as stamping dies. It may take 3 years - so day to day the change will be minuscule. Yet you want to watch it to predict when to pull the die. It's just that the charting frequency can be very low.
2. How do you prove it? If you have no clue that your gage is giving you adequate statistical resolution, you really haven't proven your point.

As for having no better measurement system available: the system may actually be fine if the NDC is calculated correctly, or else you need to understand the gage's limitation and discuss with your customer that your measurement capability can't detect process changes well.

"I have heard of people who intentionally machine samples for gage R&R to induce variation into the study so they can elevate their NDC just to pass the customer requirements." OK..I get it is for customer requirements if you do not understand the tool, but what you are really doing is making variation that is likely to occur over time that has not yet occurred. Totally legitimate. It is even legitimate to use parts slightly out of specification - as long as you use historical or historical estimate of process variation in the calculation, not the variation of the parts presented to the study.

Yes... one of the biggest problems of gage R&R is that it wants to know the variation of your process over time. If it is a new process you simply do not have that yet, and your current studies won't determine it... it is a real struggle. That is why I use an estimate of the process variation over time, assuming it will be a capable process - about 75% of the tolerance. It is an estimate, but probably far more reliable, or at least more reasonable, than any other projection of the process over time.
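One way to read the 75%-of-tolerance estimate is to treat the eventual 6-sigma process spread as 75% of the total tolerance and back out a part standard deviation from that. A hedged sketch of plugging that estimated PV into the ndc (the numbers and the `capable_fraction` default are illustrative assumptions, not from the post):

```python
import math

def ndc_from_estimated_pv(tolerance: float, grr_sd: float,
                          capable_fraction: float = 0.75) -> int:
    # Assume a capable process will eventually spread over about 75% of
    # the tolerance, i.e. 6 * sigma_part = 0.75 * tolerance (an estimate).
    part_sd = capable_fraction * tolerance / 6.0
    return int(math.sqrt(2) * part_sd / grr_sd)

# Total tolerance of 0.26 mm, GRR sigma of 0.004 mm (illustrative values):
print(ndc_from_estimated_pv(0.26, 0.004))  # -> 11
```

This projects what the ndc would look like once the process fills its expected spread, instead of letting ten unusually similar study parts drag the number down.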

I used to think exactly the way you did in your post. But a little more thought, and correcting the calculations, started to make a lot more sense. Will the results be so accurate you can carve them into stone tablets and bring them down from the mountain? No, but they can help justify your gage choice and prevent the common error of a gage - and incorrect usage of the gage - masking your true process variation.
 

Welshwizard

Involved In Discussions
Hi Leftoverture,

I think that, next to the P/T ratio, this is the second-ranked aspect on which an awful lot of time is spent trying to make sense of what it means in the context of a gage study. Of course, we are all human, and if a score is favourable we can bask in the sun; if not, it's all hands to the pump and out come the crisis managers.

In order to make sense of the so-called ndc, it's worth a bit of history of how it came about.

Back in the early to mid 1980s, Don Wheeler and Richard Lyday were compiling their first EMP book. Early feedback prompted them to think about a method of describing the usefulness of a measurement process for discerning variation in a stream of products. They already had the X-bar chart as part of the tool set, so any points outside the control limits would be a strong signal that the measurement process had the resolution to pick up part variation.

Engineers' feedback suggested that they would like to classify this and place some kind of summary on it to report. Don and Richard came up with the Classification Ratio and its cousin the Discrimination Ratio. However, the non-linear nature of the outcome meant that it was too complex to use, and by the late 1980s it was dropped for the much easier to use and interpret Intraclass Correlation Coefficient (ICC).

For some reason which is lost, the AIAG group in the earliest days took the Classification Ratio, called it the ndc, and added scores. Let's look at what the ndc is reported to describe:

The AIAG description of ndc implies that it is a number representing the number of groups the measurement process can distinguish, where the higher the number, the better the chance the measurement process has of discerning one part from another.

What the ndc actually describes:

We can be very precise about this, because the ndc is the Classification Ratio (CR) which came from Don Wheeler and Richard Lyday. The CR merely characterises the ratio of product to measurement variation. Some have interpreted this ratio as the number of non-overlapping groups of similar parts that could be established between the natural process limits. However, this rather complex interpretation implies that the parts have to be sorted into groups of parts that fit together.
Put another way, if you were using the measurements to sort the product stream into categories such as high, medium and low zones within the natural limits (not specs), then the ndc value would define the maximum number of categories to use.
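For what it's worth, the Classification Ratio and the ICC carry the same information: the ndc is just a monotone transform of the ICC, sqrt(2*rho/(1-rho)). A small sketch showing that algebra numerically (the variance components are made-up illustrative values):

```python
import math

def icc(part_var: float, meas_var: float) -> float:
    """Intraclass correlation: share of total variance due to the parts."""
    return part_var / (part_var + meas_var)

def ndc_from_icc(rho: float) -> float:
    """Classification Ratio (AIAG ndc) expressed as a function of the ICC."""
    return math.sqrt(2 * rho / (1 - rho))

rho = icc(part_var=0.09, meas_var=0.01)  # 90% of variance from the parts
print(rho, ndc_from_icc(rho))            # ndc comes out around 4.24
```

So a move from the CR to the ICC changed the reporting scale, not the underlying quantity being summarised.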


Of course, you may be running a process where this is a legitimate aim, but it's hardly world class, and it has nothing to do with whether the measurement has the resolution to discern between parts, however you play with the value.

So there you have it. Of course, as has been said many times on here, if you want a more complete and faithful estimate of part variation you will need to measure more parts and be careful not to bias part selection, as this computation has nothing to do with the specification.

Hope this helps
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
"So there you have it. Of course, as has been said many times on here, if you want a more complete and faithful estimate of part variation you will need to measure more parts and be careful not to bias part selection, as this computation has nothing to do with the specification."

If you calculate it based on the AIAG calculation, you are correct - it is totally based on the assumption that the 10 parts in your gage R&R are a pristine statistical representation of the variation of your process over time. The probability of that occurring is nil. The gage R&R also assumes that... unless you use the historical process variation (MSA 4th edition, page 121), which is far more likely to give a representative result. BUT the ndc must also use that PV in the calculation to benefit from that correction. IF you do that, you have a pretty good signal - call it buckets or whatever - of a statistical resolution that includes the effect of gage error. IF it is ignored, you are likely to be a victim of gage error masking your process variation. Beyond that, you may STILL have measurement error masking your true process variation - but that takes even more observation and less calculation.
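To illustrate how unlikely 10 parts are to pin down the process variation, here is a quick simulation sketch (standard normal "parts" with a true sigma of 1.0, purely illustrative):

```python
import random
import statistics

random.seed(1)
true_sd = 1.0

# Estimate part variation from 1000 independent 10-part samples:
estimates = [statistics.stdev([random.gauss(0, true_sd) for _ in range(10)])
             for _ in range(1000)]

# The 10-part estimates scatter widely around the true value of 1.0,
# so any ndc built on a single study's PV inherits that scatter:
print(min(estimates), max(estimates))
```

With only 10 parts, the estimated PV routinely lands well below or above the truth, which is the case for substituting historical or estimated process variation where it exists.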
 

leftoverture

Involved In Discussions
All good discussion, folks, but I have been doing this quality thing for 42 years now and I can tell you that back in the day, we did things much more simply. We used common-sense things like the 10:1 rule for gage selection, and we actually counted distinct categories in our data rather than calculating them. Sometimes, old-school simplicity is still darned reliable and efficient. I love statistics and use them daily, but I also believe in keeping things simple. Honestly, if we have a tolerance of, say, ±0.13 mm, and someone doesn't understand that a CMM has plenty of discrimination for that without doing a study, I'm not sure that person is ready to be a metrology professional.
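As a back-of-the-envelope illustration of that common-sense check (the 0.001 mm CMM resolution is an assumed figure, not from the post):

```python
def resolution_ratio(total_tolerance: float, gage_resolution: float) -> float:
    """Classic 10:1 rule of thumb: tolerance should be >= 10x the resolution."""
    return total_tolerance / gage_resolution

# Tolerance of ±0.13 mm is 0.26 mm total; a CMM reading to 0.001 mm (assumed)
# clears the 10:1 rule by a wide margin:
ratio = resolution_ratio(0.26, 0.001)
print(ratio, ratio >= 10)  # 260.0 True
```

Which is the point being made: for a tolerance that loose, the gage-selection arithmetic is trivial and hardly needs a formal study.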
 

Bev D

Heretical Statistician
Leader
Super Moderator
The issue here is that these 'scores' are not really useful. We can squint and hold our heads sideways, invoke yeah-buts, and maybe think our way out of what the standard tells us to do - but that is so much work. The waste and opaqueness outweigh any meager benefit we may get. And in the end it doesn't do us any good, because it keeps us from embracing and utilizing truly effective methods.

Leftoverture has a real point. We don't need mathiness. We need to think about the process, understand the question before us and take data in a planful way that answers the question we are asking, plot it on a graph and think about it.

Really the masters had it right: Deming, Youden, Seder, Ott to name just the cream of the crop.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Part of the thinking - for example - is 10:1 to what? The specification? Not if you are trying to do SPC...unless chunky data is adequate for your analysis. There are some nice graphs that come out of Gage R&R studies...but I don't believe any of them clearly illustrate what ndc is supposed to inform a person of. That said, perhaps a chart of data that does that chore would be a welcome addition. Continuous improvement. Deming knew there were issues that were unknowable. This isn't one of them.
 

Bev D

Heretical Statistician
Leader
Super Moderator
The Youden plot for standard R&R is great. The multi-vari chart (Seder) is perfect for most NDE, destruct, and other weird systems. Both plot the data you need to see in a form that is simple and straightforward - no mental gymnastics required.
 