# MSA for Counting Measurement Systems

#### Piotr Stoklosa

Hi guys,

I have a probably very basic question, but I can't find an answer to it: how do you evaluate counting measurement systems?

Here is an example of such a system. Let's assume we have plates with black dots. An operator has to count the number of dots on every plate, so the result of the measurement is the number of dots. When we analyse this measurement system, we want to evaluate the error in the operators' counting (for example, there are 20 dots on the plate, but one operator found 19, the second 21, the third 20, etc.). They can also make mistakes in consecutive trials.

I can imagine an experiment where we prepare 10 plates with different numbers of dots and then have 3 operators count the dots on each plate 3 times. But what should I do next with these results?

In my opinion this is not a continuous system, because we have discrete data, so we cannot use a usual GRR. This is also not an attribute decision, so we cannot use kappa or another attribute method. Do you have any suggestions?

Thank you
Piotr

#### Miner

##### Forum Moderator
You have at least two options. One would be to perform an attribute agreement analysis for ordered (ordinal) data. However, count data is much more precise than ordinal data because the relative magnitude is known.

I recommend that you take advantage of the fact that at counts > 10, the Poisson distribution can be approximated by the Normal distribution. You have what is known as pseudo-continuous data, so a standard crossed R&R study is feasible. One area where you could have problems: if the variation in the number of dots across plates is small, you may run into a pseudo-resolution problem.
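As an illustration of the crossed study described above, here is a minimal ANOVA-based R&R sketch in Python. The plate counts are invented, and the classic crossed-design variance-component formulas (parts × operators with interaction) are assumed:

```python
# Sketch of a crossed Gage R&R treating counts as pseudo-continuous.
# counts[part, operator, trial]: 5 plates, 3 operators, 2 trials -- invented data.
import numpy as np

counts = np.array([
    [[12, 12], [13, 12], [12, 12]],
    [[25, 24], [25, 25], [24, 25]],
    [[18, 18], [17, 18], [18, 18]],
    [[31, 30], [31, 31], [30, 31]],
    [[22, 22], [22, 21], [22, 22]],
], dtype=float)

p, o, r = counts.shape
grand = counts.mean()
part_means = counts.mean(axis=(1, 2))
op_means = counts.mean(axis=(0, 2))
cell_means = counts.mean(axis=2)

# Two-way ANOVA sums of squares
ss_part = o * r * ((part_means - grand) ** 2).sum()
ss_op = p * r * ((op_means - grand) ** 2).sum()
ss_cell = r * ((cell_means - grand) ** 2).sum()
ss_int = ss_cell - ss_part - ss_op
ss_rep = ((counts - cell_means[:, :, None]) ** 2).sum()

ms_part = ss_part / (p - 1)
ms_op = ss_op / (o - 1)
ms_int = ss_int / ((p - 1) * (o - 1))
ms_rep = ss_rep / (p * o * (r - 1))

# Variance components (negative estimates truncated to zero)
var_rep = ms_rep                                  # repeatability (EV)
var_int = max((ms_int - ms_rep) / r, 0.0)
var_op = max((ms_op - ms_int) / (p * r), 0.0)     # reproducibility (AV)
var_part = max((ms_part - ms_int) / (o * r), 0.0)

var_grr = var_rep + var_op + var_int
var_total = var_grr + var_part
pct_grr = 100 * np.sqrt(var_grr / var_total)
print(f"%GRR (of total variation): {pct_grr:.1f}%")
```

With this invented data the plates vary far more than the operators do, so %GRR comes out small; real counts would of course give different numbers.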

#### Piotr Stoklosa

To Miner:

Thank you for showing me the direction. I also agree that the best idea would be to assume pseudo-continuous data. You are also right that I can have a pseudo-resolution problem if the number of dots is similar on each plate.

The other interesting issue is that I can easily establish a reference value (true value), but if I use GRR I will ignore this information. Maybe I can use it in a separate bias study with the same data?
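A separate bias study on the same data could be sketched like this; the measurements below are invented counts for a single plate whose reference value is assumed to be 20:

```python
# Minimal bias check against a known reference count; all numbers invented.
import statistics

reference = 20                                        # true number of dots
measurements = [19, 20, 20, 21, 20, 19, 20, 20, 20]   # 3 operators x 3 trials

bias = statistics.mean(measurements) - reference
s = statistics.stdev(measurements)
n = len(measurements)
t = bias / (s / n ** 0.5)   # one-sample t statistic for H0: bias = 0
print(f"bias = {bias:+.3f}, t = {t:.2f}")
```

Comparing t against the critical value for n - 1 degrees of freedom tells you whether the observed bias is statistically significant.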

I was also thinking about using the Intraclass Correlation Coefficient (ICC). Do you have any opinion about it?

Piotr

#### Jan Rew

Hi,

For the sake of simplicity, why not try a Kappa method (e.g. according to MSA 4th Edition)? Assuming that you are not necessarily interested in a measure of the spread of the number of holes reported by operators (I mean EV, AV) but only in their effectiveness at finding the correct number, you could easily get ready-to-use data for Kappa calculations:

I would imagine a set of, say, 40 plates with different numbers of holes. The operator would know the required number of holes for each plate. Some plates (say 21) would have the required number (conforming parts); some (the remaining 19) would have slightly too many or too few holes (those plates would act as non-conforming parts). The rest of the procedure could be identical to the widely known Kappa method.

If Kappa and the other indices (false alarm rate, missed alarm rate) come out acceptable, the task is done: the measurement (in fact, inspection) system's capability is proved.
If Kappa or the other indices are unacceptable, you could try to identify causes by looking at the capability indices for individual operators.
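A quick sketch of Cohen's kappa for one operator against the reference decision, following the 21 conforming / 19 non-conforming setup above; the agreement counts in the 2×2 table are invented for illustration:

```python
# 2x2 agreement table: operator decision vs. reference decision (invented counts).
a = 20  # operator: conforming,     reference: conforming
b = 1   # operator: conforming,     reference: non-conforming (missed alarm)
c = 1   # operator: non-conforming, reference: conforming     (false alarm)
d = 18  # operator: non-conforming, reference: non-conforming

n = a + b + c + d
p_obs = (a + d) / n                                      # observed agreement
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)

miss_rate = b / (b + d)          # non-conforming plates accepted
false_alarm_rate = c / (a + c)   # conforming plates rejected
print(f"kappa = {kappa:.3f}, miss = {miss_rate:.3f}, "
      f"false alarm = {false_alarm_rate:.3f}")
```

The same table per operator would give the individual indices mentioned above.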

Just an idea, not requiring extra statistical methods.
What do you think?

Greetings from Poland!
Jan

#### Miner

##### Forum Moderator
Yes, you can use the true value in a bias study. If you choose to go with the ordinal attribute agreement analysis, you can also include the true value.

The ICC is a good measure provided that you include both repeatability and reproducibility in the calculation. It is not commonly used because more people are familiar with %SV, ndc, etc.
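A minimal ICC sketch along these lines, using the Shrout–Fleiss ICC(2,1) form (absolute agreement, single rater, two-way layout) so that both operator and residual variation enter the calculation; the parts × operators ratings are invented counts:

```python
# ICC(2,1) from a two-way parts x operators layout; ratings are invented.
import numpy as np

# x[part, operator] -- one count per cell
x = np.array([
    [12, 13, 12],
    [25, 25, 24],
    [18, 17, 18],
    [31, 31, 30],
    [22, 22, 21],
], dtype=float)

p, o = x.shape
grand = x.mean()
ms_part = o * ((x.mean(axis=1) - grand) ** 2).sum() / (p - 1)
ms_op = p * ((x.mean(axis=0) - grand) ** 2).sum() / (o - 1)
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
ms_err = (resid ** 2).sum() / ((p - 1) * (o - 1))

# Shrout-Fleiss ICC(2,1): absolute agreement, single rater
icc_2_1 = (ms_part - ms_err) / (
    ms_part + (o - 1) * ms_err + o * (ms_op - ms_err) / p
)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```

Values near 1 mean that part-to-part variation dominates the operator and repeatability terms, which is the same conclusion a small %GRR would give.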