Bill - did you read my paper? From what you just said you clearly don’t trust several critical items of the method AIAG has adopted.

Your article raises some good points; I am going through it in a little more detail.

On page 26, "A subgroup size of 10 can be biased by one outlier" ... the outlier should be evident in the MSA assessment, such as in the range chart, and also in supporting analyses like a normal probability plot. At that point, we know something is wrong with the study (or that there is a risk of non-random, assignable-cause measurement errors).

Also, 10 parts is clearly an inadequate sample for estimating part variation; I see it as more of an academic exercise in which we compare what we get from the MSA to what we get from a process capability study that uses 30 or more parts. I would not rely at all on the MSA's part variation to reflect actual process performance. I am also not sure why part variation is even relevant (unless one wants to compare part variation to gage variation), because the key metric is the ratio of the gage standard deviation to the specification width or tolerance, not to the process standard deviation.
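For concreteness, the ratio I have in mind is the usual precision/tolerance calculation: a multiple of the gage standard deviation divided by the tolerance width. A minimal sketch (the function name, the example numbers, and the 6-sigma multiplier are my own illustrative conventions, not figures from either paper; some texts use 5.15 instead of 6):

```python
def pt_ratio(sigma_gage, lsl, usl, k=6.0):
    """Precision/tolerance ratio: k * gage standard deviation over the
    tolerance width (USL - LSL). k = 6.0 covers the ~99.73% spread."""
    return k * sigma_gage / (usl - lsl)

# Illustrative only: gage standard deviation 0.05 on a 10 +/- 1 spec
print(pt_ratio(0.05, 9.0, 11.0))  # 0.15, i.e. the gage consumes 15% of tolerance
```

The point is that the denominator is the specification width, so the result stands on its own regardless of how well (or poorly) 10 parts estimate the process variation.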

I would also, as you point out on page 28, be hesitant to add %EV and %AV as calculated, for exactly the reason you describe. The deliverable is the total gage standard deviation, from which a precision/tolerance ratio can be calculated. I know that various software packages report these quantities, but my webinar on MSA does not bother with them at all because the deliverable is the P/T ratio.
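The underlying issue is that standard deviations combine in quadrature, not linearly, so summing %EV and %AV overstates the total. A quick numerical illustration (the sigma values are made up for the example, not taken from either of our papers):

```python
import math

sigma_ev = 0.04  # repeatability (equipment variation), illustrative value
sigma_av = 0.03  # reproducibility (appraiser variation), illustrative value

# Variances add; standard deviations do not
sigma_grr = math.sqrt(sigma_ev**2 + sigma_av**2)
print(sigma_grr)             # 0.05 -- the correct combined gage sigma
print(sigma_ev + sigma_av)   # 0.07 -- adding sigmas (or their %) overstates it
```

Since %EV and %AV are each just a sigma divided by a common denominator, adding the percentages inherits the same error as adding the sigmas.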

I am going to continue looking through your presentation, and I will also see what I can find on the Youden plot, which looks very interesting.