After going through the same discussions throughout the years, it comes down to this: don't worry about it. Pick a measure of dispersion and use it consistently.

Juran says in *Quality Planning and Analysis* that the standard deviation (sample or population) "is only a formula. There is no hidden meaning to the standard deviation, and it is best viewed as an *arbitrary* index."

Regarding total *estimated* variation, the MSA manual says on page vi that *the reader can choose* whether to use 5.15s (covering 99% of the total estimated distribution) or 6s (99.73%). Either way, it is still just an estimate.
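If you want to see where those two coverage figures come from, here is a small sketch (my own illustration, not from the MSA manual) that computes the fraction of a normal distribution captured by each spread, using only the standard-normal CDF via `math.erf`:

```python
from math import erf, sqrt

def normal_coverage(k):
    """Fraction of a normal distribution within +/- k sigma of the mean."""
    return erf(k / sqrt(2))

# A total spread of 5.15s means +/- 2.575 sigma; 6s means +/- 3 sigma.
print(f"5.15s spread covers {normal_coverage(5.15 / 2):.2%}")  # ~99.00%
print(f"6s spread covers    {normal_coverage(6 / 2):.2%}")     # ~99.73%
```

Both factors are just conventions for converting one estimated standard deviation into an estimated "total" spread, which is the point: neither number is more "true" than the other.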

And, if comparing a distribution to a print tolerance, the tolerance itself is often an *arbitrary zone* chosen by an engineer.

In short, it is an exercise in picking nits. You can discuss the accuracy and precision of the statistic until the cows come home, but you will end up correcting a gage if it generally appears to have "too much" variation, and you will leave it alone if it appears to be consistent. It is like saying that if you are standing about 10 feet from the edge of a cliff, you are at a safe distance (even if your true distance is 9 feet). But if you appear to be 6 inches from the edge when your true distance is 5.2 inches, it is still dangerous - back up!

I suggest you put the "hot topic" to rest. Pick a measure of dispersion, and use it consistently.