Performing a statistical study on a geometrically toleranced feature with a "bonus tolerance" callout is a complete waste of time and energy.
If the probability of a defect is very remote I would agree. If one does not know the probability of a defect I would disagree.
In my mind, it makes more sense to "study" the Basic dimensions as they relate to the feature location RFS.
I agree!
I agree wholeheartedly, and think it's particularly ridiculous for cast and stamped holes that are never going to move.
If the location of a punch or core pin cannot be improved by offsetting the locator segment of the insert I'd agree. In that case it would be better to solve for the optimum punch or core pin size that minimizes the defects of size and variable position simultaneously.
There might be some application in formed or machined parts, but it almost always makes better sense to analyze where the features are wrt a nominal dimension.
I can't disagree with that.
Since the number reported for position is not indicative of whether the position is erring in X or Y, or positive/negative in those directions, results can't be relied on except by those that just want a warm fuzzy feeling.
It is the buyer or the customer that wants the warm fuzzy feeling. They generally require that the probability of a defect is below some remote level. You are right that the capability number is not indicative of whether the position is erring in X or Y, although that is not its job. Before variation causes the probability of a defect to rise above that remote level, it is important to identify the root causes of the variation and attempt to correct them. I recommend, as you do, monitoring and controlling the coordinates of a position rather than the resultant deviation diameter.
Let me say that checking variable tolerances with attribute gages is a proper but highly inefficient way to get that warm fuzzy feeling. In order to feel the warmth that most buyers or customers demand these days (a Ppk somewhere between 1.33 and 1.67), your attribute sample size has to be very, very large. At 1.33 there cannot be more than 1 defect in 31,250 parts, and to get any measure of confidence in the prediction there must be some repetition of defects in the sample relative to the sample size. So if your capability threshold were 1.33 and your sample size 250,000, you would expect no more than 8 defects. At 1.67 there cannot be more than 40 defects in a billion parts. Consider these sample sizes!!
Since Sinned posted the same Variable tolerance question on another forum I'll paste my response to that here.
Geometric tolerances with an MMC modifier, e.g. Ø9.4-8.9 |⊕|Ø0.36(M)|A|B|C|, have a variable upper specification limit. That limit can be visualized by making a histogram with both distributions on the same graph, the one for the geometric tolerance and the one for feature size. The scale of the graph begins at 0, and at the geometric tolerance's specified USL the size tolerance corresponding to the MMC condition would begin. When the two distributions are plotted on this graph the histogram reveals the extent to which they intersect.
The geometric tolerance distribution will often appear skewed toward the zero boundary (this happens because the computed deviation is always a positive number that reflects the size of the diameter zone needed to contain the deviation). If a scatter diagram shows that the means of the X,Y position deviation coordinates are roughly centered on target, the histogram will appear more skewed; conversely, the more they are off target, the more normal the histogram will appear.
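That skewness behavior is easy to demonstrate by simulation. The sketch below (illustrative offsets and sigmas of my own choosing, not from the post) draws X,Y coordinate errors, converts them to diametral position deviations, and compares the sample skewness of an on-target process against an off-target one:

```python
import random
from math import sqrt

def sample_skewness(data):
    """Fisher-Pearson sample skewness (third standardized moment)."""
    n = len(data)
    mean = sum(data) / n
    sd = sqrt(sum((x - mean) ** 2 for x in data) / n)
    return sum(((x - mean) / sd) ** 3 for x in data) / n

def position_deviations(mx, my, sd, n, rng):
    """Diametral position deviations 2*sqrt(x^2 + y^2) for X,Y
    coordinate errors centered at (mx, my) with stdev sd."""
    return [2.0 * sqrt(rng.gauss(mx, sd) ** 2 + rng.gauss(my, sd) ** 2)
            for _ in range(n)]

rng = random.Random(1)
on_target = position_deviations(0.0, 0.0, 0.05, 20000, rng)    # means on target
off_target = position_deviations(0.15, 0.15, 0.05, 20000, rng) # means well off target

skew_on = sample_skewness(on_target)    # Rayleigh-like, noticeably skewed
skew_off = sample_skewness(off_target)  # approaches a normal shape
print(f"skewness on target:  {skew_on:.2f}")
print(f"skewness off target: {skew_off:.2f}")
```

The on-target case follows a Rayleigh-type distribution piled up against the zero boundary; push the coordinate means off target and the deviation distribution flattens out toward normal, just as described above.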
To figure the Ppk of a variable geometric tolerance you have to estimate the intersecting area of the two distributions, as opposed to the area between their means. When one is non-normal this is a difficult problem but not impossible; however, you can estimate that area somewhat less accurately with the classic stress-vs-strength equation if you treat both distributions "as normal." If we assign letters to the mean and standard deviation values for size (Ms=mean size, Ss=stdev size) and position (Mp=mean position, Sp=stdev position) the equation for Ppk looks like this:
One more thing: the MEAN value for size Ms has to be converted to its corresponding mean variable position tolerance Mt. Take the difference between the mean size and its MMC limit (the mean bonus) and add it to the constant tolerance value specified for the geometric tolerance; that gives you the mean variable tolerance from the mean size.
Variable Tolerance Ppk = (Mt-Mp)/(3*sqrt(Ss^2+Sp^2))
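The conversion and the Ppk equation above can be put together in a few lines. The numbers below are illustrative values I picked for the Ø9.4-8.9 hole example (assuming MMC = 8.9 and a specified tolerance of 0.36 at MMC); they are not from the post:

```python
from math import sqrt

def variable_position_ppk(Ms, Ss, Mp, Sp, mmc, spec_tol):
    """Ppk for a position tolerance with an MMC bonus, treating both the
    size and position distributions as normal (stress-vs-strength form).

    Ms, Ss   : mean / stdev of feature size
    Mp, Sp   : mean / stdev of the diametral position deviation
    mmc      : MMC size limit (8.9 assumed for the hole example)
    spec_tol : position tolerance specified at MMC (0.36 in the example)
    """
    Mt = spec_tol + abs(Ms - mmc)  # mean variable (bonus-adjusted) tolerance
    return (Mt - Mp) / (3.0 * sqrt(Ss ** 2 + Sp ** 2))

# Illustrative (assumed) process numbers:
ppk = variable_position_ppk(Ms=9.15, Ss=0.05, Mp=0.20, Sp=0.10,
                            mmc=8.9, spec_tol=0.36)
print(f"Variable-tolerance Ppk = {ppk:.2f}")
```

With these assumed numbers the mean bonus is 0.25, so Mt = 0.61 and the combined standard deviation in the denominator pools the size and position spreads.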
To figure the Pp process potential of a geometric tolerance you must examine the scatter plots of the measured coordinates and determine whether the coordinates can be adjusted to target or not. If they can be improved refigure the geometric tolerance deviations as if they had been adjusted (understanding that the distribution shape will change). Pp = Ppu (coordinate means adjusted to target)
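Refiguring the deviations "as if they had been adjusted" just means subtracting the coordinate means before recomputing the deviation diameters. A simulation sketch (all numbers assumed for illustration) showing how the as-run Ppu compares to the potential after mean-centering:

```python
import random
from math import sqrt

def diametral_dev(xs, ys):
    """Position deviation diameters from X,Y coordinate errors."""
    return [2.0 * sqrt(x * x + y * y) for x, y in zip(xs, ys)]

def ppu(devs, usl):
    """One-sided upper capability index against a constant USL."""
    n = len(devs)
    m = sum(devs) / n
    s = sqrt(sum((d - m) ** 2 for d in devs) / (n - 1))
    return (usl - m) / (3.0 * s)

rng = random.Random(7)
# Coordinates running off target by an assumed offset
xs = [rng.gauss(0.08, 0.03) for _ in range(5000)]
ys = [rng.gauss(-0.05, 0.03) for _ in range(5000)]

usl = 0.36
ppk_as_run = ppu(diametral_dev(xs, ys), usl)

# Refigure the deviations as if the coordinate means were adjusted to target
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
pp_potential = ppu(diametral_dev([x - mx for x in xs],
                                 [y - my for y in ys]), usl)
print(f"Ppk as run = {ppk_as_run:.2f}, Pp (means adjusted) = {pp_potential:.2f}")
```

Note the distribution shape changes after centering (it becomes Rayleigh-like, skewed toward zero), which is exactly the caveat in the paragraph above.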
To figure the Pp process potential of a VARIABLE geometric tolerance you must first adjust the means to target (if possible) and refigure the geometric tolerance distribution as described above and then you must find the optimum mean value for size that will make the Ppu for the variable geometric tolerance and the Ppu for size equivalent. This minimizes PPM defective for both size and variable position simultaneously. By setting the equation above equal to the equation for Ppu Size and solving for the optimum target size we have:
One more thing: the USL value for size has to be converted to its equivalent maximum variable position value, USLpmax. Add the difference between the size USL and LSL to the minimum (specified) position tolerance.
Optimum variable tolerance Mt[optimum] = (Ss*Mp + sqrt(Ss^2+Sp^2)*USLpmax)/(Ss+ sqrt(Ss^2+Sp^2))
Convert Mt[optimum] back to Ms[optimum] and we have:
(USLs-Ms[optimum])/(3*Ss)=Ppu[size]=Pp[variable tolerance]=Ppu[variable pos]=(Mt-Mp)/(3*sqrt(Ss^2+Sp^2)).
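Here is the optimum-target calculation worked through in code, using the Ø9.4-8.9 hole example (so MMC = LSL = 8.9 is assumed) and the same illustrative position statistics as before. The check at the end confirms the two Ppu values come out equal, which is the whole point of the solve:

```python
from math import sqrt

def optimum_size_target(Mp, Sp, Ss, usl_s, lsl_s, spec_tol, mmc):
    """Solve for the size target that equates Ppu(size) and Ppu(variable
    position), per the equations above. Hole example assumed: MMC = LSL."""
    rss = sqrt(Ss ** 2 + Sp ** 2)
    usl_pmax = spec_tol + (usl_s - lsl_s)  # maximum variable position limit
    Mt_opt = (Ss * Mp + rss * usl_pmax) / (Ss + rss)
    Ms_opt = mmc + (Mt_opt - spec_tol)     # convert back to a mean size
    return Ms_opt, Mt_opt

# Illustrative (assumed) numbers:
Ms_opt, Mt_opt = optimum_size_target(Mp=0.20, Sp=0.10, Ss=0.05,
                                     usl_s=9.4, lsl_s=8.9,
                                     spec_tol=0.36, mmc=8.9)
rss = sqrt(0.05 ** 2 + 0.10 ** 2)
ppu_size = (9.4 - Ms_opt) / (3 * 0.05)   # Ppu for size at the optimum target
ppu_pos = (Mt_opt - 0.20) / (3 * rss)    # Ppu for the variable position
print(f"Ms_opt = {Ms_opt:.4f}, Ppu size = {ppu_size:.3f}, Ppu position = {ppu_pos:.3f}")
```

Running off-optimum in either direction trades size defects against position defects; this target is where the two one-sided PPM-defective rates balance.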
This method slightly underestimates the capability and potential capability of a variable tolerance because the predictions are made by assuming both distributions, size and position, are normal. If capability analysis software were written to figure the intersecting area of dissimilar distributions, the estimate would improve somewhat.
Other methods have been touted as a solution to this variable tolerance capability analysis problem but I have found them to be lacking. Most combine the individual variable bonus tolerance with the individual position deviation and then compare the resulting surrogate variable to a constant limit. These methods (Adjusted TP, Residual Tolerance, Percent-of-Tolerance, and effective size compared to virtual condition) can mask or amplify the variation in the surrogate relative to the variation inherent in the contributing sources, so I have found their predictions untrustworthy.
Some will say that the capability should be determined on the coordinates separately. I disagree! The specifications are often given as cylindrical zones where the maximum coordinate displacements are a function of one another. To limit that variation to something other than the design tolerance is to give a false capability. The variation is always different in each coordinate distribution.
There are also methods to compare the elliptical boundary of the scatter plot to the circular boundary of position tolerance but the circular boundary is regarded as a constant value in those analysis methods so even those methods fall short of variable tolerance capability analysis.
I hope this explanation helps,
Paul F. Jackson