I can understand if nobody has a quick easy solution - perhaps we can have a discussion about the problems?
A percent-of-tolerance study is typically the industry standard acceptance criterion when validating a measurement system. When inspecting True Position, though, the tolerance is not a constant if the maximum material condition (MMC) modifier applies.
This means that every part has a unique positional tolerance, one that depends on the feature's actual size: the actual size of the physical feature, minus the MMC size of the feature, gets added to the positional tolerance as bonus tolerance.
If I have a hole that is supposed to be 10mm +/- 1mm, then the MMC condition of the hole is 9mm (the smallest allowable hole). If my actual hole physically measures 9.5mm, there is a 0.5mm difference between the actual size of my hole and the MMC condition of the hole.
This 0.5mm gets added as bonus to the positional tolerance stated in the feature control frame (the value carrying the Ⓜ modifier).
This means that if my hole is made at the smallest allowable diameter, then my position has to be perfect; but as the hole gets larger, my position can deviate from perfect.
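To make the arithmetic concrete, here is a minimal sketch of the bonus-tolerance calculation described above. The function name and the stated positional tolerance of 0.2mm are illustrative assumptions, not values from any actual drawing:

```python
def effective_position_tol(stated_tol_mm, actual_size_mm, mmc_size_mm):
    """Effective positional tolerance under the MMC modifier:
    the stated tolerance plus the bonus, where the bonus is the
    actual size minus the MMC size (for an internal feature
    such as a hole, MMC is the smallest allowable size)."""
    bonus = actual_size_mm - mmc_size_mm
    if bonus < 0:
        # A hole smaller than MMC violates its size limits outright.
        raise ValueError("actual size is below MMC; size tolerance violated")
    return stated_tol_mm + bonus

# The hole from the example: 10mm +/- 1mm, so MMC = 9mm.
# An actual hole of 9.5mm yields a 0.5mm bonus.
print(effective_position_tol(0.2, 9.5, 9.0))  # 0.2 stated + 0.5 bonus = 0.7
```

At the MMC size itself the bonus is zero and only the stated tolerance remains, which is the "position has to be perfect" limit case when the stated tolerance is zero at MMC.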
The problem is: how do I validate the measurement system if my tolerance is not a constant? Every part will have a different hole size, which means every part will have a different positional tolerance.
The situation becomes further complicated when you consider that the inspection of the size of the hole will also have some amount of variability. This means that every time I measure the feature on the same part, I will find a new tolerance to use for my positional inspection.
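The point above can be illustrated with a small simulation: repeated size measurements of the same hole, perturbed by gauge noise, each imply a different effective positional tolerance. The numbers (0.2mm stated tolerance, 0.02mm gauge standard deviation) are illustrative assumptions:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable
true_size_mm = 9.5   # the (unknowable) true hole diameter
mmc_mm = 9.0         # MMC size from the 10mm +/- 1mm example
stated_tol_mm = 0.2  # assumed stated positional tolerance
gauge_sd_mm = 0.02   # assumed gauge repeatability (std. dev.)

effective_tols = []
for trial in range(5):
    # Each size measurement of the SAME part differs slightly...
    measured = random.gauss(true_size_mm, gauge_sd_mm)
    # ...so the bonus, and hence the effective tolerance, differs too.
    eff_tol = stated_tol_mm + (measured - mmc_mm)
    effective_tols.append(eff_tol)
    print(f"measured size {measured:.3f} mm -> "
          f"effective position tol {eff_tol:.3f} mm")
```

Five inspections of one physical part yield five different denominators for a percent-of-tolerance calculation, which is exactly the difficulty being raised.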
I think this is a good place to stop for now
So - how can this situation be addressed?