I understand where you are coming from - however, the CMM is not functioning in the same manner during calibration as it is during the inspection program.
But how do I at least demonstrate that it does not have an effect now that I am using that gage in a different manner?
OK - so now I better understand your dilemma.
This is true for ALL measurement systems. You can pass calibration, but when you use the system in real life, with real parts, real operators, real fixtures, real environment, etc., you may be 'inaccurate', biased, or have a non-linear response.
Unfortunately, most of the common literature and 'standards' on this topic are rather quiet about how to assess truth vs. results in a real use environment.
However, the method for assessing this is rather simple (although not always easy). It is commonly referred to as method comparison; yours would be a special case where the system being assessed is compared to some 'true' method.
- Determine a 'gold standard' method
- Select real parts, or even a handful of 'gold parts' or NIST standards if you must, that span the range of the dimension or property you will be measuring.
- Measure each of the parts TWICE with EACH measurement system. You should randomly select which parts get measured with which system IF time is a factor.
- Assess repeatability of each system. There can be no real assessment of bias until repeatability is understood.
- Now take the first measurement of each part for each system and compare the results. I use a Youden plot and the Bland-Altman method to look at and quantify the bias and linearity of the 'production' system vs. 'truth'. A rough sketch of the calculations is below the list.
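If it helps, here is a minimal sketch in Python of the arithmetic behind the repeatability and comparison steps. The part values and the helper names (repeatability_sd, truth, prod) are made up for illustration, not taken from the attached document. It estimates each system's repeatability from the duplicate readings, then runs a basic Bland-Altman comparison (bias and 95% limits of agreement) on the first readings, with a quick regression of the differences on the averages as a crude linearity check.

```python
import numpy as np

# Hypothetical duplicate measurements (rows = parts, columns = trial 1, trial 2)
# for the 'truth' (gold standard) system and the production system.
truth = np.array([[10.01, 10.02], [12.49, 12.51], [15.00, 14.98],
                  [17.52, 17.49], [20.03, 20.01]])
prod  = np.array([[10.05, 10.07], [12.55, 12.52], [15.06, 15.03],
                  [17.60, 17.57], [20.12, 20.09]])

def repeatability_sd(dups):
    """Within-part standard deviation estimated from duplicate readings."""
    diffs = dups[:, 0] - dups[:, 1]
    # For duplicates, the repeatability SD is sqrt(sum(d^2) / (2 * n)).
    return np.sqrt(np.sum(diffs ** 2) / (2 * len(diffs)))

print("truth repeatability SD:", repeatability_sd(truth))
print("prod  repeatability SD:", repeatability_sd(prod))

# Bland-Altman comparison using the FIRST reading from each system.
x, y = truth[:, 0], prod[:, 0]
diff = y - x                # production minus truth
mean = (x + y) / 2.0

bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
print("bias:", bias, "limits of agreement:", loa)

# Crude linearity check: regress the differences on the averages.
# A slope clearly different from zero suggests the bias changes
# across the measurement range (proportional bias).
slope, intercept = np.polyfit(mean, diff, 1)
print("difference vs. mean slope:", slope, "intercept:", intercept)
```

Treat this only as a sketch of the calculations; the attached document and the Bland-Altman references are the proper source for the formulas and their interpretation.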
If you check out the attached document, it will provide some detailed explanations, formulas, and references.