Hello,
Since it has been a while and nobody else has posted a reply, I will have a go at it. I must say at the beginning, though, that dimensional measurement is not my main area of expertise. (I am an electronics person.)
Calibration interval analysis is a difficult area, especially when you are looking at a very small quantity of items - one, in your case. You can get a lot more information from NCSL Recommended Practice RP-1, Establishment and Adjustment of Calibration Intervals. You can purchase a copy from NCSL International, www.ncsli.org.
In your example, the tolerance that is referred to is the performance specification of the item being calibrated - the 75mm OD gage.
- This is usually the manufacturer's published specification, if any. (I am not sure how the performance of these gages is specified by their manufacturers.)
- It could also be a usability requirement that you set based on your own measurement needs. For example, you might decide the gage is no longer usable if the correction is more than some specified value.
In the method you describe, the calibration interval would be increased if the reported values are within those specifications.
This is about the simplest method to use, but it is in many ways the least useful. RP-1 notes several problems with this method.
- This method makes adjustments based on essentially random results. Deming's funnel experiment is used to teach the futility of this.
- This method cannot account for a target reliability for the equipment. For instance, there is no way to set and achieve a minimum reliability goal of (for example) 95% probability of being in-tolerance at the end of the period.
- This method does not settle to a stable value for the calibration interval. If it accidentally arrives at a "correct" interval, the result of the next calibration will inevitably change it. Even if you attempt to compute a mean from the interval changes, the time required to reach a stable value often exceeds the useful life of the gage. (The short simulation sketch after this list illustrates the behaviour.)
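If it helps to see that behaviour concretely, here is a minimal, purely illustrative simulation of the adjust-after-every-calibration rule. The numbers in it (the assumed "true" in-tolerance probability and the lengthen/shorten factors) are made up for demonstration; they are not from RP-1 and not from your data.

```python
import random

# Illustrative only: simulate the naive "adjust the interval after every
# calibration result" rule. All numbers below are assumptions.
random.seed(1)

interval = 12.0          # months, starting interval
p_in_tolerance = 0.85    # assumed true probability of passing any one calibration

history = []
for cal in range(40):
    passed = random.random() < p_in_tolerance
    if passed:
        interval *= 1.25     # lengthen after an in-tolerance result
    else:
        interval *= 0.5      # shorten after an out-of-tolerance result
    history.append(round(interval, 1))

print(history)
# The printed intervals jump around with every (essentially random) pass/fail
# result and never converge on a stable, justified value - the behaviour the
# funnel experiment warns about.
```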
Since your gage has only a single measured parameter, I would suggest using Method 2, the next section from the one you cited. Plot the points on a run chart, or on a process description ("control") chart for individual variables. You will be able to see any long-term trends, and the overall scatter of the points. Once you have "enough" points, a regression analysis will help you predict the future behaviour. If you also plot the calibration uncertainty as error bars, you will see how that relates to the reported value. Note that in all but the last calibration, the uncertainty of the measurement has been larger than the reported correction value, assuming I am interpreting your table correctly.
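If you have access to Python (or any similar tool), a chart like that takes only a few lines. The sketch below is just an illustration: the years, corrections, and uncertainties are placeholder values, not your actual results, and you would substitute your own calibration history. It plots the reported corrections with the calibration uncertainty as error bars and overlays a simple straight-line regression to expose any drift.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data - substitute your own calibration history.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # years since first calibration
correction = np.array([0.0003, -0.0002, 0.0004, 0.0001, 0.0008])   # reported corrections, mm
uncertainty = np.array([0.0006, 0.0006, 0.0006, 0.0006, 0.0006])   # expanded uncertainty, mm

# Run chart of the reported corrections, with the calibration uncertainty as error bars
plt.errorbar(years, correction, yerr=uncertainty, fmt='o', capsize=4,
             label='reported correction')

# Simple linear regression to show any long-term trend
slope, intercept = np.polyfit(years, correction, 1)
plt.plot(years, slope * years + intercept, '--',
         label=f'trend: {slope * 1000:.2f} um/year')

plt.axhline(0.0, color='grey', linewidth=0.5)
plt.xlabel('years since first calibration')
plt.ylabel('correction (mm)')
plt.legend()
plt.show()
```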
There is nothing "wrong" with keeping the same calibration interval for a gage like this over its lifetime, especially if you have only a small number of them. Yes, you can save money by calibrating less frequently. However, you also have to evaluate the increased risk of the tool going out of tolerance before the next calibration. Other methods of calculating calibration intervals can account for this risk, but require a large population of identical tools, or a long time period with fewer tools, to accumulate the data for a statistically valid analysis.
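To give a feel for how a reliability-target method works once you do have enough data, here is a rough sketch assuming a simple exponential reliability model, one of the model types discussed in RP-1. The observed reliability figure in it is invented; with a single gage you cannot estimate it meaningfully, which is exactly why these methods need a population of data.

```python
import math

# Rough sketch only, assuming an exponential reliability model R(t) = exp(-t / tau).
# The numbers below are made up for illustration.
current_interval = 12.0        # months
observed_reliability = 0.90    # fraction found in-tolerance at the end of that interval
target_reliability = 0.95      # desired end-of-period in-tolerance probability

# Estimate the characteristic time from the observed end-of-period reliability,
# then solve R(t) = target for the interval that would meet the goal.
tau = -current_interval / math.log(observed_reliability)
suggested_interval = -tau * math.log(target_reliability)

print(f"estimated tau: {tau:.1f} months")
print(f"interval for {target_reliability:.0%} reliability: {suggested_interval:.1f} months")
```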