The book doesn't specify (maybe it does and I forget where...) because every situation is different - you have to use judgement. However, I would think that uncertainty (all these factors combined) should take up no more than 10 to 15% of the tolerance spread.
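To put a number on that, here's a rough sketch (made-up numbers, not from the MSA manual) that combines independent error sources in quadrature and expresses the total as a percentage of the tolerance spread:

```python
import math

# Rough uncertainty budget check, assuming the error sources are independent
# and combined in quadrature (root-sum-of-squares). All numbers are
# hypothetical and share the same units as the tolerance.
def uncertainty_pct_of_spread(error_sources, usl, lsl):
    combined = math.sqrt(sum(e ** 2 for e in error_sources))
    return 100.0 * combined / (usl - lsl)

# e.g. bias, repeatability, temperature effect
pct = uncertainty_pct_of_spread([0.001, 0.0015, 0.0005], usl=1.21, lsl=1.15)
print(f"combined uncertainty is {pct:.1f}% of the tolerance spread (rule of thumb: 10 to 15% max)")
```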
The point is this: You measure a part at an inspection point. How accurate is your measurement result in reality? If you buy a measurement device, it will come with a stated measurement uncertainty - example: accurate to +/- 0.001 inch. This is 'assumed' to hold across the range of the instrument (its bias will not exceed +/- 0.001 at any point on its range). Bias at any point is a source of error.
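For illustration only (hypothetical readings), bias at a single point can be estimated by measuring a reference standard of known value several times and comparing the average to that value:

```python
# Repeated readings of a 1.0000 inch reference standard (hypothetical numbers).
readings = [1.0012, 1.0009, 1.0011, 1.0010, 1.0013]
reference = 1.0000

# Bias at this point = average observed reading minus the reference value.
bias = sum(readings) / len(readings) - reference
print(f"bias at this point: {bias:+.4f} inch")
# If this exceeds the stated +/- 0.001 accuracy, the device is not meeting
# its spec at this point on the range.
```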
If you take all the biases (say you have 100 divisions on the instrument scale and you calibrate [check] it at 10 points), the biases at those points are indicators of the instrument's linearity - plot them and you can see its linearity. If you have large biases but compensate for them, large bias may not be a problem at all. The key is to understand that the bias exists, to measure it and to compensate for it.
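Here's a small sketch of that idea with made-up numbers: compute the bias at each calibration point and fit a line through them - the slope gives a quick read on linearity:

```python
# Hypothetical calibration at 10 points across the instrument's range.
import numpy as np

reference = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
observed  = np.array([10.2, 20.1, 30.3, 40.2, 50.4, 60.3, 70.5, 80.4, 90.6, 100.5])

bias = observed - reference                         # bias at each calibration point
slope, intercept = np.polyfit(reference, bias, 1)   # best-fit line through the bias plot

print("bias at each point:", np.round(bias, 2))
print(f"slope of bias vs. reference: {slope:.4f}")
# A slope near zero means the bias is roughly constant across the range and a
# single correction can compensate for it; a large slope means the correction
# has to change depending on where on the range you are measuring.
```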
Stability is essentially the same. If every time you send a measurement device to calibration it comes back without needing any change, you're in good shape. However, let's say it starts coming back needing adjustment. That is now a source of possible error for the uncertainty budget.
Stability is also a function of other possible factors such as temperature. If the local environment is not temperature stable and the measurement device is temperature sensitive, another possible source of error arises. This is where the control chart comes into play, as the MSA book describes on page 41.
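As a rough illustration of the control chart approach (hypothetical readings of a master part measured over time, plotted as an individuals/moving-range chart):

```python
# A master part is measured periodically; the readings go on an individuals
# chart. All numbers here are hypothetical.
import numpy as np

readings = np.array([1.201, 1.202, 1.200, 1.203, 1.201, 1.199, 1.204, 1.202])

moving_range = np.abs(np.diff(readings))
mr_bar = moving_range.mean()
center = readings.mean()
# 2.66 is the standard individuals-chart constant (3 / d2 with d2 = 1.128 for n = 2)
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

for i, x in enumerate(readings, start=1):
    flag = "OUT OF CONTROL" if (x > ucl or x < lcl) else "ok"
    print(f"reading {i}: {x:.3f}  ({flag})")
print(f"center = {center:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
# Points drifting toward or beyond the limits flag a stability (or temperature)
# problem before it quietly becomes measurement error.
```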
The bottom line is that the intent is to ensure you identify and compensate for possible measurement error(s).

Say I have a plate and I measure the thickness. The upper 'tolerance' limit on the drawing is 1.21 mm. I measure it and find it to be 1.20. The question is... is the part really in spec, or is the 'slop' of the measurement system enough that it is possible the 'true' value is 1.22 (above the spec)? Say you know the bias at that point is 0.01 (the device reads 0.01 low there) - what does that tell us about the measurement we are taking? We know right there that the device says 1.20, but with the bias at that point the measurement is really 1.20 (observed measurement) + 0.01 (correction for the bias at that point) = 1.21 (the spec limit). Add other possible error sources such as the instrument uncertainty, and you can see we could measure the part and find it in tolerance when it is actually out of tolerance once all the measurement uncertainty is considered.
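Here's a small sketch of that acceptance question using the numbers above (the uncertainty figure is a hypothetical stand-in for the full budget): correct the reading for the known bias, then ask whether the remaining measurement uncertainty could push the true value past the spec limit:

```python
# Correct the observed reading for the known bias, then apply the remaining
# measurement uncertainty as a guard band against the upper spec limit.
def can_accept(observed, bias_low_by, uncertainty, upper_spec):
    corrected = observed + bias_low_by     # compensate for a device that reads low by `bias_low_by`
    worst_case = corrected + uncertainty   # the remaining "slop" of the measurement system
    return corrected, worst_case, worst_case <= upper_spec

corrected, worst_case, ok = can_accept(observed=1.20, bias_low_by=0.01,
                                       uncertainty=0.005, upper_spec=1.21)
print(f"corrected reading = {corrected:.3f} mm, worst case = {worst_case:.3f} mm")
print("accept" if ok else "cannot confidently accept - the true value could exceed the spec limit")
```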
Does this help or have I confused the issue?