Simple question with an important impact that I'm having a hard time finding an answer to: is the Test Uncertainty Ratio (T.U.R.) based on the standard uncertainty or on the expanded uncertainty?

For example, if I calculate a 1 volt source to have a ±2 ppm standard uncertainty and my meter has a ±9 ppm tolerance, the T.U.R. is a nice 4.5. However, if the uncertainty is expanded to k = 2 (±4 ppm), the T.U.R. drops to a measly 2.25. Which is right?

And if I need to calculate a guardbanding factor because of a low T.U.R., how do I do that?
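For concreteness, here is the arithmetic I'm doing as a small Python sketch. The guardband multiplier at the end, GB = 1 − 1/TUR, is just one simple rule I've seen used when the T.U.R. is low (an assumption on my part, not necessarily the correct method):

```python
# Values from my example above.
tolerance_ppm = 9.0        # meter tolerance, +/- ppm
std_unc_ppm = 2.0          # standard uncertainty (k = 1), +/- ppm
k = 2.0                    # coverage factor for the expanded uncertainty

# T.U.R. computed both ways:
tur_standard = tolerance_ppm / std_unc_ppm        # 9 / 2 = 4.5
tur_expanded = tolerance_ppm / (k * std_unc_ppm)  # 9 / 4 = 2.25

# One simple guardbanding rule (assumed): shrink the acceptance
# limit by the factor GB = 1 - 1/TUR when the T.U.R. is low.
gb = 1.0 - 1.0 / tur_expanded
acceptance_limit_ppm = tolerance_ppm * gb         # about 5 ppm here

print(tur_standard, tur_expanded, acceptance_limit_ppm)
```

So depending on which definition applies, I either don't need a guardband at all (4.5 is comfortably above 4) or I'm guardbanding down to roughly ±5 ppm acceptance limits.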