The important thing is (and this has been the same for years) that you have to be ready to justify when (frequency) and why you calibrate and/or "verify" a measurement device. That justification has to account for the type of use, the frequency of use, the effects of the device not being in calibration (for example, the device being dropped between calibrations and/or verifications), and things like that.
While the word "risk" wasn't used much in the past, these days it is, especially with respect to questions like: what if someone drops the measurement device between calibrations and a lot of out-of-spec product passes? Even though the word wasn't used, risk was taken into consideration "where appropriate". I have seen quite a few scenarios where a measurement device would be verified at the start of each run and re-verified at the end of the run.
This was very common in semiconductor and electronic assemblies. You'd run your "Golden Masters" at the start of each run and you'd run them again at the end of each run.
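The start-of-run / end-of-run verification pattern above can be sketched in a few lines. This is a hypothetical illustration, not any standard's required method: the function names, the tolerance value, and the "golden master" record layout are all assumptions made up for the example.

```python
# Hypothetical sketch of start/end-of-run verification against "golden
# master" reference parts. All names and tolerances here are illustrative.

def verify(measure, golden_masters, tolerance):
    """True if every golden master measures within tolerance of its
    known/expected value."""
    return all(
        abs(measure(gm["id"]) - gm["expected"]) <= tolerance
        for gm in golden_masters
    )

def run_with_verification(measure, golden_masters, process_run, tolerance=0.05):
    # Verify before the run: if the device is already off, don't run at all.
    if not verify(measure, golden_masters, tolerance):
        raise RuntimeError("Start-of-run verification failed - do not run")
    results = process_run()
    # Re-verify after the run: if the device drifted (or was dropped)
    # mid-run, everything produced in this run is suspect.
    if not verify(measure, golden_masters, tolerance):
        return {"results": results, "status": "suspect"}
    return {"results": results, "status": "released"}
```

The point of the end-of-run check is containment: a failed re-verification quarantines only that one run's output, instead of everything measured since the last scheduled calibration.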