Weaver, the team has provided some really good suggestions here. So I'll just kind of add some breadcrumbs to their gems.
If you have a calibration failure, the first thing I do is... approach it from an accuracy (or uncertainty) ratio standpoint. Hopefully your device is significantly more accurate than your process requirement. Say you have a thermometer that needs to measure a process to ±2°C, but the device has an accuracy of ±0.25°C. If it exceeded tolerance with a reading of +0.4°C, the potential impact is going to be very low, since a 0.4°C error doesn't significantly eat into that ±2°C tolerance. So... first establish (if you haven't already) your accuracy/uncertainty ratio. Sometimes, depending on risk, industry, and so on, you can close a deviation based on a sufficient ratio.
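To make that ratio idea concrete, here's a minimal sketch of the first-pass screen, using only the illustrative numbers from my thermometer example above (the function name and the 25% screening threshold are just mine, not anyone's standard):

```python
# Rough first-pass impact screen for an out-of-tolerance (OOT) result.
# Numbers are the illustrative ones from the thermometer example above.

def assess_oot_impact(process_tol, device_acc_spec, observed_error):
    """
    process_tol     -- process requirement, e.g. +/-2.0 (degC)
    device_acc_spec -- instrument accuracy spec, e.g. +/-0.25 (degC)
    observed_error  -- as-found error from the cal report, e.g. +0.4 (degC)
    """
    accuracy_ratio = process_tol / device_acc_spec        # e.g. 2.0 / 0.25 = 8:1
    error_vs_process = abs(observed_error) / process_tol  # fraction of process tolerance consumed

    print(f"Accuracy ratio        : {accuracy_ratio:.1f}:1")
    print(f"Error vs. process tol : {error_vs_process:.0%} of +/-{process_tol}")

    # Screening logic only -- your actual acceptance criteria depend on risk and industry.
    if error_vs_process < 0.25:
        return "Low potential impact: error is small relative to the process tolerance"
    return "Needs a deeper impact assessment"


print(assess_oot_impact(process_tol=2.0, device_acc_spec=0.25, observed_error=0.4))
```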
Next... look at the process. Is this a higher-risk measurement with no downstream checks? How "risky" is this measurement? If there are secondary downstream verifications, you can mitigate the potential risk based on those readings.
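A quick sketch of what that mitigation review can look like in practice: pull the downstream verification data for the affected period and confirm it stayed in spec. All of the readings and limits below are made up for illustration.

```python
# Hypothetical downstream verification data (degC) for the affected period,
# and hypothetical process spec limits.
downstream_readings = [71.2, 70.8, 71.5, 70.9, 71.1]
process_low, process_high = 69.0, 73.0

out_of_spec = [r for r in downstream_readings if not (process_low <= r <= process_high)]

if not out_of_spec:
    print("Downstream verifications all within spec -- supports a low-impact conclusion.")
else:
    print(f"{len(out_of_spec)} downstream reading(s) out of spec -- dig deeper on impact.")
```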
Also... did it fail in a range or function that would actually impact you? Say a multimeter failed on DC voltage, but all you use the meter for is measuring AC voltage to equipment; you can possibly scope out that failure (as long as the AC voltage parameters were within specification).
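Scoping it out really just comes down to intersecting what failed with what you actually use the instrument for. A small sketch, with made-up cal-report and usage records:

```python
# Hypothetical as-found failures vs. hypothetical functions actually in use.
failed_parameters = {"DC voltage"}    # from the as-found calibration data
parameters_in_use = {"AC voltage"}    # from the instrument's usage / work instructions

affected = failed_parameters & parameters_in_use

if not affected:
    print("Failure is outside the functions in use -- may be scoped out of the impact assessment.")
else:
    print(f"Failure affects functions in use: {', '.join(sorted(affected))}")
```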
My suggestion here is... make sure you understand what failed and its potential impact before you take on re-measuring a bunch of stuff or getting customers involved (if it's not needed).