Imagine your inspectors have been measuring temperature with a glass thermometer and your engineers decide that digital is better. The engineers might justify the switch based on faster results, more precision, less susceptibility to operator-interpretation error, a USB output that eliminates transcription error, and the ability to measure continuously 24/7. I personally don't know that digital is inherently more accurate than a fluid-filled thermometer (precision is not the same as accuracy), but someone who does not understand quality might be persuaded that it is. IMO, there are more things that can go wrong with digital that are not readily apparent to the operator, so on that basis I think the new technology carries increased risk.
Suppose for this story that the new digital thermometer reads out only in Celsius, that for some reason the device cannot display Fahrenheit, and that all your past records are in Fahrenheit. The customer specification and all your work instructions are in Fahrenheit. This is an overly simplistic example, and most STEM students would know how to convert Celsius to Fahrenheit.
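For reference, the conversion in play here is simple algebra. A minimal sketch (function names are mine, just for illustration):

```python
def c_to_f(celsius):
    """Celsius to Fahrenheit: multiply by 9/5, then add 32."""
    return celsius * 9.0 / 5.0 + 32.0

def f_to_c(fahrenheit):
    """The inverse, e.g. for recalculating acceptance limits in Celsius."""
    return (fahrenheit - 32.0) * 5.0 / 9.0

print(c_to_f(100.0))  # 212.0, boiling point at standard pressure
print(f_to_c(32.0))   # 0.0, freezing point
```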
You could:
a) convince your customer to rewrite his spec in Celsius. If he were skeptical, you could provide test results that validate the correlation.
b) have operators measure in Celsius and convert before recording values in Fahrenheit, to appease the customer and/or management. In this example the temperature conversion is simple algebra, but it is not a calculation people routinely perform in their heads. This approach is how some suppliers manage metric versus English dimensional units. The risk is that a conversion done incorrectly becomes a new source of inaccuracy.
c) have the operators measure and record in Celsius, and provide them with recalculated acceptance limits in Celsius. Some engineer downstream then does the conversion calculation and reports in Fahrenheit to the customer and/or management.
d) you might encounter a situation where a side-by-side test does not show equivalence or linearity of test results, due to thermal mass, evaporation, ADC loss of precision, or any number of hypothetical factors. In that case, you might build a lookup table so that measurement bias can be corrected when converting into specification units. This is the correlation/validation study I asked about.
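Option d) can be sketched as a small bias-correction table with linear interpolation between calibration points. The calibration pairs below are hypothetical placeholders, not data from any real correlation study:

```python
def c_to_f(celsius):
    """The exact algebraic conversion used in options b) and c)."""
    return celsius * 9.0 / 5.0 + 32.0

# Hypothetical (device reading degC, reference degC) pairs that a
# side-by-side correlation study might produce if the device has bias.
CAL_TABLE = [
    (0.0, 0.4),
    (25.0, 25.1),
    (50.0, 49.6),
    (100.0, 99.2),
]

def corrected_f(device_c):
    """Map a device reading to the reference temperature via the
    lookup table (clamping at the ends, interpolating between points),
    then convert into the specification units (Fahrenheit)."""
    pts = sorted(CAL_TABLE)
    if device_c <= pts[0][0]:
        ref = pts[0][1]
    elif device_c >= pts[-1][0]:
        ref = pts[-1][1]
    else:
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= device_c <= x1:
                t = (device_c - x0) / (x1 - x0)
                ref = y0 + t * (y1 - y0)
                break
    return c_to_f(ref)

print(corrected_f(25.0))   # uses the 25.1 degC reference point
print(corrected_f(100.0))  # uses the 99.2 degC reference point
```

The point of the sketch is that the correction and the unit conversion are separate steps: the table fixes the device bias, and only then is the result expressed in the customer's units.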
Is one of these scenarios better (less cumbersome and/or less risky) than the others? You have not given us many specifics to support a concrete discussion, so I have started one based on an analogy.