
New (Upgraded Technology) vs Existing Equipment - Measurements Shift

#1
Hello All,

We are a small medical device manufacturing firm. After the device is manufactured, the final measurements are taken using, say, Equipment X. Equipment X has been in use for a long time, and management has decided to switch to equipment with updated technology. Equipment Y was purchased and some feasibility testing was performed. Differences were observed in the measurements when the same samples were measured on both pieces of equipment, because of the difference in technologies between the two. How can the shift in measurements be justified?
 

John Predmore

Quite Involved in Discussions
#2
The shift is explained by the description you provided here, and the shift is justified if the change to the new equipment was justified. Maybe you are asking how the readings from the new equipment can be correlated with readings from the old. You said you did some feasibility study on the new equipment. Did you do a correlation study on the new and old equipment? Is there a problem with the data, or are you anticipating questions?
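In case it helps frame the question, a correlation study can be as simple as measuring the same samples on both pieces of equipment and looking at the bias and linearity between them. A rough sketch of the idea (all numbers and the sample size below are made up for illustration):

```python
# Illustrative only: measure the SAME samples on both pieces of equipment,
# then look at the average shift (bias) and the linearity between them.
import numpy as np
from scipy import stats

old = np.array([39.1, 40.2, 38.9, 41.0, 40.5, 39.7])  # Equipment X readings
new = np.array([41.0, 42.1, 40.8, 42.9, 42.4, 41.6])  # Equipment Y readings, same samples

bias = np.mean(new - old)                              # average shift between systems
slope, intercept, r, p, se = stats.linregress(old, new)

print(f"average shift: {bias:.2f}")
print(f"new ~ {slope:.3f} * old + {intercept:.3f}  (r = {r:.3f})")
# A slope near 1 with a constant offset suggests a simple bias;
# a slope far from 1 suggests the shift depends on the measured value.
```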
 
#3
Hi John,

Thank you for the response. Yes, you are correct. Sorry for not framing the question properly. The correlation study did show a few differences in measurements. The intent is that the consumer does not see a change between what they order and what they receive. However, with this change, if they order a product with 'x' measurements, they might receive a product with 'y' measurements due to the shift.
Our idea was to stop using a 15+ year-old piece of equipment and upgrade to better equipment for testing. Instead of keeping up to date with technological advancements over time, the switch is being made after a long period, which I think is why we are seeing this shift in measurements. How can this shift be justified? If it cannot be justified, what direction should be taken?

Thank you.
 

John Predmore

Quite Involved in Discussions
#4
Imagine your inspectors have been measuring temperature with a glass thermometer and your engineers decide that digital is better. The engineers might justify the switch to new technology based on faster results, more precision, less risk of error from operator interpretation, the option of USB output eliminating transcription errors, or the ability to measure continuously 24/7. I personally don't know that digital is inherently more accurate than a fluid-filled thermometer (precision is not the same as accuracy), but someone who does not understand quality might be persuaded that digital is more accurate. IMO, there are more things that can go wrong with digital which are not readily apparent to the operator, so on that basis I think there is increased risk with the new technology.

Suppose for this story the new digital thermometer reads out only in Celsius, there is some reason the device cannot display in Fahrenheit, and all your past records were in Fahrenheit. The customer specification and all your work instructions are in units of Fahrenheit. This is an overly simplistic example, and most STEM students would know how to convert Celsius to Fahrenheit.
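(For reference, the conversion is F = C × 9/5 + 32, so a reading of 100 °C converts to 100 × 9/5 + 32 = 212 °F.)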

You could
a) convince your customer to rewrite his spec in the new units of Celsius. If needed, you could provide test results that validate the correlation.

b) have operators measure in Celsius and convert before recording values in Fahrenheit, to appease the customer and/or management. In this example, temperature conversion is simple algebra, but the calculation is not so simple that people routinely perform it in their heads. This approach is how some suppliers manage metric versus English dimensional units. The risk here is that if the conversion is done incorrectly, it becomes a new source of inaccuracy.

c) have the operators measure and record in Celsius, and provide them with re-calculated acceptance limits in units of Celsius. Some engineer downstream then does the conversion calculation and reports in Fahrenheit to the customer and/or management.

d) you might encounter a situation where a side-by-side test does not show equivalence or linearity of test results, due to thermal mass, evaporation, ADC loss of precision, or any number of hypothetical factors. In this case, you might build a lookup table so that the measurement bias can be corrected when converting into specification units (see the sketch after this list). This is the correlation/validation study I asked about.
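For option (d), here is a rough sketch of what a lookup-table correction might look like; the pairing of readings and the numbers themselves are invented purely for illustration:

```python
# Hypothetical sketch of option (d): correcting a non-linear bias with a
# lookup table built from a side-by-side study. Values are invented.
import numpy as np

# Paired readings from the side-by-side study: what the new device reads
# vs. what the old/reference method reports for the same samples.
new_readings = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
ref_readings = np.array([20.4, 40.1, 59.5, 78.8, 98.9])

def corrected(value):
    """Map a raw reading from the new device onto the reference scale
    by linear interpolation between the study points."""
    return np.interp(value, new_readings, ref_readings)

print(corrected(50.0))   # interpolated between the 40 and 60 study points
print(corrected(90.0))   # interpolated between the 80 and 100 study points
```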

Is one of these scenarios better (less cumbersome and/or less risky) than the others? You have not given us many specifics to hold a specific discussion, so I started a discussion based on an analogy.
 

indubioush

Involved In Discussions
#5
The "shift" the OP mentions sounds like the average measurement obtained from the new equipment is different from the average measurement of the old equipment. For example, if the item being measured is exactly 40 mm, the old equipment measured 39, and the new equipment measures 41 mm. Maybe they both meet the calibration accuracy requirements, but because they are different, the existing 39 +/- 1 mm specification now needs to change. Depending on the associated risk, a CAPA may be required.
 