snglcoin said:
I talked to my Calibration Coordinator about the ISO 10012-1 guidance and he said he would interpret devices with “Zero Adjusters” as “the zero control on an analog meter when changing ranges” which makes perfect sense to me and makes my argument to an auditor very weak.
Your hypothetical argument. Not to say that there's no value in anticipation, but you shouldn't wear yourself out on the argument before it happens.
snglcoin said:
My real world experiences? I have a digital depth micrometer which has a resolution of .00005” and is used to inspect a feature with a .0012” total tolerance. I had my doubts about the gage at first, but I performed a measurement system analysis and found the device to be extremely accurate, repeatable, and capable of the measurement in question. It far exceeded any vernier depth micrometer I had ever studied previously. The problem was in setting the zero during calibration. You literally had to “wring” the base to a lab grade surface plate and then bring the spindle gently into contact several times until you got repeatable zero readings. We found any amount of dirt or even oil could cause the readings to be off by as much as .0002”, which would be the equivalent of 16% of our total tolerance and cause the system to no longer be valid.
It sounds like a tradeoff question. Do the disadvantages of the digital gage mean that using the analog/vernier type would be more advantageous in the end, all things considered?
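For what it's worth, the percent-of-tolerance figure in the quote checks out. A minimal sketch, using the values stated in the post (in inches); the 10%-of-tolerance comment reflects a common rule-of-thumb acceptance criterion, not anything from the post itself:

```python
# Percent-of-tolerance check for the zero-setting error described above.
# Values come from the post: a .0002" zero error against a .0012" total tolerance.
zero_error = 0.0002       # worst-case zero-setting error, inches
total_tolerance = 0.0012  # total feature tolerance, inches

percent_of_tolerance = zero_error / total_tolerance * 100
print(f"{percent_of_tolerance:.1f}% of tolerance")  # → 16.7% of tolerance

# A common rule of thumb flags measurement error above ~10% of tolerance,
# so a .0002" zero error would indeed invalidate the system.
```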
snglcoin said:
Other examples have come from equipment returned for calibration that, once the anvils are cleaned, no longer reads zero, by as much as .002” in some cases, which then forced us to investigate the potential for nonconforming material.
An error of .002" shouldn't be limited to just digital gages. If there's some junk on the anvils that's .002" across, it's going to be measured regardless.
snglcoin said:
If you were to do an FMEA on a digital caliper and listed one of the Potential Failure Modes as “invalid zero adjustment” and the Potential Effect of Failure as “invalid measurement result”, what RPN would you have? I came up with 160, which in our system would warrant a recommended action.
First, it's not a good idea to set "trigger" RPN values. When the "trigger" is, say, 150, the highest RPNs you're likely to see will be in the 140s. Also, assignment of RPN factors is almost always subjective. Remember, too, that they're called recommended actions, not mandatory actions. The expectation is that you'll review the situation for reasonable ways to mitigate risk. It doesn't mean that you must do something that will lower the RPN (and hence the risk).
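To make the subjectivity point concrete, here is a minimal sketch of how an RPN is computed. The individual severity/occurrence/detection ratings are hypothetical values that happen to multiply out to the 160 mentioned above; they are not from the original FMEA:

```python
# RPN (Risk Priority Number) = Severity x Occurrence x Detection,
# each conventionally rated on a 1-10 scale. The ratings below are
# hypothetical, chosen only to reproduce the RPN of 160 from the post.
severity = 8    # invalid measurement could release nonconforming material
occurrence = 4  # accidental re-zeroing happens occasionally
detection = 5   # moderate chance of catching it before results are used

rpn = severity * occurrence * detection
print(rpn)  # → 160

# Note how sensitive the result is to one subjective rating step:
print(severity * occurrence * (detection - 1))  # → 128, now under a 150 "trigger"
```

This is exactly why a hard RPN threshold invites gaming: shifting any one factor by a single point can move the product across the line.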
snglcoin said:
About the only thing I could realistically reduce would be occurrence and detection. I can perform training to reduce the occurrence but I would argue that training is not an effective action to address root cause, and it would only have a minimal effect on the resulting RPN. That leaves me with detection, which means design control by removing the potential of an operator inadvertently zeroing the device and invalidating the measurement result.
It seems to me that in your FMEA you haven't identified the root cause. Maybe the root cause is "digital gage used instead of analog." You also seem to be confused about "detection" as it's used in the FMEA context: in this instance it refers to the relative likelihood of detecting the problem before an "invalid measurement result" occurs.
snglcoin said:
All I’m asking for is to put the absolute button under the battery cover so it can’t be accidentally hit or provide a menu option with a password protect. Then it is truly “safeguarded.” I still get the feeling like I'm the only one who feels this is an issue. Doesn't anybody else out there see it as an issue?
It's a matter of priorities, I suppose. I have a tendency to not worry much about things that I can't control. I think you've identified a valid issue, but short of banishing digital devices from your workplace, there isn't much you can do that you haven't already (admirably) done.