Calibration Revisited, software too - Previous approach was to calibrate everything

chiefexec

I need some help in the area of calibration. I am the management rep for a location which does only design and testing of products that are manufactured at other facilities. We do not have a good handle on calibration. The previous approach was to calibrate everything, including the steel rulers and calipers that each engineer uses. In addition, our test equipment has several measurement devices on each unit (load cell, torque, displacement, temperature, etc.). I'm wondering how to approach this. Can we self-certify the small hand tools using gage blocks, etc.? If so, do we need work instructions for that?

Also, we have Standards that can be used to verify the settings on the various test machines. Can we calibrate that equipment ourselves? I would guess that would require a work instruction, as well.

Many of the test machines have software and computers attached, developed both in-house and outside. Does the software that interprets and graphs the readings from the measurement equipment need to be verified/certified? How often? Can this be done with known Standards, as well?

And last, to make a long post even longer, what about traceability of items tested with each piece of equipment? The QS-9000 Standard requires that we be able to "assess and document the validity of previous inspection and test results when inspection, measuring or test equipment is found to be out of calibration." (4.11.2.f)

On some of the test machines we keep logs of what has been tested, but what about those calipers and rulers, tape measures, etc.? How do we track what has been measured with those without requiring every person to maintain some sort of log?
 
TheOtherMe

I moved this to the Calibration and MSA forum as it seems to be more 'at home' here.

The previous approach was to calibrate everything
This is silly. You should calibrate only instruments or devices which are used to assess conformance to requirements.

our test equipment has several measurement devices on each unit (load cell, torque, displacement, temperature, etc.). I'm wondering how to approach this. Can we self-certify the small hand tools using gage blocks, etc.?
Yes - you can calibrate your own equipment; however, the masters you use (gage blocks, etc.) must be calibrated with traceability to NIST.

If so, do we need work instructions for that?
All calibrations should have a calibration procedure.
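As a minimal sketch of what such a self-calibration check might look like (the block sizes, readings, and acceptance limit below are all hypothetical; your calibration procedure would define the real ones):

```python
# Hypothetical self-calibration check of a caliper against gage blocks.
# Each entry maps a block's nominal size to the caliper's reading (inches).
blocks = {0.500: 0.5005, 1.000: 0.9995, 2.000: 2.0010}
ACCEPT = 0.001  # allowable deviation per your written procedure (inches)

for nominal, reading in blocks.items():
    dev = reading - nominal
    status = "PASS" if abs(dev) <= ACCEPT else "FAIL"
    print(f"block {nominal:.3f}: read {reading:.4f}, dev {dev:+.4f} -> {status}")
```

The written procedure would spell out which blocks to use, the acceptance limits, and what to do on a FAIL.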

Also, we have Standards that can be used to verify the settings on the various test machines. Can we calibrate that equipment ourselves? I would guess that would require a work instruction, as well.
Yes, yes.

what about traceability of items tested with each piece of equipment? The QS-9000 Standard requires that we be able to "assess and document the validity of previous inspection and test results when inspection, measuring or test equipment is found to be out of calibration." (4.11.2.f)
This has to do with containment. You send a gage for calibration and it is found to be out of tolerance. You first go back to the last calibration and work forward. Maybe your last cal was in March, and this cal in July shows the gage is out. You have product you manufactured in June - you check it and it is out of spec. Now you know the gage has been out for a while. You find a customer has some product you made in April and you request that they pull a sample and send it to you. You check it and it is OK. So - now we have Bad in June, Good in April. The whole point of this is to ensure you have a way to track lots back to where the gage went bad. I think you get the idea.

Remember, though, this is with respect to the item and its requirements. Let's say you make a plastic plug which (as one of 25 others) holds the trunk liner in place. Traceability is limited and its function is not safety related. Tolerances are not critical. I dare say you will not see a recall based upon a 'defective' trunk rug clip.
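The date-bracketing described above works out to something like this minimal sketch (the dates, results, and variable names are hypothetical, just to show the logic):

```python
from datetime import date

# Hypothetical re-check results for product measured with the suspect gage:
# (date the product was made, did it re-check OK with a known-good gage?)
rechecks = [
    (date(1999, 4, 15), True),   # customer sample from April: in spec
    (date(1999, 6, 10), False),  # June product: out of spec
]

last_cal = date(1999, 3, 1)    # last calibration the gage passed
found_out = date(1999, 7, 1)   # calibration where it was found out

# The gage went bad somewhere between the latest known-good point and
# the earliest known-bad point; lots in that window are suspect.
good_dates = [d for d, ok in rechecks if ok] + [last_cal]
bad_dates = [d for d, ok in rechecks if not ok] + [found_out]

print(f"Contain lots made between {max(good_dates)} and {min(bad_dates)}")
```

You keep pulling samples and re-checking until the suspect window is narrow enough to contain.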

This is in contrast to, say, an air bag module. Traceability is critical. Every one is serialized and typically traceable to the day and minute it was manufactured. In addition, each auto manufacturer can trace every specific, serialized air bag to the specific car it goes in. If you find a piece of test equipment failed and may have passed defective air bags - you have (as I remember) 24 hours to be able to trace back every individual module to a point where you can show there is no chance you have missed a single suspect air bag. The auto manufacturer takes it from there to cross-reference the serial numbers of the air bags to the specific cars each is in. Then the auto manufacturer makes a decision on the severity of the problem. Some problems they find a 'nuisance' and hide. Some problems they reveal (umm, recently one of the companies announced a recall related to anti-lock brakes) for one reason or another.

Catch the drift here?

On some of the test machines we keep logs of what has been tested, but what about those calipers and rulers, tape measures, etc? How do we track what has been measured with those without requiring every person to maintain some sort of log?
See above also. All this depends on the nature and requirements of the product and customer. In addition, this applies to instruments and devices with which you make an accept/reject decision. Let's say you cut steel bars to a specific length per a print - what you measure them with is important. Let's say you cut steel bars to a 'relative' length - they are going to be fed into a machine and cut and machined (you just cut them to a 'relative' length so they fit in the machine which will do the final cutting) - this probably won't need to be calibrated, logged, etc. unless you internally define it as a critical process characteristic for some reason.

Need more specific info to go any further.

[This message has been edited by TheOtherMe (edited 27 July 1999).]
 
chiefexec

Thanks for the comments. That gets me started. However, perhaps I should provide some more details and see what that opens up.

As I said, this location is for design and testing only. So the equipment that we are using is only to validate that our designs meet the performance requirements specified by the customer. We are testing parts under specific conditions (temperature, torque, etc.) to verify that they fulfill a specific durability requirement. The specific parts that are tested rarely, if ever, end up going to the customer. The parts sent to the customer (for prototype and production purposes) are built and inspected at our production facilities.

As a consequence, there would seldom be an instance when our design location would need to contact a customer to re-verify some characteristic due to an out of tolerance measuring device. So in that sense, it may appear that we are given a pass on the calibration requirement.

However, that opens up a potentially bigger concern: what if our test machines are out of tolerance, and we erroneously conclude that a part passed some series of performance tests that it would not have passed if the conditions had been controlled appropriately? We then proceed with that design and we end up producing it for the customer. When we later discover that the equipment is out of tolerance, do we have to recall every product whose design was tested on that machine, and revalidate them? What if the customer received prototypes and conducted their own testing, and the product passed their tests? Does the duplication of testing cover us?

I know I'm throwing out a lot of questions here, but there are so many intricacies, and I am trying to brainstorm to make sure that we approach this thing in the right way. Since we are not a production facility, it is a bit more difficult to apply the Standard to us.
 
TheOtherMe

If all you do is design, I would take your equipment list and determine which ones are 'significant' to you. The previous response dealt with, admittedly, production.

In the R&D and design stage you look at your equipment and define 'significant'. You do this by asking what is important.

Design validation requires calibrated equipment by its very nature. And, in fact, in early registration projects I was involved in, the design folks were the last to 'bow' to being included in the calibration cycle. It was sorta "We're above this - we know what we're doing - you don't."

You may be testing in-house but it's the same as your putting together a prototype and sending it out to a lab for environmental, performance or other testing. You would expect calibrated, traceable gages, indicators, torque meters and such.

Validation testing is a form of inspection and test, if you will. Design engineers don't like to hear it, but they're not the kings they often feel (believe) they are. They follow a process / system just like the production folks do. You have a control plan for the production end, but they (typically) have a design FMEA, design 'stages' including verification and validation. Your 'important' instrumentation is that used for validation. The ability of their design to meet both customer and process requirements (critical characteristics) is being 'inspected' if you will.

We do not have a good handle on calibration.
This is how you started out. You are here---> Look at each instrument you have and ask yourself: "How important is this to the design process?" If you have to meet a torque spec, I would suggest you have your torque meter calibrated. How else would you know if your validation test was accurate?

Maybe you make a motor where the end-product critical characteristics are:

1. 200 Ft-Lbs +/- 25 Ft-Lbs torque
2. Motor body OD 5.5" +/- 0.1"
3. Placement of four tapped holes with GD&T based on the motor shaft plane
4. 220V +/- 15V
5. A draw of no more than 12 amps +/- 3 amps operating and 15 amps +/- 2 amps surge/start-up

For validation testing you are going to need what? A torque meter sensitive to 1 Ft-Lb, a linear measurement device sensitive to 0.01", a decision whether a hard gage will be better to measure hole placement due to the GD&T requirement, a volt meter sensitive to 1 V (best to go for 0.1 V or 0.01 V considering the cost of equipment of that accuracy) and an amp meter sensitive to 0.1 amp.

In some companies the engineers have all these things in their labs, but the company has an ES lab to which validation tests are sent when they have to be run. The ES lab equipment is ALL calibrated due to the nature of what they do. Whether or not you want the engineers' toys calibrated depends upon the risk you feel in their using them to make decisions based upon their findings with them. Technically the verbiage says you cannot base decisions in a project on uncalibrated equipment, just like you can't base decisions on uncontrolled prints. Catch my drift here?
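The sensitivity picks above amount to choosing an instrument resolution several times finer than the tolerance it has to discriminate. A minimal sketch of that kind of check, using a common 10:1 rule of thumb (the ratio and the numbers are illustrative conventions, not something the post or the Standard dictates):

```python
# Tolerance band (total spread) for each hypothetical critical
# characteristic from the motor example above.
characteristics = {
    "torque (Ft-Lbs)": 2 * 25.0,
    "body OD (in)": 2 * 0.1,
    "voltage (V)": 2 * 15.0,
    "operating current (A)": 2 * 3.0,
}

RATIO = 10  # common rule of thumb; some shops use 4:1 or stricter

for name, band in characteristics.items():
    print(f"{name}: band {band}, resolution should be <= {band / RATIO}")
```

Note the picks in the post are even finer than a strict 10:1 in places; the ratio is a floor, not a target.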

In other companies the engineering folks keep their own equipment and there is no lab. Or they do electricals and they have a dimensional lab.

Step back. Have everything on your equipment list. Go through the list. Ask yourself what you use each instrument for - one by one. Ask yourself what the risk of a screw-up is - including lost time, not just the immediate project financial costs - if an engineer bases a design decision on an uncalibrated instrument. You may have calipers for each engineer, but they just use them for 'approximations' such as motor diameter. You're going to check it on the CMM. The tolerance is pretty wide. The calipers are sensitive to 0.01" (how far out of calibration could they really, really go?). Do these really need to be calibrated? I would say it is not important for them to be calibrated as long as, for your validation test where you address each requirement (such as those listed above), you use calibrated equipment.

However - as you go through your equipment list, do not forget that many devices, such as calipers, have multiple uses. Typically they're used in the same 'type' of application, but they may not be, and there is also the issue of sensitivity with respect to the precision of the specification.

This is no different from production, really. It's not really just the instruments used to check compliance, either. For example, in a chip fab, DI water is critical. 'Bad' water could ruin a batch and - well, thousands of dollars are down the drain 'cause once the batch is started, the investment is high. There is a meter which tells the 'purity' of the water. The expectation is that a company determines what is critical to its processes. It may be pressure. It may be temperature. It may be time. You are expected to understand your processes well enough to be able to make the determination of what is critical and what is not. No different with design. You have to be ready to say "We don't calibrate our calipers because ........"

Does this help? That will be 5 cents, please!

[This message has been edited by TheOtherMe (edited 28 July 1999).]
 
chiefexec

Thanks for that extremely helpful response. I think that will give me a better approach to the calibration issue at our location.

Any ideas on the software issue? Many of our test machines are developed in-house, and include software on a PC which receives the data from the various measurement devices to make calculations and charts. What is the best way to certify or verify that software, and how should that be documented?
 
Lassitude

Any ideas on the software issue? Many of our test machines are developed in-house, and include software on a PC which receives the data from the various measurement devices to make calculations and charts. What is the best way to certify or verify that software, and how should that be documented?
There are a lot of ways to look at software, but the most important concept to understand is calibration as a system. Typically you have a system which is comprised of components. You look at your outputs and determine what you have to address. Let's say you have a 'system' which takes temperature measurements and voltage measurements. Make up 'known' units to run as standards. Then you look at expected vs actual results. With a series of knowns you can check for bias and such. For voltage instead of a standard you may use a calibrated output device - set it to x volts and see how your system records the voltage.
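A minimal sketch of that expected-vs-actual check (the reading function just stands in for whatever your DAQ and software actually record; all numbers are hypothetical):

```python
# Feed the 'system' reference inputs, e.g. a calibrated voltage source
# set to known values, and compare recorded vs expected.
knowns = [1.000, 5.000, 10.000]  # volts from a calibrated source

def system_reading(true_volts):
    """Stand-in for the value your DAQ + software records."""
    return true_volts * 1.002 + 0.003  # pretend gain/offset error

errors = [system_reading(v) - v for v in knowns]
bias = sum(errors) / len(errors)
print(f"errors at each known: {[f'{e:+.4f}' for e in errors]}")
print(f"average bias: {bias:+.4f} V")
```

Running a series of knowns across the range, rather than a single point, is what lets you see bias and linearity rather than a one-off error.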

If the software is doing calculations things get more complex, but it's the same principle. Some software you are going to say is 'calibrated' by the software 'manufacturer'. For example, if you have an Excel spreadsheet for gage R&R you don't typically 'calibrate' the software to ensure that the formulas Excel uses give you the correct result.
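For the homegrown calculation side, the same principle suggests a known-answer test: run a small data set whose result you have worked out by hand and confirm the software reproduces it. A sketch, with a simple mean-of-ranges calculation standing in for whatever your software actually computes:

```python
def mean_of_ranges(subgroups):
    """Hypothetical stand-in for the homegrown routine under test."""
    return sum(max(s) - min(s) for s in subgroups) / len(subgroups)

# Hand-worked case: the ranges are 2 and 4, so R-bar must be exactly 3.0.
subgroups = [[10, 12, 11], [20, 24, 22]]
expected = 3.0

assert abs(mean_of_ranges(subgroups) - expected) < 1e-9, "calculation failed"
print("known-answer test passed")
```

Keeping a few such worked cases on file, and re-running them when the software changes, is one way to document the verification.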

To some degree this gets messier with factors such as 'homegrown' software, machine specific software and such. We come back to common sense - you have to look at the system and decide what needs to be checked and how you want to do it. What makes sense?

A good example of a system is vibration testing. You have your controller, the shaker amp(s), the shaker(s), the accelerometer(s) and the accelerometer amp(s). If you're a GenRad lab, you have GenRad software in the controller 'module', if you will.

This is now a test. You tell me - what would you calibrate in this 'system' of software and hardware? Why?
 