Thinking of Measurement as a System

Marc

Leader
I ran across this in the ISL listserve. I am posting it because IMHO it shows a good train of thought for a calibration system. QS9000 auditors are more and more asking for details of how a company complies with MSA. Some old ideas are being discredited (you do *NOT* have to do R&R on every Control Plan line item) and many new concepts are being asked about (addressing linearity, bias, stability, etc.). Auditors are looking for logic and understanding throughout the process, from choosing M&TE during the design stage (automotive = APQP) all the way through production.

This has to do with 'test' equipment, but the concepts within are good thought provokers. Oh well, FYI - I think it is a good read:

-------snippo-------

From: "SMITH, RON D. (JSC-EM)" jsc.nasa.gov
Subject: Q: Long Term Testing/Smith

> I am having to prepare a process for handling the calibration of IMTE that
> is a part of a long term test...over 5 years. The devices are, specifically,
> thermocouples, and gas or liquid flow monitoring devices in ambient
> conditions (but the process should apply to any type of IMTE). The tests
> have been going on for 5 years with a few more to go. The testing cannot be
> torn down for the calibration of the test IMTE, since a teardown will affect
> the integrity of the test.
>
> Our ISO effort is requiring us to attempt to 1.) cover pre-ISO days and
> 2.) have a new process to go by for new testing conditions.
>
> Simply put, the basic process we are looking at starts with: (a.) prior to
> the start of the test, research IMTE that is expected or advertised to be
> reliable over the period of the test; (b.) estimate the uncertainty for the
> test period (if the IMTE spec is +/- 1 [...] uncertainty for a five year
> test); (c.) perform at least three pre-test calibrations to evaluate the
> projected IMTE reliability (this is not the case in one of our long term
> tests, where there is only initial calibration data); (d.) at scheduled
> intervals during the test period, analyze the test IMTE measurement data to
> establish variability and detect trends - to be additional uncertainty
> components; (e.) and finally, perform a post-test calibration for the
> follow-up evaluation of all data from the testing phase.
>
> As there is no way to remove these instruments once testing has started,
> and no way to insert a check standard to verify instrument performance, can
> the data be considered valid from a calibration standpoint while the test
> is ongoing? Can the instruments be considered as in calibration during the
> test?
>
> Comments with regard to the validity of this approach and any suggestions
> would be greatly appreciated.
>Ron Smith
>Johnson Space Center
>Measurement Standards and Calibration Laboratory
>Houston, Texas

From: nicolet.com (Doug Pfrang)
Subject: Re: Q: Long Term Testing/Smith/Pfrang

Ron,

The fact that there is no way to remove the measurement equipment once testing has started is not especially relevant to your problem. Calibration is always something that is performed at a single moment in time and then is ASSUMED to be valid until the next moment in time when another calibration is performed. ANY piece of IMTE can go out of calibration in the interim. Therefore, it doesn't really matter that there is no way to remove the instruments once testing has started; regardless of whether you can remove it or not (i.e., to check the calibration), the real issue is how inherently STABLE your IMTE is over whatever interim you're using between calibrations.

The answer to this question depends entirely on your SELECTION of IMTE. If you know that you must go X years between calibrations, then you have to SELECT equipment that is going to be stable over that period. Once the equipment is selected and installed, there's not much you can do beyond what you are already doing -- namely, monitoring the output to see if there are any unusual changes in the data. If there are unusual changes in the data, then the question is: are the changes real (i.e., have you detected a real change in what you're testing), or are they due to a malfunction in your measurement equipment?
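To make that concrete, here is a minimal sketch (in Python, with invented thermocouple readings and a limit rule of my own choosing, not anything stated in the post) of the kind of in-test monitoring being described: build control limits from earlier data and flag later readings that fall outside them.

# Minimal sketch of in-test monitoring: build individuals-chart control limits
# from earlier readings and flag later readings that fall outside them.
# All names and numbers are illustrative.

def moving_range_limits(readings):
    """Return (lower, upper) control limits from a baseline list of readings."""
    mean = sum(readings) / len(readings)
    moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_est = mr_bar / 1.128          # d2 constant for a moving range of 2
    return mean - 3 * sigma_est, mean + 3 * sigma_est

def flag_unusual(baseline, new_readings):
    """Return (index, value) pairs of new readings outside the baseline limits."""
    low, high = moving_range_limits(baseline)
    return [(i, x) for i, x in enumerate(new_readings) if x < low or x > high]

# Made-up thermocouple readings (degrees C):
baseline = [25.01, 24.98, 25.03, 25.00, 24.99, 25.02, 25.01, 24.97]
later = [25.02, 25.00, 25.11, 25.14]   # possible drift or a real process change
print(flag_unusual(baseline, later))   # -> [(2, 25.11), (3, 25.14)]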

To answer this question, you presumably, at some point, can take the experiment apart and recalibrate your measurement equipment to see if it is still in calibration. If it is not in calibration, then you generally assume that it drifted out of calibration sometime during your experiment, and your data become suspect. You might be able to salvage your data if you can determine when the drift occurred and by how much. You can then go back and correct the data post hoc, compensating for the errors that were introduced by your test equipment.
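A rough sketch of that post hoc correction, under the added assumption (mine, not stated above) that the error grew linearly between the pre-test and post-test calibrations:

# Sketch of a post hoc drift correction, assuming the error grew linearly
# from the pre-test calibration to the post-test calibration.

def correct_for_drift(readings, pre_offset, post_offset):
    """readings: list of (time_fraction, value), time_fraction in [0, 1].
    pre_offset / post_offset: instrument error (measured minus reference)
    found at the pre-test and post-test calibrations."""
    corrected = []
    for t, value in readings:
        assumed_error = pre_offset + t * (post_offset - pre_offset)
        corrected.append((t, round(value - assumed_error, 4)))
    return corrected

# Illustrative numbers: no error at the start, +0.20 units of error at the end.
data = [(0.0, 10.00), (0.5, 10.30), (1.0, 10.60)]
print(correct_for_drift(data, pre_offset=0.0, post_offset=0.20))
# -> [(0.0, 10.0), (0.5, 10.2), (1.0, 10.4)]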

But even if your measurement equipment is still in calibration at the end of your experiment, and you assume that it stayed in calibration throughout the experiment, you are still making a leap of faith, because you are calibrating the equipment only at isolated moments in time and then ASSUMING the equipment did not change significantly in the interim. This is not necessarily a valid assumption. Therefore, you should still do a statistical evaluation of your data (both during and after the experiment), and of your test equipment (after the experiment), to check for variability and trends, because you can never be absolutely positive that the data you collected are free from malfunctions in your measurement equipment. All you can do is increase the LIKELIHOOD that the data are free from equipment error.
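One simple form such a statistical check could take is fitting a least-squares line through the readings and comparing the slope against the drift you budgeted for; the data and the drift budget below are purely illustrative.

# Simple trend check: fit a least-squares line through the readings and
# compare the slope against an assumed drift budget.

def least_squares_slope(times, values):
    n = len(times)
    t_bar = sum(times) / n
    v_bar = sum(values) / n
    numerator = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, values))
    denominator = sum((t - t_bar) ** 2 for t in times)
    return numerator / denominator

times = [0, 1, 2, 3, 4, 5]                    # e.g. years into the test
values = [4.98, 5.01, 5.00, 5.04, 5.06, 5.09]
slope = least_squares_slope(times, values)
print(f"observed drift: {slope:.4f} units/year")
if abs(slope) > 0.01:                         # hypothetical drift budget
    print("trend exceeds the assumed drift budget - investigate the IMTE")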

Bottom line: Don't worry that you can't recalibrate the equipment without disrupting the experiment, just continue the experiment and continue examining the data during the experiment. Then, at the end of the experiment, revalidate the test equipment, analyze the data thoroughly to look for ALL possible sources of experimental error (including those caused by faulty test equipment), and draw the best conclusions you can with what you have.
 

Marc

Leader
Any current (2006) comments on Thinking of Measurement as a System?
 

WEAVER

Involved In Discussions
I have been doing research on how to select the instruments that we will include in our additional bias evaluation, and I saw this. Does this mean we have to perform bias studies for all our calibrated instruments as the basis for our calibration cycle?

Thanks in advance,
Weaver
 

Miner

Forum Moderator
Leader
Admin
I have been doing research on how to select the instruments that we will include in our additional bias evaluation, and I saw this. Does this mean we have to perform bias studies for all our calibrated instruments as the basis for our calibration cycle?

Thanks in advance,
Weaver
A well-planned calibration system will automatically cover bias and linearity.
 

Jim Wynne

Leader
Admin
I have been doing research on how to select the instruments that we will include in our additional bias evaluation, and I saw this.
See the earlier post from @Miner. Bias and linearity should be done as a matter of course for all calibrations. I wonder, however, about what "...how to select the instruments...[for]...our additional bias evaluation" means. MSA should include specific parts. What makes the bias testing you're interested in "additional"? Additional to what?
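For illustration only, a linearity check along the lines the AIAG MSA manual describes might look something like this: measure references spread across the instrument's range, compute the bias at each point, and fit a line to bias versus reference value; a slope near zero suggests the bias does not change across the range. All values below are invented.

def fit_line(x, y):
    """Ordinary least squares; returns (slope, intercept)."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    return slope, y_bar - slope * x_bar

references = [2.0, 4.0, 6.0, 8.0, 10.0]
averages = [2.01, 4.03, 6.02, 8.05, 10.06]     # mean reading at each reference
biases = [a - r for a, r in zip(averages, references)]
slope, intercept = fit_line(references, biases)
print(f"linearity slope = {slope:.4f}, intercept = {intercept:.4f}")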
 

WEAVER

Involved In Discussions
See, we already have calibration standards for each measuring instrument. I have been using these calibration standard procedures from the start (for about 16 years now), but these standards came from our HQ abroad. Now there have been revisions to the general rules of calibration to conform to international standards, and we need to review our MSA studies to add bias evaluation. Before, the only MSA we did (at least consciously) was GR&R. So I was required to perform "additional" bias studies, but I do not know where to start. As I have been reading in this forum, some instruments need linearity studies, some need stability studies, and some need both of these or the other bias evaluation tools described in the AIAG handbook. The bottom line is to assure our customers that enough MSA is being applied where it is necessary.
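As a starting point, a bare-bones illustration of a bias study in the AIAG MSA sense: measure a reference standard repeatedly, take bias as the average reading minus the reference value, and judge it against the repeatability. Every number below is invented.

import math

def bias_study(readings, reference_value):
    n = len(readings)
    mean = sum(readings) / n
    bias = mean - reference_value
    std_dev = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    t_statistic = bias / (std_dev / math.sqrt(n)) if std_dev else float("inf")
    return bias, std_dev, t_statistic

readings = [10.02, 10.03, 10.01, 10.04, 10.02, 10.03, 10.05, 10.02, 10.03, 10.04]
bias, sd, t = bias_study(readings, reference_value=10.00)
print(f"bias = {bias:.4f}, repeatability = {sd:.4f}, t = {t:.2f}")
# Compare t against the critical t-value for n - 1 degrees of freedom to decide
# whether the bias is statistically significant.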
 