Welcome to the Elsmar Cove! Measurement Systems Analysis
ISO 9001 - QS-9000 (Now TS 16949) Information Exchange
18 December 2000
I am revisiting this page as much has changed (evolution) in the last few years in how auditors look at calibration and test -- in large part this is a result of the QS-9000 Measurement Systems Analysis requirements. I first noticed this a couple of years ago when UL began garnering some comments (I got e-mails and the 'old' forum had some questions posted) because auditors started asking some pertinent questions. I 'cut my teeth' on Mil-Std-45662 -- much has changed.
Requirements for calibration have evolved, with an increased focus on technical knowledge and on understanding measurement systems as a whole, including communication within a company. In part, the arguments I made in the e-mail exchanges which follow the table below rested on a number of assumptions, including a certain amount of technical knowledge on the part of the person entrusted to maintain a calibration system. Please read these exchanges with consideration for the fact that I am still learning.
If you have any calibration issues perplexing you, don't hesitate to visit the Forums here. I also recommend you join Greg Gogates ListServe. A number of very respectable experts in calibration participate in the ListServe. E-Mail Greg Gogates -- his e-mail address is [email protected]
A lot is being said about Measurement Uncertainty. I have looked closely at it and all it really amounts to is determining a confidence spread in terms of sigma. Now, before you take me to task for saying "...all it really amounts to..." as if it is simplistic, I am not saying that the calculations are simple. I am saying that the concept is that you are determining a confidence spread (in terms of standard deviations). I have set up a directory in the pdf_files directory here where there are some e-mail exchanges and other information you might want to take a read through. One good read is QMag_Uncertainty.pdf. For right now, measurement uncertainty is still somewhat of a 'black art'.
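To make the "confidence spread in terms of sigma" idea concrete, here is a minimal sketch of the usual GUM-style arithmetic: combine uncorrelated standard uncertainties by root-sum-of-squares, then multiply by a coverage factor k (k=2 gives roughly 95% confidence for a normal distribution). The function names and the budget values are my own illustration, not from any particular manual:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares of uncorrelated standard uncertainties (u_c)."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c; k=2 corresponds to roughly
    95% coverage for a normal distribution."""
    return k * combined_standard_uncertainty(components)

# Hypothetical budget (all in the same units, say mm):
# reference standard, instrument resolution, repeatability
u_components = [0.003, 0.0029, 0.004]
u_c = combined_standard_uncertainty(u_components)
U = expanded_uncertainty(u_components)
print(f"u_c = {u_c:.4f}, U (k=2) = {U:.4f}")
```

The hard part in practice is not this arithmetic but deciding which components belong in the budget and estimating each one honestly.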
We are 'up the road' a bit now. I hope you will use the information within this page as Food For Thought. Remember - Some of the information and views expressed herein are 5 years old.
I do want to thank John Adamek. I suggest you read through his comments closely. I was very busy at the time and playing a bit of a 'know it all'. I was quite argumentative in response to his e-mails, however his foundations and comments are quite valid. I should have listened a bit closer to a valuable resource. I hope you will do the same.
The following is a bit dated - the rage is "What the heck is Measurement Uncertainty and how do I do it?", but may help some of you. Every time I read through it I pick up a 'tidbit' that I missed. For the latest in calibration issues, stop by the Calibration and MSA Forum at the Cove! If your question has not already been asked, please don't hesitate to ask it!
Measurement as a System | Error Types | More Specifics | To get ANSI/NCSL Z540-1-1994
Inspection, Measuring and Test Equipment - General Questions

1. Are all items of measuring and test equipment and measurement standards that are used for product acceptance included in a system for in-house and/or outside calibration services?
2. Are written procedures in use to control and describe all aspects of the calibration system, and do they comply with your company's quality manual requirements?
3. If in-house calibration is done, are detailed written procedures used for calibrations of individual items of measuring and test equipment and measurement standards?
4. Does the system include calibration schedules which provide identification and location of measuring and test equipment and measurement standards, as well as frequency of calibration?
5. Is frequency determined, and adjusted when necessary, on the basis of required accuracy, stability, purpose and degree of usage?
6. Is the equipment available for each inspection, measurement, and test function suitable for the accuracy required for that function?
7. Does the system provide for the mandatory recall of all measuring and test equipment and measurement standards when due for calibration?
8. Does the system provide for periodic or other cross-checks to detect, correct and prevent damage, erratic readings or other degradation prior to the next calibration?
9. Are the required environmental conditions maintained during calibration, storage and use of measuring and test equipment and measurement standards?
10. Are measuring and test equipment and measurement standards calibrated with standards of required accuracy, stability and range, which are traceable to the appropriate national (e.g.: NIST) or international standard?
11. Is the accuracy of reference standards (and other standards, test and measuring equipment, when deemed necessary for maintenance of the required accuracy control) supported by calibration reports, data sheets and/or certificates?
12. Does this supporting documentation attest to the data, accuracy and conditions under which results were obtained? Does it attest to the fact that the highest level standards used have been compared at planned intervals with those of NIST (directly or indirectly)?
13. Are new or reworked measuring and test equipment and measurement standards calibrated prior to use?
14. Are the calibrated settings of measuring and test equipment and measurement standards sealed and/or tamper proof?
15. To the extent applicable, do calibration records reflect:
A. Item name, identification or serial number, and location of use?
B. Frequency of calibration and reference to calibration procedure?
C. Date last calibrated, by whom, and date next due?
D. Identity of standard used to perform calibration or identity of the calibration document?
E. Results of last calibration and deviation from standard values?
F. Results or indications of cross-checks or periodic inspections, maintenance or repairs, if any, conducted between calibrations?
16. Are measuring and test equipment and standards labeled or marked to indicate:
A. Item identification or serial number?
B. Date last calibrated, by whom, and date next due; or, for items too small for application of a label, an identifying code to indicate inspection status?
17. Are inactive standards identified as "inactive" and are they separately stored?
18. Are items requiring only a functional check identified as "calibration not required"?
19. Is limited range measuring and test equipment marked to indicate "forbidden" ranges?
20. When employee owned or customer furnished measuring and test equipment and measurement standards are used, are they controlled in the same way as company owned items?
21. Are protective handling procedures used for transportation and storage of measuring and test equipment and measurement standards for in-house and outside calibration services?
22. Are written procedures in place for assessing the validity of previous inspection and test results when equipment is found to be out of calibration --- AND --- for the disposition of products which have already been tested using the equipment?
23. Is "good housekeeping" maintained in the calibration area?
19990102 - OK - What's the Latest?:
Many things have changed. Let us ponder the 'latest' trend.
The latest version of QS-9000 has brought some changes to Measurement & Test Equipment requirements. What UL started looking for some time ago has been carried into the latest QS-9000 and has become quite de facto. The earlier things I wrote in this 'calibration page' are not automatically invalid. However, the tenor of the whole requirement has changed toward what I have been talking about anyway. You have to understand measurement systems requirements, and you must be ready to explain yours and how it is compliant.
In large part, measurement systems are being discussed in The Forums, and as such I'm not going to go into details here in the future unless something spectacular happens. This is because the forums contain the most up-to-date info, and because there is a larger issue of integration of requirements for 'instruments' and consideration of 'laboratory' requirements in general. Simply stated, there is a need to understand what is going on in a holistic sense. For example, choosing appropriate M&TE during the design stage - it is no longer simply an issue of proving M&TE is calibrated.
So - stop by the forums. There is a SEARCH feature if you're looking for something specific. It's pretty fast and - well, I like it a lot.
19980110 - From the Forum:
Posted by John Adamek on Saturday, 10 January 1998, at 7:57 a.m., in response to Re: Bias, Linearity and Stability, posted by Marc Smith on Friday, 9 January 1998, at 7:50 p.m.
QS-9000, if implemented as per requirement, does in fact require an MSA on all of your measurement devices denoted on the Control Plan. The definition of MSA is Gauge Repeatability & Reproducibility, Stability, Linearity and Bias. All five studies are required on each type of measurement device.
From my experience, only UL audits for the presence of Stability, Linearity and Bias. I would be very interested in hearing from anyone who has been audited by another registrar that has required these studies.
The funny part in all of this, is that the MSA manual does not have a pass criteria for the study of Stability, Linearity or Bias. Next time you see Dan Reid or any of the QS originators, ask them how they would handle it!!
In actual fact, if Stability, Linearity and Bias are conducted as part of the MSA analysis they can be very effective tools. All of our clients conduct these studies as part of their APQP process and have found them particularly useful.
My response was:
I disagree with this. (COMMENT: I was wrong!!) UL is exceeding the requirement if they are asking for all studies to be performed. My disagreement comes from over 20 QS audits in the past 3 years. UL is known for exceeding the requirements. This is one reason I advise clients to avoid UL as a registrar. Registrars I have worked with that have never asked for all this include: Entela, AGA, LRQA, and Eagle - to name a few.
Registration requirements are based upon the QS standard itself. The 'reference' manuals such as the MSA are *not* auditable.
I in no way mean to detract from the possible value of performing these studies *in some cases* - it depends upon the precision required. I personally do not recommend performing these studies except in special cases where precision is an issue. Most of the time these studies are not necessary and would buy you nothing. Let's say you have a measurement you do with a caliper and you dedicate an instrument for it - linearity studies will buy you nothing at all since the measurement is at one place and the rest of the range is not used. If the caliper is calibrated (adjusted) at the reference point, bias does not play a part either.
Let's also take a case where a caliper is used to measure multiple points on a part (or multiple parts). The expectation is that calibration be performed at each point where a measurement is to be taken, or, as a minimum, at multiple points on the scale. If the linearity is not sufficient for the instruments' precision, the instrument will be rejected and replaced. Bias is where you measure a point and the reading is a specific amount away from the reference point. If the bias is significant, the instrument is calibrated to reduce or eliminate that bias. Specific studies should not be necessary unless the instrument is something like a hydrometer which cannot easily be adjusted (calibrated) or cannot be adjusted at all.
I contend this is more an issue of understanding measurement systems for planning than of performing studies on every piece of equipment you have (beyond what the results of calibration reveal, if anything). That would just be plain silly - it would not be cost effective.
If an auditor wanted to address these issues I would respond:
Linearity and Bias - acceptability for the precision required is verified as sufficient or not during calibration. Evidence of linearity (if applicable) is the data recorded post-calibration - look at the highest point and the lowest point. It is pretty easy to see (without a 'study') whether the gage is linear. Bias is addressed during adjustment of the instrument - bias is supposed to be adjusted out (the instrument is supposed to be centered on the appropriate value).
Stability is verified as acceptable or not by gage R&R.
(NOTE: I was technically wrong here in my reply in so far as *Stability Over Time* goes. Gage R&R proves stability at one point in time... Stability over time is evidenced by your calibration history. You simply look at the adjustment [if any] you make each time you 'calibrate'. If you have to adjust a little bit more each time you calibrate, the stability of the instrument is decreasing.)
No other special 'studies' are necessary.
If the auditor doesn't accept this answer, s/he doesn't know jack-shit about calibration and MSA.... Which is not that unusual.
The key to this is the required precision. If you plan correctly when purchasing equipment or when deciding to use current equipment, these issues are moot. You must understand and consider these issues when planning, however, as I stated earlier.
To which he responded:
Re: Bias, Linearity and Stability
Sunday, 11 January 1998, at 2:04 a.m.
I agree with one comment, the QS standard must be audited, as you would know the QS standard does require a MSA study on all measurement devices on the Control Plan, not a portion of a MSA study, or a simple Gauge R&R, but an entire MSA study. Whether we agree with it or not is not really the point, the QS requires it as a "Shall" and therefore must be conducted.
Another point, in my experience UL is the most experienced and professional registrar I have come across, they have a good reputation world wide and they rightly deserve it.
To which I responded:
First, can we agree that a comprehensive study (at best) would be performed (if at all) *only* on a new type of equipment brought into a facility? And that if you already use B&S calipers (for example), you do not do studies on every caliper you buy (although you *DO* do gage R&R on each)?
I would appreciate an example of a 'study' - the instrument and how the studies differ from what one would gather from calibration data. If, during calibration, you verify readings from 4 points on a caliper (including low end and high end), is linearity not evident? Is adjustment of the instrument to an international standard not eliminating (within the precision of the instrument and assuming the standard you use is valid) bias?
For example, in the MSA, page 26, you will find the BIAS study description. This is nothing more than what is done during calibration - you check several points on the scale against a standard. I don't think you need another 'study' or 20 points to verify what is done as a normal part of the calibration procedure, nor do I believe you have to plot the data to know the bias. If you can't 'zero' the instrument, I would recommend you toss it. Since bias by definition is linear, if the low end of the scale can be zeroed but the high end cannot, the instrument is NOT LINEAR - which is a separate issue from bias (since you can zero one end but not the other). From the calibration information, linearity is easy to see.
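The point that bias and linearity fall straight out of ordinary calibration data can be sketched numerically. Given readings taken at a few checkpoints against reference standards, bias at each point is just observed minus reference, and a least-squares slope of bias versus reference value shows whether bias is constant across the range (good linearity) or growing (poor linearity). The function names and the caliper numbers below are hypothetical illustrations, not from the MSA manual:

```python
def bias_at_points(reference_values, observed_values):
    """Bias at each calibration checkpoint: observed minus reference."""
    return [obs - ref for ref, obs in zip(reference_values, observed_values)]

def linearity_slope(reference_values, biases):
    """Least-squares slope of bias vs. reference value.
    A slope near zero means bias is constant across the range."""
    n = len(reference_values)
    mean_x = sum(reference_values) / n
    mean_y = sum(biases) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(reference_values, biases))
    den = sum((x - mean_x) ** 2 for x in reference_values)
    return num / den

refs = [10.0, 50.0, 100.0, 150.0]      # hypothetical gage-block references (mm)
obs = [10.02, 50.02, 100.03, 150.05]   # hypothetical caliper readings (mm)
b = bias_at_points(refs, obs)
print("bias:", [round(x, 3) for x in b],
      "slope:", round(linearity_slope(refs, b), 6))
```

With four or five calibration checkpoints spanning the range, this is all the "linearity study" a typical caliper needs.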
In so far as the effects of temperature and general conditions go ---> This is where an understanding of the system as a whole comes in. For Example - If you are calibrating in a temp controlled lab but use the device in a 'hot' environment (say 95 degrees F) *AND* the precision of the measurement is significant, a short study to verify effects of (say, for example with a micrometer) heat (now we get into the coefficient of expansion of both the object being measured AND the instrument) could be of significance. As I stated in each of my posts, the necessary *precision* would determine the need. In the case of calibration in a lab and use in a hot environment, if the precision is such that the coefficient of expansion would have a significant impact on the measurement, the object being measured would have to (by default) be measured in temperature/humidity controlled conditions as well to get a true reading.
We could get into esoteric stuff like x-ray and such, but for now I'll confine the discussion to linear length/volume measurements. I have seen cases of SPC on test equipment done (eg: wafer probe in the semiconductor industry) and the results were predicted by Gage R&R.
Now comes the factor of STABILITY, which is simply the capability of the instrument to stay within an acceptable range between calibrations (over time). We are really looking at the 'use' factor, the 'wear-out' factor and the resulting drift. As the instrument is used more and more, it wears and becomes unstable because of that wear (see pages 21-22 of the MSA). This is the instrument's drift (from the 'zero' point it was adjusted to, to where it is when it is again calibrated), which becomes the instrument's bias (and which increases over time). The drift is typically a factor of the frequency of use (the more the use, the more the wear). If you have high drift with infrequent use, you probably chose either a cheap piece of M&TE or the wrong equipment for the intended use. Addressing stability is the basis of the calibration INTERVAL. If the instrument is unstable because of heavy use (a lot of use tends to make the instrument drift, and drift is typically seen in one direction), an increase in calibration frequency is prescribed (or buy another piece of M&TE which is sturdier, etc.). In fact, the before and after data required as 'standard calibration data' is intended to show the wear and drift of the instrument over time. From this data over time you can use predictive methodologies for calibration intervals.
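Since stability over time is evidenced by the calibration history, a quick sketch of the idea: take the as-found deviations recorded at each calibration (before adjustment) and look at how much they change per interval. A steadily growing deviation suggests wear-driven drift and argues for a shorter interval. This is my own minimal illustration with hypothetical history values, not a method prescribed by the MSA manual:

```python
def drift_per_interval(as_found_deviations):
    """Average change in as-found deviation between successive calibrations.
    as_found_deviations: the 'before adjustment' error recorded at each
    calibration, oldest first. A consistently positive (or negative) result
    indicates one-directional drift, i.e. decreasing stability."""
    diffs = [b - a for a, b in zip(as_found_deviations, as_found_deviations[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical as-found deviations (mm) at four successive yearly calibrations
history = [0.001, 0.003, 0.006, 0.010]
print(f"mean drift per interval: {drift_per_interval(history):.4f}")
```

Here the deviation grows by a few thousandths each year, so this hypothetical instrument is wearing in one direction and its interval probably deserves a second look.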
I maintain that a good understanding of measurement systems is important to those deciding on equipment to use for inspection and tests and to calibration personnel, but except in cases where an unusually high level of precision is required, or where conditions of use are extreme or unusual, studies outside information gathered during the calibration process and gage R&R is unnecessary.
In your post you state "...as you would know the QS standard does require a MSA study on all measurement devices on the...". Where in QS9000 does it state that studies for bias, stability and linearity must be performed beyond what is done during calibration? It *does* say (4.11.4) that methodologies utilized must conform to those in the MSA manual, but I have only found in QS 9000 the requirement for Gage R&R - not linearity, stability and bias studies *other* than the information inherent from calibration. Please cite the specific paragraph in QS which states linearity, bias and stability studies are required. If studies for linearity, stability and bias (outside data from calibration) are required, we can only assume hundreds of QS registered companies are not in compliance and that the registrars (other than UL) are not in compliance with their charter.
As far as UL goes - UL is a very good company - I am not knocking UL - but they are known to be picky and that they sometimes go beyond the intent of QS9000 (and ISO 900x for that matter). There are other companies just as good as UL.
Please understand I am addressing this in length because I believe the MSA stuff is very misunderstood in the light of reality. The MSA book gets people to thinking they have to use 'heavy' analytical techniques and detailed studies for even simple systems (such as those utilizing calipers). I believe that in general registrars (with the possible exception of UL) see the calibration system and Gage R&R as sufficient in addressing the issues of linearity, stability, bias and R&R.
I will admit that as the measurement required becomes more subjective (colour, taste, etc.) the value of detailed studies increases.
970518->From: [email protected]
->Subject: R&R Gage Study
->Date: Wed, 14 May 1997 19:36:31 -0400 (EDT)
->I am doing a study for an Engineering Management course I am taking and I am
->trying to get information about R&R Gage studies. I think I pretty well
->understand what the study is, but in a statistical sense, I don't know what
->the null hypothesis is for the study, nor do I know how to interpret the
->x-bar chart used in the study. Can you help?
Well, you have me - and I just went through a Stat-A-Matrix MSA course last week! I'm not a statistician. In short, I took stats in college years ago and still don't understand what a null hypothesis really is. If you can briefly explain this to me in an e-mail, I sure would appreciate it. The assumption is the gage is acceptable and there are no differences between technicians.

A quick word on the Stat-A-Matrix MSA course. It was given by Michael F. Flynn - a Sci-Fi writer who is well versed in statistics. Mr. Flynn did a very good job presenting the subject material. I liked the course, but unlike mine it is almost entirely statistically focused. My only complaint is that many of us really wanted more info on how it correlates with the AIAG MSA manual and how QS-9000 fits in with it all. I was disappointed with the class materials - there were numerous transcription errors and mistakes. My course is more general, addressing some of the statistical elements but more focused upon what you need to do to comply with QS-9000. My course is a one day affair. Let me know if you are interested in my course. The next planned public date as of this minute is in Bangkok, Thailand on 4 July 1997. It can be presented in house - I have no immediate plans to hold a public course in the US.
Bottom line for the Stat-A-Matrix course: The Stat-A-Matrix course is worth it if you have a good statistical background and are mainly interested in the methodology and details. Most people are going to want to stick with the form in the AIAG MSA which is simple to use, QS9 acceptable, and can rather easily be put in a spreadsheet for automatic calculations. I believe they also want a focus on QS9 compliance issues. Also - the food was Excellent and Plentiful!
I don't know which x-bar chart you are studying, as I can't see your book from where I am. Shewhart charts can be used in a number of ways. They are used to show VISUAL TRENDS. The key word is VISUAL, or graphical. The first thing to remember is you can test a single parameter or multiple contrasting parameters or functions in several ways. A Shewhart chart (which the auto industry calls a Control Chart) is a way of observing trends - both within group and between groups. If you perceive inspection as a process, you can evidence stability of the instrument by periodically charting a reading, much as you do in production when the instrument is taking readings and is charted. BUT - you cannot use production data directly as the process noise is too high. You have to go down by a factor of 10 (at least), and you can also add extra 'fixtures' or technicians. Essentially you are making a control chart. You determine a USL and LSL (upper and lower 'spec' limit) using your tolerances. So - as you chart, you watch to make sure the readings stay within the limits. It's a bit too much to go into control charts here - such as determining limits, etc. - so if you don't understand them you have a little work to do.
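For readers who want the "determining limits" part spelled out: the usual X-bar/R approach takes small subgroups of repeated readings, charts the subgroup means, and sets control limits at the grand mean plus or minus A2 times the average range (A2 being the standard control-chart constant for the subgroup size). The function and the reference-part readings below are a hypothetical sketch, not data from any study:

```python
# Standard X-bar/R control-chart constants (A2) by subgroup size
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_limits(subgroups):
    """Compute (LCL, CL, UCL) for an X-bar chart from subgrouped readings.
    subgroups: list of equal-size lists of repeated measurements."""
    n = len(subgroups[0])
    xbars = [sum(g) / n for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = sum(xbars) / len(xbars)
    rbar = sum(ranges) / len(ranges)
    return (grand_mean - A2[n] * rbar, grand_mean, grand_mean + A2[n] * rbar)

# Hypothetical: the same reference part measured 3 times at each periodic check
data = [[10.01, 10.02, 10.00],
        [10.02, 10.03, 10.01],
        [10.00, 10.01, 10.02]]
lcl, cl, ucl = xbar_limits(data)
print(f"LCL={lcl:.4f}  CL={cl:.4f}  UCL={ucl:.4f}")
```

Plotting each new subgroup mean against these limits is the "instrument stability chart" idea: points wandering outside, or trending in one direction, flag instability before the next scheduled calibration.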
General factors to consider when setting up a test include, but are not limited to:
- Fixturing or Technician & Training
- General Human Factors
- Environment - Where used vs Where test performed vs Where calibration done
- Material to be inspected
- Characteristic to be inspected
- Sample collection and preparation
- Type and scale of measurement
Instrument factors to consider when setting up a test include, but are not limited to:
- Discrimination [Resolution]
- Bias [Accuracy]
- Repeatability [Precision]
As far as 'standard' R&R goes, for the automotive industry the AIAG has set up a standard form for 2 or 3 fixtures or technicians measuring 10 parts 2 or 3 times each. Their approach is TEXT or NUMERICAL based.
When you want to SEE the trends, use graphics like a Shewhart chart.
A last quick comment - For QS 9000 folks: Gage R&R must be performed on all M&TE listed on your control plan(s).
Hope this helps. Regards,
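The average-and-range arithmetic behind that AIAG standard form can be sketched roughly as follows. The K1/K2 constants are the standard AIAG values for the stated numbers of trials and appraisers; the function name and data layout are my own, and the result is in measurement units (divide by total variation or tolerance yourself to get a %GRR):

```python
import math

K1 = {2: 0.8862, 3: 0.5908}   # keyed by number of trials
K2 = {2: 0.7071, 3: 0.5231}   # keyed by number of appraisers

def grr_average_and_range(data):
    """Sketch of the AIAG average-and-range method.
    data[appraiser][part][trial] -> (EV, AV, GRR) in measurement units."""
    a = len(data)                # appraisers
    n = len(data[0])             # parts
    r = len(data[0][0])          # trials
    # Repeatability (EV): average within-cell range times K1
    cell_ranges = [max(trials) - min(trials) for appr in data for trials in appr]
    rbar = sum(cell_ranges) / len(cell_ranges)
    ev = rbar * K1[r]
    # Reproducibility (AV): spread of appraiser averages, corrected for EV
    appr_means = [sum(t for part in appr for t in part) / (n * r)
                  for appr in data]
    xdiff = max(appr_means) - min(appr_means)
    av_sq = (xdiff * K2[a]) ** 2 - ev ** 2 / (n * r)
    av = math.sqrt(av_sq) if av_sq > 0 else 0.0
    return ev, av, math.sqrt(ev ** 2 + av ** 2)

# Hypothetical toy data: 2 appraisers x 2 parts x 2 trials
data = [[[10.01, 10.02], [10.11, 10.12]],
        [[10.03, 10.02], [10.13, 10.14]]]
ev, av, grr = grr_average_and_range(data)
print(f"EV={ev:.4f}  AV={av:.4f}  GRR={grr:.4f}")
```

As noted above, this whole calculation drops easily into a spreadsheet, which is exactly what most QS-9000 shops do with the AIAG form.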
> I've got a question about QS9000/ISO9000 and calibration. We have several
> different measurement instruments which we perform calibrations on. These
> include micrometers and calipers. My question is do we have to do R&R
First off, do any customers specifically require R&R on attribute gages (GM often does)? Then defer to the AIAG Measurement Systems Analysis manual. Do R&R only on variables M&TE with which you take critical data. The rule of thumb is: if it's a measurement you take data from and you record that data (other than attribute M&TE data), do an R&R. In general it's critical characteristic measurements they want to see R&R on. A last thing: the M&TE you do R&R on is where you are measuring the same dimension over and over again (as opposed to M&TE you use for a variety of situations, on numerous dimensions, and on different parts). If you are in a QS situation, every gage on your control plan MUST have an R&R.
> studies on all of these? Currently we perform R&R on all micrometers but
> only on digital calipers. This is because the vernier and dial calipers are
> only used when measuring very large tolerances.
> Also we have go/no-go gauges which are used to test for thread acceptability.
> We do periodic calibrations on these as well but definitely no R&R.
R & R: The ol' bogeymen. Well, first off let's look at the facts. Attribute 'measurements' are taken for rough 'checks'. If an engineer has a critical (or other 'important') dimension, they're not going to be playing an attribute game. They'll call for a variable. Bottom line is R & R is seldom performed on attribute gages. BUT - I know one place where GM made one attribute check a customer requirement... Have you spoken with your customer about their expectations and requirements?
Does that help? If not, give me a call. BUT - one last item. Whenever you have any question, get in contact with your registrar and pose your question. DO NOT ask 'What should we do?' as you will be inviting a 'consultancy' conflict of interest. You CAN ask if 'whatever you want to ask' is ACCEPTABLE, or if 'whatever you want to ask' meets the intent of the specification, in his/her opinion.
The following is a true story. Talk about 'We Told You So': Everywhere one reads one hears about calibration being a leading failure mode in the registration process - This is a timely example of the Peter Principle. While the calibration procedures were OK and the calibration area did not get the 'ding', misunderstanding of the significance of calibration (by management) was ultimately the culprit.
All went well thru a QS 9000 audit until the calibration system was checked. The company had an excellent calibration lab with a person specifically, solely tasked to oversee and totally control the system including fixture PM & Cal. This guy really knows calibration. He is GOOD! It is his living.
The company had a specific piece of test equipment - only one. It crashed about 1 week prior to its scheduled yearly calibration verification. A suggestion was made - someone knew the local university had an identical piece of test equipment. That machine was five years old and was only used several times a semester - it was essentially a 'virgin' piece of M&TE. A management decision was made whereby the required production samples were sent to the local university with company technicians to perform the required production tests. This by-passed the internal calibration system controls, and the person responsible for the calibration system, for the period of time it took for the company's test equipment to be fixed.
The auditor finds this occurred while tracing several random calibration records. He noted that the schedule cycle called for 12 months and it was calibrated at 13 months. A possible 'Major'. Because:
The university considered the equipment training equipment, used for demonstration purposes only, and therefore did not have it in a 'calibration system': no calibration cycle was defined, the equipment was 5 years old and had not been calibrated since installation, etc. Worse, the event occurred quite some time earlier. This brought into play a containment issue which the auditors found troubling... Sort of a double whammy! The company officials who authorized this figured that since the equipment was hardly used it should (even though it was known the equipment was over 5 years old) be 'in calibration'. No consideration was given to students possibly crashing the equipment or such. But the simple fact remains that the piece of equipment was not in an adequate calibration system.
Failure cited as a process deviation system failure (calibration was NOT dinged...)
Root Cause of Failure:
1. Management failure to understand this is a process deviation involving calibration and thus did not consult with company calibration resources (personnel) for comment or direction. In fact, the calibration personnel should have been intimately involved with approving any outside 'inspection' source(s). It should be noted that management often does not ask if it is OK to do something like this - they most often 'direct' that it be done this way without consultation with the person(s) within their organization who know whether it is acceptable or not.
2. Documented systems (process deviation and calibration) did not specifically address this scenario as a possible process deviation.
AND AS ALWAYS ---> Remember that calibration records MUST include readings from the device as it is brought in and, if the device settings are changed (the device is calibrated [read ALIGNED]), the new readings must be recorded - In short, Before and After readings must be documented!
Calibration requirements for QS and ISO 9000 are essentially the same. QS 9000 adds 4.11.3 - it restates that all (including employee owned) equipment must be in the recall system and details 3 conditions (see the spec). 4.11.4 requires evidence of gage R&R (unless you can justify why you see no need for it - e.g.: instruments used in special studies, design, etc.). There are some tricks to this, but essentially you have to logically back up and be ready to explain your reasoning if you choose not to perform R&R.
R&R is particularly relevant to production. In your message you do not elaborate, however I am assuming you manufacture scales. Let's say you have certain screws you torque - if the instrument used to install the screw 'automatically' sets torque, the instrument has to be in the system (even though it is not a gage per se). If someone installs the screw and then at some point tightens to a torque spec using a torque wrench, the torque wrench has to be in the system. Both cases would require R&R (production equipment performing a repetitive single measurement). QS 9000 is the stickler here, and it actually points to the AIAG Measurement Systems Analysis manual, which you must follow.
NOTE that QS 9000 is a pointer to several AIAG documents. In this fashion, the QS 9000 document REQUIRES COMPLIANCE TO DOCUMENTS REFERENCED (e.g.: AIAG's Production Part Approval Process [PPAP] manual) WITHIN THE QS 9000 DOCUMENT!!!
Other than this the system requirements are pretty standard: You have to have records (including all past records). You have to be able to explain (justify) calibration cycle frequencies for each instrument (longest = 2 years, standard = 1 year, extensive use = based on use, justified by calibration records).
Concern comes in when an instrument is brought in for calibration (which is really calibration VERIFICATION) and is found to be out of calibration. You have to have a reaction plan for this (how many bad measurements were made and what effect could this have on items produced where this questionable instrument was used).
The rule of thumb is: if records show that every time an instrument is calibrated it is in calibration, you can lengthen the calibration cycle to a maximum of one year (e.g., you have an instrument and you start out checking it at 3 months - if it is in calibration, extend the period to 6 months; if it is in calibration the next time it comes in, extend the calibration frequency to 1 year). Do not exceed 1 year unless you are sure you can justify it (e.g.: a seldom-used surface plate, gage blocks, etc. may go for 2 years - DEPENDING UPON USE).
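The doubling rule above can be sketched in a few lines of Python. This is purely an illustration of the rule of thumb - the function name and the choice to fall back to the short 3-month cycle after an out-of-tolerance finding are my own assumptions, not anything from the spec:

```python
def next_interval_months(current_months, found_in_tolerance):
    """Rule-of-thumb calibration interval: double on each in-tolerance
    result (3 -> 6 -> 12 months), cap at one year, and fall back to the
    short cycle after an out-of-tolerance finding."""
    if not found_in_tolerance:
        return 3
    return min(current_months * 2, 12)
```

Anything beyond the 12-month cap (the seldom-used surface plate going to 2 years) is a deliberate, documented exception, not something the rule hands out automatically.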
If you are supplying the auto industry, you really want to look at the need (requirement) of your customer for your firm to be registered to QS 9000 or ISO 9000. Are they requiring you, as a provider of inspection equipment (i.e.: NOT a supplier of parts which actually go into the vehicle), to be QS registered? Seems silly to me, but then QS 9000 is silly to me. Also consider that both Ford and Chrysler are 'expected' within several years to drop QS 9000. It is foreseen in many circles that QS 9000 is a dead document in its current form. NOTE THAT THIS IS CURRENTLY HEARSAY - A RUMOUR. However, I believe there's truth to it.
Mr. X wrote: > > In order to be in compliance with QS standards, our company must investigate our suppliers and our calibration
> sources. Do the auditors responsible for verifying our compliance come with any documentation as to their
> suitability? Is there any way for a company to investigate the experience of a particular auditor? What do you do
> when an auditor requires documentation not supported by the QS9000 manuals?
All they want to see is your evidence that your suppliers are suitable for your company in making the product(s) they make for you. Let's say you and I both use XYZ Company to supply a part or product that is identical - we both buy the same exact item. I may have a different rating system, or be somewhere where shipping is more of a problem; I may rate them low while you have had excellent service. The auditors do not know about XYZ Company's relationship with me, and even if they did, their only concern is that you monitor and rate your suppliers as appropriate to your specific situation. The answer is: No - your rating of XYZ Company and my rating of XYZ Company (even if it is the exact same product we each buy from them) are not related; the auditors do not check, they do not care, and it is (straight out) none of their business how anyone else rates your supplier, or your supplier's rating with any other company.
>Is there any way for a company to investigate the experience of a particular auditor?
You have every right to (and should) ask your registrar for any relevant information (education, scope of experience, focus of experience) about the auditor. You should, however, be tactful. In general you want the same lead auditor to 'stick with' your plant; you definitely DO NOT want a different lead auditor every audit. This is a relationship you have to develop and nurture. Base it on honesty and straightforwardness. You'll be seeing that guy or gal about every 6 months and - like in a marriage - you have a lot to develop in these areas.
>What do you do when an auditor requires documentation not supported by the QS9000 manuals?
I asked: Can you give me an example?
I got this reply:
Perhaps a little background is in order. I am an instrumentation technician in charge of all our electronics. Yesterday we had a preliminary "fault finding" audit conducted by Underwriters Laboratories. The auditor spent all of about 5 minutes in the lab. He picked a piece of equipment and asked to see the calibration records. After scrutinizing the certificate he claimed that it was invalid as it did not reference the NIST standards by number. We don't do our own calibrations, GE Capital provides this service. I couldn't believe that a large company with a client base of huge corporations and nuclear facilities would make a mistake like this (or try to rip us off, as the auditor suggested). I contacted GE and they sent me documentation which seemed to back up their contention that the NIST numbers are not necessary to show traceability of secondary (and beyond) standards.
Hopefully this is all much ado about nothing, but I want to be prepared when the REAL audit occurs. I'm sorry about the length of my follow-up. I really appreciate you taking the time to help me. Your pages on the net have been a real help (a beacon of light in the ISO-QS fog) to me and, I'm sure many others. Like you I come from a military background (12 years at General Dynamics) and I have found the QS standards vague and nebulous. I imagine this has been good for you as the "tiller man" navigating clients (and parasitic barnacles like me) through these dark waters.
My next to last response:
Howdy - to bring this to closure, I spoke with one last resource a few minutes ago. She and I have both scoured the applicable specs, and there is nothing calling for the specific number anywhere down the document chain.
Several respondents have told me that when they look through their calibration records, they find the 'source' number on some certificates and not on others. In short, we believe the auditor is full of..., errrr, he/she needs training. But I would wait until the registration audit before you get testy with the registrar. Be ready with all the applicable specs (QS 9000 and the ISO and ANSI calibration specs) and request (if he/she points out the lack of the number) that they show you where it states that the specific number has to be on the cert. You do have to have a copy of at least the first page of the procedure they use to calibrate the instrument and a few other odds & ends, and they have to state that the standards are traceable to NIST *** OR *** another national or international standard. Note that the specs say national OR international.
If you don't get satisfaction, you have to go through a post-registration process to have the disagreement considered by the company's board. You can at any time ask about the credentials of any auditor without fear.
And one last response:
Marc, this is my experience in the matter. We "approve" our vendor laboratories in accordance with our Work Instructions. Procurement invokes the requirement and QA Calibration does the actual approval. ISO 25 and ISO 9000 certification is considered. All labs presently used have had their Certificates of Calibration reviewed prior to contracting services. All must state environmental conditions, "as found" data, and a statement that they are traceable to a standard (i.e., NIST). This is required for our own calibrations, also.
I would not wait to challenge the auditor. With the large increase in auditors with little experience, those of us with years of experience are appalled at the subjectivity imposed. You don't have to be nasty; just politely state that you were unaware of the requirement and ask the auditor: "Show me where in the standard the requirement exists so I can reference it in the Work Instruction." All good auditors stick to the letter of the standard. We know that the requirement can be met in numerous ways.
I will admit that past work experience (We did it this way at XYZ) can cloud one's assessment. It is human nature to try to impose what we THINK should be done. Auditing can be a power trip for some. Good auditing comes from EXPERIENCE. A 32-hour class and a couple of audits do not necessarily a good auditor make.
By the way, Radley Smith (one of the co-authors of QS 9000) discussed this subject in Dallas several months back. His story was that auditors were requiring documents to be in ink. His general comment: "I defy them to show me where it says that documents shall be done in ink." Poor auditors nitpick the little things and miss the big process picture.
Signed: Mr. X
The Elsmar Cove