Since the subject of this thread is linearity, and since linearity is a 'big' topic right now, I submit the following:
Guess I'll take a stab at this.
The first thing to consider is that during the design stage (ideally) decisions are made as to what measurements are to be made (inspections, tests, etc.) as well as the precision necessary. What is acceptable is determined at that time. The 'standard' is the 'old' factor of 10. So far, we are talking hypothetical. If 1 always = 1 exactly, there would be no problem - a 'perfect world'.
This said, reality comes into play. There are several considerations which we know include bias, linearity and stability.
Linearity of the Device: Linearity is how well the measurement device tracks. This is to say, if you check the device against a standard (or standards), is the bias equal along the range of the instrument scale? These concepts are illustrated in the MSA manual on pages 16 through 18. However, there is the linearity of the device and there is the corresponding 'corruption' of the measurement process. You might think of this as an additional factor. You can graph the linearity of an instrument quite easily. In fact, for a long time I argued that a competent calibration person doesn't need to graph linearity - if you can read the numbers, linearity of the instrument is apparent (sometimes you don't need a 'picture'). This is to say, for example, you calibrate a measurement device at x points along its range against 'standards'. From this you can graph the linearity of the instrument. An argument can be made that the calibration lab must also do an R&R against the standard(s) used and the operator(s) performing the calibrations - you have the same influences at the layout level as you do at the part measurement level.
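To make the 'read the numbers' point concrete, here is a rough sketch of how you could look at device linearity straight from calibration data. It is only my illustration, not a form out of the MSA manual - the reference values and readings are invented:

import numpy as np

# Invented example: readings taken against calibration standards spread
# across the instrument's range.
reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])            # standards (mm)
reading   = np.array([1.002, 2.001, 3.004, 4.006, 5.009])  # instrument readings (mm)

bias = reading - reference                         # bias at each point on the scale
slope, intercept = np.polyfit(reference, bias, 1)  # straight-line fit of bias vs. size

print("bias at each standard:", bias)
print("slope = %.5f  intercept = %.5f" % (slope, intercept))
# If the slope is near zero, the bias is essentially equal along the range
# (good linearity); a clear slope means the bias changes with size.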
The MSA book looks at linearity (and other factors, including bias) with consideration to the 'measurement error' mentioned above. That is to say, the entire measurement system. If you look at linearity in the MSA book (page 35) it appears that they want a somewhat convoluted method to be used. In fact, it also raises more questions. For example, they say you should take parts to be measured which vary over the operating range of the device. Then you take those parts and do a layout on them to determine the 'reference value' of the part. Now you have the uncertainty of the device used to do the layout involved. To be 'real' you would have to do an R&R of the layout device on that part and the appropriate dimensions.
But - what it looks like to me is that they are simply combining several sources of deviation from the defined 'reference' part values. It should be noted that since you are just taking parts and measuring them to determine their 'reference value', you are doing nothing more than making those parts 'calibration standards'. I believe they are doing this without explaining that they are assigning each part a 'reference' value based on the expectation that the layout inspection is made with an instrument with a discrimination of 100 times or more instead of the 'Factor of 10' discrimination. I see no reason (I could be wrong, but please don't just say "You don't understand..." - explain to us all exactly why I am wrong) why you cannot take, for example, Jo blocks (gage blocks) and determine the linearity of the device.
That said, I will say that measurement of any dimension on a part with a measurement device is a possible source of error - it depends upon what is being measured and with what. Measuring (let's say with a caliper) a simple thickness of a plate is somewhat different than measuring the circumference of a shaft, which is different than measuring a length on the shaft from one 'feature' to another 'feature' (like the distance from the center of a groove on the shaft to the center of a raised ring on the shaft). In saying this I am trying to point out that some measurements are inherently not easy to make consistently - which is the reason for an R&R. As I understand it, the MSA manual wants linearity to be determined with R (repeatability) built in. I cannot say why you have to do them together, but I can accept the methodology.
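For what it is worth, here is a simplified sketch of the kind of study the manual seems to describe - parts with assigned 'reference values', measured several times each, with the bias of every individual trial regressed against the reference value. The data are invented, and the manual's worksheet adds index calculations I have left out:

import numpy as np

# Invented data: 5 parts spanning the operating range, 3 trials per part.
trials = {
    2.0:  [2.03, 2.01, 2.02],
    4.0:  [4.02, 4.03, 4.01],
    6.0:  [6.00, 5.99, 6.01],
    8.0:  [7.98, 7.97, 7.99],
    10.0: [9.95, 9.96, 9.94],
}

xs, ys = [], []
for ref, readings in trials.items():
    for obs in readings:
        xs.append(ref)        # part 'reference value'
        ys.append(obs - ref)  # bias of this individual trial

slope, intercept = np.polyfit(xs, ys, 1)
print("bias vs. reference: slope = %.4f  intercept = %.4f" % (slope, intercept))
# The slope is the linearity component; the scatter of the individual trial
# biases about the fitted line is where the repeatability comes in.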
------snippo-----------
> Dear Marc
>
> Subject: QS-9000 MSA Process variation
>
> I have been studying the manual once more.
>
> On page 20, see the following convoluted sentences:-
>
> "A measurement system will have adequate discrimination if its
> apparent resolution is small relative to the process variation. Thus a
> recommendation for adequate discrimination would be for the apparent
> resolution to be at most one-tenth of total process six sigma standard
> deviation instead of the traditional rule which is the apparent
> resolution be at most one-tenth of the tolerance spread." (sic)
The only thing they are doing here is changing the yardstick from the stated tolerance to the range of the process variation.
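Same 'factor of 10' rule, just measured against process spread instead of tolerance. A tiny sketch with invented numbers (mine, not the manual's):

# Invented numbers, just to show the two rules side by side.
resolution       = 0.01   # apparent resolution of the gage (mm)
tolerance_spread = 0.20   # total tolerance, e.g. +/-0.10 mm
process_6sigma   = 0.12   # 6 x process standard deviation (mm)

print("traditional rule (tolerance):", resolution <= tolerance_spread / 10)
print("MSA recommendation (process):", resolution <= process_6sigma / 10)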
> On page 26, appears the following sentence:-
>
> "If an index is desired, convert bias to a percentage of process
> variation (or tolerance), by multiplying 100 and dividing by the
> process variation (or tolerance)." (sic)
Well, if you want an index...
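If you do want one, the arithmetic is straightforward. A quick sketch with invented numbers (mine, not the manual's):

# Invented numbers, just to show what the sentence on page 26 means.
bias              = 0.004  # observed bias (mm)
process_variation = 0.12   # 6-sigma process variation (mm)
tolerance         = 0.20   # total tolerance (mm)

print("bias as %% of process variation: %.1f%%" % (100 * abs(bias) / process_variation))
print("bias as %% of tolerance: %.1f%%" % (100 * abs(bias) / tolerance))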
> Tolerance is normally given absolute boundary values on a drawing (e.
> g. +/- 0.01 mm). You are correct when you state that the tolerance
> needs to be stated as a percentage, as the bias has to be considered
> against the tolerances over the range of the instrument.
>
> If I understand the above, the tradition is that the allowable bias of
> the instrument throughout its range should be less than a tenth of the
> manufacturing tolerances.
The traditional 10x based upon the tolerance assumes certain things about the tolerance which are not always true. You and I both know that tolerancing a drawing is subject to personal whims as well as 'industry standards'. I have gotten tolerance changes on drawings many times. Some of the changes admittedly changed the required precision (resolution) of the measurement device. This is eliminated if you choose the instrument based upon the process spread.
> What is the meaning of these crucial sentences in the manual? What is
> total process six sigma standard deviation?
>
> I looked through your forum and it appears that the subject of
> "process variation" has been a problem for several years.
Yes - several years, for sure.
> Maybe the authors know the answer. I have been trying to fax Dan Reid
> and Joe Branski at GM - do you have an Email address for the MSA task
> force?
>
> Thanks for your help so far.
>
> Kind regards - John
I think the complexity introduced by the MSA book is somewhat confusing; however, all is not lost. If I were being audited and linearity of a measurement device was discussed, I would explain that linearity of the device is determined during calibration of the instrument against appropriate reference standards instead of first taking a group of parts and (essentially) measuring them and making them 'standards' (that's what the deal is with their saying you have to determine each part's reference value). Now the auditor says "OK - what about where the MSA book wants you to take 5 parts and (see page 36)..." I would answer that that is addressed during R&R studies. If you look closely at what is happening on page 36, they are simply combining two components in a way that says 'You can do this at home in 1 experiment'.
I think this is an example of the major problem with the MSA book: they do not talk about linearity as determined by a cal lab using instruments with a resolution of 100x or 1000x. They talk about linearity as a function of using production parts, and they include repeatability. Remember that your cal lab (internal or external) should have an R study on the calibration method... As you go through the MSA book you may notice a lot of the stuff overlaps.
This said, I'll stop here and wait for some others to comment.