This needs to be moved to the proper forum. If you have the AIAG MSA manual, you'll find the information you're looking for there. Linearity is the degree to which gage error (or the lack of it) stays consistent across a device's entire measurement range. In other words, if a gage is accurate to within x measurement units at the low end of its range, it shouldn't stray unreasonably from that level of accuracy anywhere else in the range. It's a simple matter to graph this: construct a chart showing the measurement intervals and the corresponding error values; ideally the line through the measurement errors is linear, or as close to a straight line as possible. A linearity study should be done during the calibration process, although in rare cases linearity error shows up during GR&R studies when individuals have difficulty with a gage at one end of its range.
Plot reference vs. bias for every reading, with reference values on the X-axis and bias values on the Y-axis. So if you take 5 reference parts and measure each part 12 times, you will have 5 * 12 = 60 points. Look for any obvious issues in how the points are dispersed at each level; a minimal sketch follows below.
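Here's a minimal sketch of that first step. The reference values and readings are hypothetical placeholders (simulated here just so the script runs); in a real study you'd substitute your measured data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

reference = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # known reference values (assumed)
n_trials = 12

# Simulated readings: each part measured 12 times (replace with real data)
readings = reference[:, None] + rng.normal(0.0, 0.05, size=(5, n_trials))

# Bias = observed reading - reference value, one point per reading
ref_pts = np.repeat(reference, n_trials)            # x: 60 reference values
bias_pts = (readings - reference[:, None]).ravel()  # y: 60 bias values

plt.scatter(ref_pts, bias_pts, alpha=0.6)
plt.axhline(0.0, color="k", linewidth=0.8)
plt.xlabel("Reference value")
plt.ylabel("Bias")
plt.title("Bias vs. reference (5 parts x 12 trials = 60 points)")
plt.show()
```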
Plot a regression line through these points, along with its 95% confidence limits. The formulas for calculating these are given in the MSA manual, or can be found in any standard statistics book on linear regression; a sketch continuing the example above follows.
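Continuing the sketch above (it reuses ref_pts and bias_pts), one way to get the fit and the 95% confidence band is an ordinary least squares fit of bias on reference; I'm using statsmodels here rather than the manual's hand formulas, but the result is the same regression line and confidence limits:

```python
import statsmodels.api as sm

X = sm.add_constant(ref_pts)           # columns: intercept, reference
model = sm.OLS(bias_pts, X).fit()      # bias = b0 + b1 * reference

# 95% confidence limits for the mean bias at each reference value
pred = model.get_prediction(X).summary_frame(alpha=0.05)
lower, upper = pred["mean_ci_lower"], pred["mean_ci_upper"]

order = np.argsort(ref_pts)
plt.scatter(ref_pts, bias_pts, alpha=0.6)
plt.plot(ref_pts[order], model.fittedvalues[order], label="regression fit")
plt.plot(ref_pts[order], lower.to_numpy()[order], "r--", label="95% CI")
plt.plot(ref_pts[order], upper.to_numpy()[order], "r--")
plt.axhline(0.0, color="k", linewidth=0.8)
plt.legend()
plt.show()
```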
Draw a horizontal line at bias = 0 (Y = 0). For linearity to be acceptable, this line should be entirely contained within the confidence bounds. In layman's terms, this would mean that the bias across the range can be said to be constant and zero - well, almost! This can also be confirmed using the two hypothesis tests given in the manual (ta: slope = 0 and tb: intercept = 0); see the sketch below.
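The two tests from the manual fall out of the same OLS fit: statsmodels reports the t statistics and p-values for the slope and intercept directly, so there's no need to compute ta and tb by hand. This continues the sketch above:

```python
# t statistics and p-values for intercept (tb) and slope (ta)
t_intercept, t_slope = model.tvalues
p_intercept, p_slope = model.pvalues

print(f"slope     = {model.params[1]:+.4f}, t = {t_slope:+.2f}, p = {p_slope:.3f}")
print(f"intercept = {model.params[0]:+.4f}, t = {t_intercept:+.2f}, p = {p_intercept:.3f}")

# Is the bias = 0 line contained inside the 95% band across the whole range?
zero_inside = bool(((lower <= 0.0) & (upper >= 0.0)).all())
print("Y = 0 inside confidence band everywhere:", zero_inside)
```

If neither test is significant and the Y = 0 line stays inside the band, that's the "constant and zero bias" conclusion described above.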