Evaluation of Machine Stability and Performance Using Linear Regression Across Multiple Calibration Weeks

Hello,

The aim of the study is to evaluate whether the machine remains stable and does not experience slippage during calibrations.
For each calibration, I use five consecutive quantities, 0.100, 0.200, 0.300, 0.400, and 0.500, to estimate the performance. The machine also reports a linear regression, e.g. y = 0.858433x + 0.0032156.

When estimating the equations using Minitab for different weeks, I observe the following forms:

Week 1:
Regression Equation:
Y = −0.00120 + 1.02400X

Week 2:
Regression Equation:
Y = 0.001100 + 1.017X

However, when I combine them, I get the following forms:

Week 1 and 2:
Y = −0.00015 + 1.02050X
Y = 0.00005 + 1.02050X


1. What causes the differences when I combine the equations from the individual weeks into one combined equation?
2. Am I following the correct method for evaluating the machine's stability and performance across these calibration weeks?

Note:
I will need to compare data from over 30 calibration weeks. How can I handle such a large dataset effectively in Minitab to get accurate and meaningful results?
 
I am trying to picture what kind of machine you are asking about. You mention slippage, which I think of as mechanical degradation, but you could mean slippage rhetorically.

When you perform linear regression on 30 data points with X,Y values, the computer will give you a best-fit linear equation no matter how well the 30 points actually line up. One indicator of how well the points line up is the R-squared output of the regression program. You should generally plot your x,y data to understand better what you are working with; posting such a graph of your points here would give us helpful context for your question.
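To illustrate that point, here is a minimal Python sketch (assuming numpy and scipy are available) that fits five points like the ones described and reports the slope, intercept, and R-squared. The y values are invented for illustration; yours will differ.

```python
import numpy as np
from scipy import stats

# The five reference quantities from the original post
x = np.array([0.100, 0.200, 0.300, 0.400, 0.500])
# Hypothetical measured responses for one calibration run (invented values)
y = np.array([0.101, 0.204, 0.306, 0.411, 0.512])

fit = stats.linregress(x, y)
print(f"slope = {fit.slope:.5f}, intercept = {fit.intercept:.5f}, "
      f"R^2 = {fit.rvalue**2:.5f}")
```

An R-squared very close to 1 says the five points lie nearly on a line; it says nothing by itself about whether the line has drifted from week to week.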

The first equation you mention has a slope parameter of 0.858433. Subsequent equations you mention have slope parameters very close to 1.0000. The difference between the slope parameters for Week 1 and Week 2 is less than 1%. Is that a lot? Too much? I don't know. The question I would ask is: relative to what?
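One yardstick for "relative to what" is the uncertainty of the slope estimate itself. With only five points per run, the slope has a standard error, and a confidence interval built from it gives a sense of how much two weeks can differ by chance alone. A hedged Python sketch (y values invented):

```python
import numpy as np
from scipy import stats

# Hypothetical five-point calibration run (x from the post, y invented)
x = np.array([0.100, 0.200, 0.300, 0.400, 0.500])
y = np.array([0.102, 0.205, 0.308, 0.410, 0.513])

fit = stats.linregress(x, y)
# 95% confidence interval for the slope: slope +/- t * stderr, with n - 2 df
t = stats.t.ppf(0.975, df=len(x) - 2)
lo, hi = fit.slope - t * fit.stderr, fit.slope + t * fit.stderr
print(f"slope = {fit.slope:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```

If the Week 1 and Week 2 slopes fall inside each other's intervals, a sub-1% difference may be nothing more than noise.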

Does your machine provide some measurement function, either as a primary output or incidental to a fabrication/assembly operation? You mention weekly Calibration. Weekly calibration, as I understand the term, is excessive, but I don't know your circumstances. Maybe you mean Verification, or setting the zero-point (e.g. tare). I would not consider these the same as Calibration. But I can respond to your question as stated, if I know what you mean.

For the sake of illustration, one simple machine we can all relate to, which may be calibrated, is a Scale for weighing. It is common to check (verify) Linearity on a Scale using a series of test weights spaced across the full range of the instrument (such as you identify 0.100, 0.200, 0.300, 0.400, 0.500), because Linearity is a key characteristic of a Scale.

A Calibration or Verify operation has to have defined tolerance(s). Often you can reference the manufacturer's technical specifications for tolerance(s) on performance capability, but you and your company decide what resolution and precision your application requires. A measuring instrument is typically not adjusted as often as weekly unless a test measurement falls outside the tolerance (see Deming's Funnel Experiment on why that is so). There would be a tolerance for Repeatability of a Scale (repeated measurements), a tolerance for the Linearity aspect of a Scale (you might assess the Slope constant using Regression), and a tolerance for Stability (repeatability over time or across different operating conditions). These are all terms you use in your question, but they have different meanings. Again, I used a Scale only as an illustration so I can envision what you are asking.

Help us understand what you are asking.
 
The machine determines (quantifies) an element: we have a powder of this synthetic element that we put in at different quantities, low to high, to represent the range we find in our products.

Note: the quantities of powder are the same each time: 0.100, 0.200, 0.300, 0.400, and 0.500.

So I want to see whether the Date has a significant effect on the calibration.

Let's say I compare 3 dates.

Data all together:

Date 1: Y = -0.00120 + 1.0400 Input
Date 2: Y = -0.00060 + 1.0400 Input
Date 3: Y = -0.00160 + 1.0400 Input

Data by itself:

Date 1: Y = 0.00000 + 1.02000 Input
Date 2: Y = 0.00270 + 1.01300 Input
Date 3: Y = -0.00340 + 1.03000 Input

I'm trying to understand the difference in the equations when using all the data together versus analyzing each subset individually.

I use Minitab
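On the combined-versus-separate difference: when Minitab fits all the data together with Date as a categorical predictor and no Date*Input interaction, it forces one shared slope across all dates (which is why your "all together" equations show the same 1.0400 slope with only the intercepts differing), whereas fitting each date by itself lets every date have its own slope. A hedged Python sketch of the same contrast, using invented y values, shows why the pooled numbers cannot match any single date:

```python
import numpy as np
from scipy import stats

x = np.array([0.100, 0.200, 0.300, 0.400, 0.500])
# Hypothetical responses for three calibration dates (values invented)
runs = {
    "Date 1": np.array([0.100, 0.204, 0.306, 0.408, 0.510]),
    "Date 2": np.array([0.104, 0.205, 0.306, 0.407, 0.508]),
    "Date 3": np.array([0.097, 0.203, 0.309, 0.415, 0.521]),
}

# Separate fits: each date gets its own slope and intercept
for name, y in runs.items():
    fit = stats.linregress(x, y)
    print(f"{name}: intercept = {fit.intercept:+.5f}, slope = {fit.slope:.5f}")

# Pooled fit: all 15 points forced through ONE slope and intercept,
# so the result is a compromise across the dates
x_all = np.tile(x, 3)
y_all = np.concatenate(list(runs.values()))
pooled = stats.linregress(x_all, y_all)
print(f"Pooled: intercept = {pooled.intercept:+.5f}, slope = {pooled.slope:.5f}")
```

Because every date here uses the same x values, the pooled slope works out to the average of the three per-date slopes; whether the per-date slopes differ by more than chance is exactly the "Date effect" question, which an interaction term (Date*Input) in the combined model would test.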
 