I must disagree here. The assumption of ZERO within-sample variability is not only plausible, it is required. If a sample is unstable over time, then you MUST incorporate time into your study. Suppose I were doing only a single determination of a sample: if the solution can "go off", then that will be captured in the repeatability of the method. If the repeatability of the method is not random (degradation over time), then how do you really know the true value of the sample? You do not have a measurement system unless the sample you are measuring is stable enough to be measured. Just my opinion.
Hi Statistical Steven
It all depends on where you are looking.
For day to day process measurement, I agree completely. The assumption of sample stability is required. If the sample is not stable between the time you sample it and the time you measure it, you do not have a measurement system.
However, the post to which you are referring is about carrying out an MSA study, and my mindset is about dealing with the messy real world. My assumption, based upon the posts by qusys, is that the samples used in an MSA study will be process samples, raw materials etc. These are the things that s/he mentions.
In order to collect enough samples to represent process variation, you will need to collect a reasonable number over a period of time. I don't know what period of time this will be. Perhaps a few hours, perhaps a few weeks.
If the time delay between collecting your first sample and your last sample is trivial compared to the stability of the sample, there is no problem. Let's say it takes a couple of weeks to collect the samples, but the solutions are stable enough over a period of 6-9 months. No problem.
If, on the other hand, it takes a couple of weeks to collect the samples but the solutions are only stable for a few days, you have an extra factor to take into account if you want to do an MSA study: the concentration you measured 2 weeks ago for solution #1 will not be the same concentration you measure 2 weeks later.
In addition, it can be problematic simply getting the time to do the study. Analyst #1 measures the samples 2 weeks after they have been collected. Analyst #2 measures the samples a month later; he had annual leave, you see. Analyst #3 can't carry out all of the measurements on the same day due to meetings, a process crisis and a general lack of interest.
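To make the effect concrete, here is a minimal sketch of the scenario above. Every number in it is hypothetical (the true concentration, the repeatability, the degradation rate and the measurement days are all invented for illustration): three analysts measure the same solution weeks apart, and the drift shows up as inflated measurement variance even though the measurement system itself is unchanged.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0     # hypothetical "true" concentration of solution #1
REPEAT_SD = 0.5        # hypothetical repeatability (pure analytical noise)
DRIFT_PER_DAY = 0.2    # hypothetical degradation rate of the solution

def measure(day, drift_per_day):
    """One determination on a given day: true value, minus any
    degradation to date, plus random repeatability noise."""
    return TRUE_VALUE - drift_per_day * day + random.gauss(0.0, REPEAT_SD)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Three analysts each make 5 determinations of the SAME solution, but on
# days 0, 14 and 30 (annual leave, meetings, a process crisis...).
days = [0, 14, 30]

stable   = [measure(d, 0.0)           for d in days for _ in range(5)]
drifting = [measure(d, DRIFT_PER_DAY) for d in days for _ in range(5)]

print(f"variance, stable sample:   {variance(stable):.2f}")
print(f"variance, drifting sample: {variance(drifting):.2f}")
```

With a stable sample the variance is close to the repeatability alone; with the drifting sample it is several times larger, and that extra variance would be wrongly charged to the measurement system in a %GRR calculation.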
This is the context in which I made the comment about the stability of the samples. It's a warning of something to be aware of.
If your calibration is carried out properly, then the long term drift of the instrument will be taken care of. Measuring CRMs regularly should confirm this. Again there are the assumptions about what and how this will be done. My unstated assumption is that the stability study will cover a period of time which will include the MSA study.
"So why are my MSA results so bad?" says Analyst. "The variance is huge and the % GRR is terrible."
"Ahh" says Know It All, "If you look at your stability study, you will see that your CRM results are statistically stable. But if you look at your process sample results in time order, you see that they change over a period of weeks."
"But they're the same samples! If they change, how do we know what the process is doing?" retorts Analyst.
Know It All strokes his goatee beard in a familiar, slightly condescending way. "Well..." he pauses for dramatic effect, "did you know that the assumption of stability with time is not always justified..."
Your comment is totally correct in the context of day to day measurement of process samples. But that's not what was in my mind when I replied to the post.
As I said earlier, there's no way that you can read the assumptions in my mind; I have to state them explicitly. I hope the above explanation helps.
(And not a single bracket in the whole post!!)
Oops.
NC