erinmcclure
Hi all – I am hoping for some guidance.
Once the question of linearity has been answered, I am stumbling on how to "properly" apply the results.
We have performed Gage R&R studies on the bulk of our existing lab methods and have worked to improve processes and procedures until we were happy with the overall variation each method contains. At the time, starting with R&R seemed like the right step because we had confidence in our calibration and/or check-standard procedures, "thought" the methods we had in place were to be trusted, and figured a Gage R&R would identify any methods that were just flat-out random.
From my understanding, our current practices jibe with MSA4 in the following way:
- Precision (Gage R&R, ANOVA method – 10 appraisers, 3 reps, 10 production parts spanning the expected range; a rough sketch of the variance-component math is below, after this list)
- Stability (plotting daily check-standard results on control charts to identify in- or out-of-control status)
- Bias (calibration against standards, certified or in-house reference)
- Linearity – the big hole... we have not done anything formally until now aside from multi-point calibrations. Now that we are running these studies, I am stumbling on how to apply the results.
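For anyone who wants the detail, the precision figures above come from the usual crossed-ANOVA variance-component calculation for a balanced parts × appraisers × replicates study. The sketch below is illustrative only – the column names, the example data frame, and the "%GRR of total variation" convention are my assumptions, not a claim about any particular software:

```python
# Illustrative sketch of the Gage R&R ANOVA variance-component math
# (balanced crossed study: p parts x o appraisers x r replicates).
# Column names are made up for illustration.
import numpy as np
import pandas as pd

def gage_rr_anova(df, part="part", appraiser="appraiser", value="value"):
    p = df[part].nunique()
    o = df[appraiser].nunique()
    r = len(df) // (p * o)                      # replicates per cell (assumes balance)

    grand = df[value].mean()
    cell = df.groupby([part, appraiser])[value].mean()
    part_m = df.groupby(part)[value].mean()
    app_m = df.groupby(appraiser)[value].mean()

    # Sums of squares for the two-way crossed ANOVA with interaction
    ss_part = o * r * ((part_m - grand) ** 2).sum()
    ss_app = p * r * ((app_m - grand) ** 2).sum()
    ss_tot = ((df[value] - grand) ** 2).sum()
    ss_cell = r * ((cell - grand) ** 2).sum()
    ss_int = ss_cell - ss_part - ss_app
    ss_err = ss_tot - ss_cell

    ms_part = ss_part / (p - 1)
    ms_app = ss_app / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_err = ss_err / (p * o * (r - 1))

    # Variance components (negative estimates truncated to zero)
    v_rep = ms_err                                   # repeatability (EV)
    v_int = max((ms_int - ms_err) / r, 0.0)
    v_app = max((ms_app - ms_int) / (p * r), 0.0)    # reproducibility (AV)
    v_part = max((ms_part - ms_int) / (o * r), 0.0)

    v_grr = v_rep + v_int + v_app
    v_tot = v_grr + v_part
    return 100 * np.sqrt(v_grr / v_tot)              # %GRR of total variation
```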
As an example: on performing a linearity study on our HPLC method, we see that 3 of our 8 components do not pass the linearity test and show a non-constant, linearly increasing/decreasing bias, depending on the component. We used in-house reference standards to perform this test because certified standards are not available.
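For context, the linearity analysis I am describing is essentially a regression of bias (measured minus reference) on the reference value, with a slope distinguishable from zero indicating the non-constant bias. A rough sketch of that check for one component is below – the arrays are hypothetical placeholder numbers, not our HPLC results:

```python
# Illustrative MSA-style linearity check for a single component:
# regress bias (measured - reference) on the reference value and see
# whether the slope is distinguishable from zero.
import numpy as np
from scipy import stats

reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical standard levels (%w/v)
measured  = np.array([1.02, 2.05, 3.09, 4.15, 5.20])  # hypothetical HPLC results

bias = measured - reference
fit = stats.linregress(reference, bias)

print(f"bias = {fit.intercept:.4f} + {fit.slope:.4f} * reference")
print(f"p-value for slope: {fit.pvalue:.4f}")   # small p => non-constant (linear) bias
```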
We performed a Gage R&R study on process samples using this method and it returned R&R values ranging from 1 to 9 depending on the component. We were happy with the overall variation and deemed it a "good" method.
NOW – having done the linearity study, we have confirmed that 3 of the components are more accurate at either the high or the low concentration, but now what? We are somewhat limited on the maximum concentration we can prepare in our reference/calibration standards due to solubility and column overload – but in a perfect world your process measurements lie within your highest and lowest calibration values. We simply cannot achieve that perfection, so is it "proper" to use the linearity study to quantify the bias at various concentrations and, for those 3 components that are not linear, state a different accuracy at different concentrations if we deem them "fit for their intended purpose"?
e.g.
DP4 values > 5% w/v: ±0.07
DP4 values < 5% w/v: ±0.03
OR
Do we instead run a 5-point calibration with all 5 values in the linear portion and extrapolate beyond them? That is also not ideal, since we cannot pick a point on the high end and run a check due to solubility and column overload.
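To make the first option concrete, what I have in mind is splitting the bias data from the linearity study at a breakpoint and stating a separate figure for each band. The sketch below assumes a 5 %w/v breakpoint and a "mean bias ± 2 standard deviations" convention; all numbers and the convention itself are made-up placeholders, not our data or a recommendation:

```python
# Illustrative sketch of stating a separate accuracy figure per
# concentration band, using the bias data from the linearity study.
import numpy as np

reference = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5])       # hypothetical levels (%w/v)
bias      = np.array([0.01, 0.02, 0.02, 0.05, 0.06, 0.08])  # hypothetical bias values

for label, mask in [("< 5 %w/v", reference < 5.0), (">= 5 %w/v", reference >= 5.0)]:
    b = bias[mask]
    # one simple convention: mean bias +/- 2 standard deviations of the bias
    print(f"DP4 {label}: bias {b.mean():+.3f} +/- {2 * b.std(ddof=1):.3f}")
```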
Phew – that was a long question. Thank you in advance for the input.
Erin