Comparing Two Test Methods - Acceptable Difference(s)?

colbyclay

Hi all,
I am looking to change a raw material test method from using ICP to a titration. I had my appraiser run three trials each on ten samples. The results I am comparing against have only one trial per sample (I have no control over the data from their lab). I have two questions:

1. What would be the best statistical tool to compare the two methods, and do I have enough data for the results to be valid?

2. What would an acceptable difference be between ICP or analytical instrumentation and a titration method?

Thanks
Colby
 

Statistical Steven

Statistician
Leader
Super Moderator

Colby

Changing analytical methods involves more than just comparing the two methods for accuracy. I would recommend you look at ICH Q2(R1) for some guidance on method validation.

Given the data you have collected, with a single analyst doing 3 runs of 10 samples on titration and another analyst (in another lab) doing a single run on the same 10 samples, you are restricted to a general nested ANOVA model where run is nested in method. Keep in mind that the difference between methods is confounded with analyst and lab.
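If it helps, here is a rough sketch of how that kind of nested model could be fit in Python with statsmodels. The file name and the column names (result, sample, method, run) are placeholders I made up, not anything from your data:

```python
# Rough sketch of a nested ANOVA with run nested within method.
# Assumes long-format data with made-up columns: result, sample, method, run.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("method_comparison.csv")  # hypothetical file: 10 samples x (3 titration runs + 1 ICP run)

# C(method):C(run) treats run as nested within method (run labels only have
# meaning inside their own method). Note the ICP arm has a single run, so its
# run-to-run variation cannot be estimated from these data.
model = smf.ols("result ~ C(sample) + C(method) + C(method):C(run)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```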

As far as an acceptable difference, what is the specification on the raw material impurities you are measuring? I would usually consider no more than a 10% bias acceptable, but that depends on the requirements of the test.
 

Bev D

Heretical Statistician
Leader
Super Moderator
The test you are performing is formally known as a "method comparison". A method that will help is the "Bland-Altman" approach. It is a standard for laboratory-type measurements, but I use it for all (continuous data) measurement system comparisons. A quick Google search will give you plenty of free and accurate resources on how to perform the test.
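As a rough illustration of the core calculation (with made-up numbers, not your data): the per-sample differences, their mean (the bias) and the limits of agreement are all you need.

```python
# Minimal Bland-Altman sketch using hypothetical paired results on 10 samples.
import numpy as np

icp = np.array([1.02, 0.98, 1.10, 0.95, 1.05, 1.00, 0.97, 1.08, 1.03, 0.99])
titration = np.array([1.05, 1.00, 1.08, 0.97, 1.09, 1.02, 0.95, 1.11, 1.06, 1.01])

diff = titration - icp             # per-sample differences
mean_pair = (titration + icp) / 2  # per-sample averages (x-axis of the plot)

bias = diff.mean()                 # average difference between the two methods
sd = diff.std(ddof=1)              # standard deviation of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.3f}, limits of agreement = ({loa_low:.3f}, {loa_high:.3f})")
# The Bland-Altman plot is simply diff vs. mean_pair with horizontal lines
# at the bias and at the two limits of agreement.
```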

Additionally, I have posted an article on Measurement Systems V&V that includes the Bland-Altman and other approaches. You can access the article here.

You must determine how much difference is acceptable. Typically this is derived from knowing how well each method determines truth. Since no method is exactly the same, you need to determine how much false acceptance and false rejection you can tolerate in the new method vs. the old. (Again, the approach for doing this is in the article linked above.)
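One way to get a feel for those trade-offs is a quick simulation: assume a distribution for the true values and a measurement error for the method, then count how often out-of-spec material passes and in-spec material fails. Everything in this sketch (spec limits, process spread, error SD) is a made-up assumption, not something from your process:

```python
# Monte Carlo sketch of false-acceptance / false-rejection rates for a method,
# using assumed (made-up) process and measurement-error parameters.
import numpy as np

rng = np.random.default_rng(0)

spec_low, spec_high = 0.90, 1.10                 # hypothetical spec limits
true = rng.normal(1.00, 0.05, 100_000)           # assumed distribution of true values
measured = true + rng.normal(0, 0.02, true.size) # assumed zero bias, error SD = 0.02

in_spec_true = (true >= spec_low) & (true <= spec_high)
in_spec_meas = (measured >= spec_low) & (measured <= spec_high)

false_accept = np.mean(~in_spec_true & in_spec_meas)  # bad material passed
false_reject = np.mean(in_spec_true & ~in_spec_meas)  # good material failed

print(f"false accept = {false_accept:.3%}, false reject = {false_reject:.3%}")
```

Running the same sketch with each method's estimated bias and error SD lets you compare the old and new methods on the terms that actually matter to you.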

You probably don't have enough data. Despite the guidance provided by the popular GR&R tools, 3 replicates are not necessary; two are sufficient. BUT the number of samples should be greater than 10 - try to get to 30 (the same number of readings as 10 pieces done 3 times, but much greater effectiveness).
I wouldn't worry about the original method having only one reading at this point, unless the Bland-Altman fails; then you may need to investigate the repeatability of the original method.

If you post your data we can provide further help...
 