Correlating Maximum 'Range' of Values for a given "RSD" (Control Limits)

v9991

Trusted Information Resource
For setting a control limit on relative standard deviation (RSD),

we want to evaluate the maximum possible range of values that can result from a given RSD (viz., RSD of 2% vs. 3% vs. 6%).

Request help in calculating the 'max possible range' for RSDs of 2%, 3%, and 6%.
 

Bev D

Heretical Statistician
Leader
Super Moderator
By RSD do you really mean what is commonly referred to as CV?
CV is the coefficient of variation = SD/Average.

If not, can you please clarify?

In any case, wouldn't you just plot the average and the standard deviation on a traditional Xbar-S chart? In my experience, the closer you are to the true data, the better the insight. The more you manipulate (transform) the data, the less insight you get...
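(For anyone who wants to try this suggestion, here is a minimal Python sketch of Xbar-S control limits. The measurement data are invented for illustration, and the A3, B3, and B4 constants are the standard SPC table values for subgroups of size 5.)

```python
import numpy as np

# Hypothetical measurements: 20 subgroups of 5 readings each (invented data).
rng = np.random.default_rng(0)
subgroups = rng.normal(loc=100.0, scale=2.0, size=(20, 5))

xbar = subgroups.mean(axis=1)       # subgroup averages
s = subgroups.std(axis=1, ddof=1)   # subgroup sample standard deviations

xbarbar = xbar.mean()  # grand average (centerline of the Xbar chart)
sbar = s.mean()        # average SD (centerline of the S chart)

# Standard SPC constants for subgroup size n = 5.
A3, B3, B4 = 1.427, 0.0, 2.089

print(f"Xbar chart: CL={xbarbar:.2f}, UCL={xbarbar + A3*sbar:.2f}, LCL={xbarbar - A3*sbar:.2f}")
print(f"S chart:    CL={sbar:.2f}, UCL={B4*sbar:.2f}, LCL={B3*sbar:.2f}")
```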
 

v9991

Trusted Information Resource
Correlating Maximum 'Range' of Values for a given "RSD" (Relative Standard Deviation)

Yes, I meant RSD, i.e., the coefficient of variation. (I should have expanded the acronym...)

In our case, we are trying to set an RSD limit for setting up a machine;

we want to understand the maximum range of values that can result from a given RSD.
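One way to attack this is with the algebraic bound on the range: for n values with sample standard deviation s, the range R can be at most s·√(2(n − 1)), attained when one value sits at each extreme and the remaining n − 2 sit at the mean. Since RSD = s/mean, the maximum range scales directly with the RSD. A minimal Python sketch, assuming RSD is the sample CV and using invented values of n = 10 and mean = 100:

```python
import math

def max_range(rsd: float, mean: float, n: int) -> float:
    """Largest possible range of n values whose sample CV equals rsd.

    Uses the tight bound R <= s * sqrt(2 * (n - 1)), attained when one
    value sits at each extreme and the rest sit at the mean.
    """
    s = rsd * mean  # back out the standard deviation from the RSD
    return s * math.sqrt(2 * (n - 1))

mean, n = 100.0, 10  # illustrative nominal mean and sample size
for rsd in (0.02, 0.03, 0.06):
    r = max_range(rsd, mean, n)
    print(f"RSD {rsd:.0%}: max possible range = {r:.1f} units ({r / mean:.1%} of the mean)")
```

Note that the bound grows with √n, so a 'max range' limit derived from an RSD limit only makes sense for a fixed sample size.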


I also noted a useful article on RSD.
 

v9991

Trusted Information Resource
1010 ANALYTICAL DATA—INTERPRETATION AND TREATMENT
Determination of Sample Size
Sample size determination is based on the comparison of the accuracy and precision of the two methods and is similar to that for testing hypotheses about average differences in the former case and variance ratios in the latter case, but the meaning of some of the input is different. The first component to be specified is δ, the largest acceptable difference between the two methods that, if achieved, still leads to the conclusion of equivalence. That is, if the two methods differ by no more than δ, they are considered acceptably similar. The comparison can be two-sided as just expressed, considering a difference of δ in either direction, as would be used when comparing means. Alternatively, it can be one-sided as in the case of comparing variances where a decrease in variability is acceptable and equivalency is concluded if the ratio of the variances (new/current, as a proportion) is not more than 1.0 + δ. A researcher will need to state δ based on knowledge of the current method and/or its use, or it may be calculated. One consideration, when there are specifications to satisfy, is that the new method should not differ by so much from the current method as to risk generating out-of-specification results. One then chooses δ to have a low likelihood of this happening by, for example, comparing the distribution of data for the current method to the specification limits. This could be done graphically or by using a tolerance interval, an example of which is given in Appendix E. In general, the choice for δ must depend on the scientific requirements of the laboratory.
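The tolerance-interval comparison mentioned above is easy to sketch numerically. The example below uses Howe's approximation for a two-sided normal tolerance-interval factor (the chapter's own worked example is in its Appendix E, which may use exact tabulated factors instead); the sample statistics and the 95%-confidence/99%-coverage choices are invented for illustration.

```python
import math
from scipy.stats import norm, chi2

def k_two_sided(n: int, coverage: float = 0.99, confidence: float = 0.95) -> float:
    """Approximate two-sided tolerance-interval factor (Howe's method)."""
    z = norm.ppf((1 + coverage) / 2)   # normal quantile for the coverage
    df = n - 1
    c = chi2.ppf(1 - confidence, df)   # lower chi-square quantile
    return z * math.sqrt(df * (1 + 1 / n) / c)

# Hypothetical current-method results: mean 98.2, SD 1.1, from n = 30 runs.
n, xbar, s = 30, 98.2, 1.1
k = k_two_sided(n)
lo, hi = xbar - k * s, xbar + k * s
print(f"k = {k:.3f}; tolerance interval = ({lo:.2f}, {hi:.2f})")
# Comparing (lo, hi) against the specification limits shows how much room
# is left when choosing delta.
```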

The next two components relate to the probability of error. The data could lead to a conclusion of similarity when the methods are unacceptably different (as defined by δ). This is called a false positive or Type I error. The error could also be in the other direction; that is, the methods could be similar, but the data do not permit that conclusion. This is a false negative or Type II error. With statistical methods, it is not possible to completely eliminate the possibility of either error. However, by choosing the sample size appropriately, the probability of each of these errors can be made acceptably small. The acceptable maximum probability of a Type I error is commonly denoted as α and is commonly taken as 5%, but may be chosen differently. The desired maximum probability of a Type II error is commonly denoted by β. Often, β is specified indirectly by choosing a desired level of 1 − β, which is called the "power" of the test. In the context of equivalency testing, power is the probability of correctly concluding that two methods are equivalent. Power is commonly taken to be 80% or 90% (corresponding to a β of 20% or 10%), though other values may be chosen. The protocol for the experiment should specify δ, α, and power. The sample size will depend on all of these components. An example is given in Appendix E. Although Appendix E determines only a single value, it is often useful to determine a table of sample sizes corresponding to different choices of δ, α, and power. Such a table often allows for a more informed choice of sample size to better balance the competing priorities of resources and risks (false negative and false positive conclusions).

Sample Size
Formulas are available that can be used for a specified δ, under the assumption that the population variances are known and equal, to calculate the number of samples required to be tested per method, n. The level of confidence and power must also be specified. [NOTE—Power refers to the probability of correctly concluding that two identical methods are equivalent.] For example, if δ = 4.7, and the two population variances are assumed to equal 4.0, then, for a 5% level test and 80% power (with associated z-values of 1.645 and 1.282, respectively), the sample size is approximated by the following formula:
n = 2(z_α + z_{β/2})² σ² / δ²

n = 2(1.645 + 1.282)² (4.0) / (4.7)² ≈ 3.1, which rounds up to 4
Thus, assuming each method has a population variance, σ², of 4.0, the number of samples, n, required to conclude with 80% probability that the two methods are equivalent (90% confidence interval for the difference in the true means falls between –4.7 and +4.7) when in fact they are identical (the true mean difference is zero) is 4. Because the normal distribution was used in the above formula, 4 is actually a lower bound on the needed sample size. If feasible, one might want to use a larger sample size. Values for z for common confidence levels are presented in Table 8. The formula above makes three assumptions: 1) the variance used in the sample size calculation is based on a sufficiently large amount of prior data to be treated as known; 2) the prior known variance will be used in the analysis of the new experiment, or the sample size for the new experiment is sufficiently large so that the normal distribution is a good approximation to the t distribution; and 3) the laboratory is confident that there is no actual difference in the means, the most optimistic case. It is not common for all three of these assumptions to hold. The formula above should be treated most often as an initial approximation. Deviations from the three assumptions will lead to a larger required sample size. In general, we recommend seeking assistance from someone familiar with the necessary methods.
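The calculation above is straightforward to script, and the same function can generate the table of sample sizes the chapter recommends considering. A minimal sketch (not USP-provided code) re-implementing the formula just shown; note the ceiling, since n must be an integer, and that z_{β/2} appears because the equivalence interval is two-sided:

```python
import math
from scipy.stats import norm

def n_equivalence(delta: float, sigma2: float, alpha: float = 0.05,
                  power: float = 0.80) -> int:
    """Lower-bound sample size per method for a two-sided equivalence test."""
    z_alpha = norm.ppf(1 - alpha)            # 1.645 for alpha = 5%
    z_beta2 = norm.ppf(1 - (1 - power) / 2)  # 1.282 for 80% power
    return math.ceil(2 * sigma2 * (z_alpha + z_beta2) ** 2 / delta ** 2)

print(n_equivalence(delta=4.7, sigma2=4.0))  # reproduces the chapter's n = 4

# A small table across competing choices of delta and power.
for delta in (3.0, 4.7, 6.0):
    for power in (0.80, 0.90):
        n = n_equivalence(delta, sigma2=4.0, power=power)
        print(f"delta = {delta}, power = {power:.0%}: n = {n}")
```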
When a log transformation is required to achieve normality, the sample size formula needs to be slightly adjusted as shown below. Instead of formulating the problem in terms of the population variance and the largest acceptable difference, δ, between the two methods, it now is formulated in terms of the population RSD and the largest acceptable proportional difference between the two methods.
n = 2(z_α + z_{β/2})² (σ*)² / (δ*)²

where

σ* = √(ln(RSD² + 1))

δ* = ln(1 + δ)
and δ represents the largest acceptable proportional difference between the two methods ((alternative − current)/current), and the population RSDs are assumed known and equal.
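Under the same assumptions as the sketch above, and using the log-scale formula as reconstructed here, the sample size can be computed directly from an RSD, which connects back to the 2%/3%/6% values in this thread. The 10% acceptable proportional difference is an invented example value.

```python
import math
from scipy.stats import norm

def n_equivalence_log(rsd: float, prop_delta: float, alpha: float = 0.05,
                      power: float = 0.80) -> int:
    """Sample size per method on the log scale, given a population RSD."""
    sigma_star = math.sqrt(math.log(rsd ** 2 + 1))  # SD on the log scale
    delta_star = math.log(1 + prop_delta)           # acceptable log difference
    z_alpha = norm.ppf(1 - alpha)
    z_beta2 = norm.ppf(1 - (1 - power) / 2)
    return math.ceil(2 * (sigma_star * (z_alpha + z_beta2) / delta_star) ** 2)

# Hypothetical: tolerate a 10% proportional difference between methods.
for rsd in (0.02, 0.03, 0.06):
    print(f"RSD {rsd:.0%}: n = {n_equivalence_log(rsd, prop_delta=0.10)}")
```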
 