Which Normality Test Is More Acceptable to FDA? Also, Non-Normal Threshold?


snakepitt

We are in the process of determining sample sizes for various tests to support an FDA submission. Prior data are being analyzed for this purpose and are also being checked for normality in Minitab. Two questions:

1. Is one of the three tests (Anderson-Darling, Ryan-Joiner [similar to Shapiro-Wilk], Kolmogorov-Smirnov) more acceptable to FDA than the others? I have found some background on each that speaks to application but it did not help me. I did find a thread in the QA forum from one "Statistical Steven" several years back in which he commented that he prefers Shapiro-Wilk for sample sizes greater than ~50 and Kolmogorov-Smirnov for smaller Ns. Does anyone have additional input re FDA preference? What about A-D?

2. Between the "non-normal" thresholds of P=0.050 and P=0.100, is one more accepted than the other?

Thanks for the help.
 

Miner

Forum Moderator
Leader
Admin
My background is not FDA, but is in statistics. So I cannot advise what is acceptable to the FDA. I will however explain some of the differences between your options.


  • These articles explain the basis for each of the three tests. The Anderson-Darling and Ryan-Joiner tests are equally powerful (power = 1 - beta risk): the more powerful the test, the less likely you are to miss a truly non-normal distribution. The Kolmogorov-Smirnov test has fallen out of favor in many (though not all) circles, primarily because it is less powerful than the other two, which means it is more likely to miss a truly non-normal distribution. If I were to guess, I would say the FDA would be more likely to accept the first two tests, but as I said, I am not an FDA expert.
  • The threshold is properly called alpha. The p-value is compared to alpha to decide whether to reject the null hypothesis. Alpha is the risk you are willing to accept of incorrectly rejecting the null hypothesis. For a normality test, the null hypothesis is that the data come from a normally distributed population. An alpha of 0.10 means you are willing to take a 10% risk of saying the data did not come from a normally distributed population when, in fact, they did; an alpha of 0.05 means you are willing to take a 5% risk of the same.
This is a decision best made by determining the consequence of your making an incorrect decision. If that consequence is low, you may decide on an alpha level of 0.10. If that consequence is moderate, go with 0.05. If it is high, you may even want to go with 0.01.
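The thread works in Minitab, but the same tests and the p-value-versus-alpha decision can be sketched in Python with scipy.stats (an illustration only, not the poster's workflow; the `normality_report` helper and the simulated sample are my own). Note that scipy has Shapiro-Wilk rather than Ryan-Joiner, and its Anderson-Darling routine reports critical values instead of a p-value:

```python
import numpy as np
from scipy import stats

def normality_report(data, alpha=0.05):
    """Run three common normality tests; for all three the null hypothesis
    is that the data come from a normal population, so p < alpha (or a
    statistic above its critical value) means 'reject normality'."""
    results = {}

    # Shapiro-Wilk (a close relative of Minitab's Ryan-Joiner test).
    w_stat, p_sw = stats.shapiro(data)
    results["shapiro_wilk_p"] = p_sw

    # Kolmogorov-Smirnov against a normal fitted to the sample.
    # Caveat: estimating mu/sigma from the same data makes the standard
    # K-S p-value conservative (the Lilliefors correction addresses this).
    mu, sigma = data.mean(), data.std(ddof=1)
    results["kolmogorov_smirnov_p"] = stats.kstest(data, "norm",
                                                   args=(mu, sigma)).pvalue

    # Anderson-Darling: scipy returns the statistic plus critical values
    # at several significance levels, so pick the one matching alpha.
    ad = stats.anderson(data, dist="norm")
    idx = int(np.argmin(np.abs(ad.significance_level - alpha * 100)))
    results["anderson_darling_reject"] = bool(ad.statistic >
                                              ad.critical_values[idx])
    return results

# Simulated prior data for illustration.
rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=60)
print(normality_report(sample, alpha=0.05))
```

Each p-value would then be compared to whichever alpha (0.05 or 0.10) the risk assessment justifies.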
 

Barbara B

In addition to Miner's excellent posting: my customers in the medical device area usually work with alpha = 0.05 as a default value. This has not been questioned by the FDA in the past.

Regards,

Barbara
 

snakepitt

Thanks to both of you, Miner and Barbara B, for the detailed and rapid replies. I had run across the Ryan-Joiner paper in early searches, but the equations intimidated me out of really reading it through; this time I did.

Thinking some more about the alpha level, it makes sense that the choice of level could dovetail with one's risk assessment process: perhaps 0.100 for the more benign issues, 0.010 for critical ones at the other end of the spectrum, and 0.050 for those in between.

Thanks again.
 

Bev D

Heretical Statistician
Leader
Super Moderator
In general, each FDA statistician or statistical group will have their own ideas about best practices. I have found that establishing a relationship with your statistical group at the FDA is the best way to go. Typically (depending on where your product is in the 'food chain') the statistical group will also prefer SAS or R analysis, not Minitab or JMP. A decent 'omnibus' test for normality that is generally accepted by the FDA (and all cautions about the uniqueness of your specific situation apply) is covered in: "A Suggestion for Using Powerful and Informative Tests of Normality" by Ralph D'Agostino, Albert Belanger, and Ralph D'Agostino Jr., published as a commentary in The American Statistician, November 1990, Vol. 44, No. 4.
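For what it's worth, the omnibus test from that 1990 commentary (the D'Agostino-Pearson K² test, which combines skewness and kurtosis) is what `scipy.stats.normaltest` implements. A minimal sketch, using simulated data of my own choosing rather than anything from the thread:

```python
import numpy as np
from scipy import stats

# scipy's normaltest implements the D'Agostino-Pearson K^2 omnibus test,
# which combines sample skewness and kurtosis into one statistic.
rng = np.random.default_rng(7)

normal_sample = rng.normal(size=200)      # should look normal
skewed_sample = rng.exponential(size=200)  # strongly right-skewed

k2_norm, p_norm = stats.normaltest(normal_sample)
k2_skew, p_skew = stats.normaltest(skewed_sample)

print(f"normal sample: K2={k2_norm:.2f}, p={p_norm:.3f}")
print(f"skewed sample: K2={k2_skew:.2f}, p={p_skew:.3f}")
```

The skewed sample should be flagged as non-normal at any conventional alpha, while the normal sample typically is not.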
 

snakepitt

Thanks for the suggestions; I'll follow up and read the article.
 