#1
I am reviewing procedures at a new organization that reference accept-support testing methodologies. It appears to be based on null hypothesis testing. Here is an excerpt:

Accept-Support (AS) Testing Methodology: In AS Testing, the null hypothesis (H0) is the condition that the tester is attempting to support. For (organization), the condition is: the medical device(s) under evaluation meet(s) the defined requirements.

Accepting the null hypothesis supports that the aforementioned condition is true (synonymous with saying the device meets its requirements). Rejecting the null hypothesis affirms that the aforementioned condition is false (synonymous with saying the device does not meet its defined requirements). Type I Error (α) is the error in rejecting the null hypothesis when it is true and is known as manufacturer’s error. Type II Error (β) is the error in accepting the null hypothesis when it is false and is known as consumer’s error. Table 1 displays the AS Testing hypothesis matrix.

This verbiage contradicts everything that I understand about hypothesis testing, which it seems to be based on. You cannot prove a null hypothesis to be true; you can only gather evidence to reject it, or fail to reject it. So how could the verbiage in the procedure be correct? Is Accept-Support testing the same thing? Has anyone heard of it?
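For reference, here is a minimal sketch (entirely made-up numbers, hypothetical spec limit and data) of how I would normally frame this kind of demonstration: the null is that the device does NOT meet the requirement, and rejecting it is what supports the claim. Assumes scipy >= 1.6 for the one-sided `alternative` argument.

```python
# Sketch of the "reject-support" framing I'm used to (hypothetical numbers).
# H0: mean output force <= 5.0 N (device does NOT meet the requirement)
# H1: mean output force >  5.0 N (device DOES meet the requirement)
# Rejecting H0 is what supports the claim that the requirement is met.
import numpy as np
from scipy import stats  # needs scipy >= 1.6 for the `alternative` argument

spec_limit = 5.0                                      # hypothetical lower spec limit, N
measurements = np.array([5.6, 5.4, 5.9, 5.3, 5.7,
                         5.5, 5.8, 5.2, 5.6, 5.4])    # hypothetical test data

t_stat, p_value = stats.ttest_1samp(measurements, spec_limit, alternative='greater')

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 -> evidence the requirement is met")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0 -> insufficient evidence")
```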

A quick Google search reveals limited results, the most prominent being a page that contains a very simple definition:
In this type of statistical test, the statistical null hypothesis is the hypothesis that, if true, supports the experimenter’s theoretical hypothesis. Consequently, in AS testing, the experimenter would prefer not to obtain "statistical significance." In AS testing, accepting the null hypothesis supports the experimenter’s theoretical hypothesis.
(Google search: Accept-Support)

This definition appears to support the procedure, but it also contradicts my personal knowledge. PLEASE HELP!

Thanks!
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#3
I recommend that you read "The Null Ritual: What You Always Wanted to Know About Significance Testing but Were Afraid to Ask" (it's free).

A couple of comments:

First, I've never found the null hypothesis / alternative hypothesis thing to be particularly useful; it certainly isn't simple or clear, and it isn't necessary at all. I don't use it, nor do I teach it. I teach my organization how to determine whether a factor causes a change, or whether a design meets requirements and with how much margin. There is SO much more to a study design that will yield conclusions one can rely on than the null hypothesis.

From my understanding, the null hypothesis is generally taken to be that 'no difference exists', so in the case of determining whether requirements are met, the null hypothesis would be that the device performance is not different from the requirements. As for the idea of not proving the null hypothesis, there are two considerations to think about. First, there will be no situation where two things are exactly the same or where a thing exactly meets a goal or requirement; there will always be some difference, and the real question is whether or not that difference is of any practical importance. Second, since there is always some uncertainty, that uncertainty is codified in the avoidance of the word 'prove'; the method uses the word 'accept' instead. Of course, while this is of grave importance to some theoreticians, it is wholly useless to people trying to do work and make decisions in the practical world, and so real people will use the term 'prove'. It is true that the null hypothesis can be what the experimenter theorizes (that a difference exists); it is simply convention that the null is that no difference exists.
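To illustrate what I mean by "meets requirements and with how much margin", here is a rough sketch with made-up numbers (a hypothetical minimum requirement and hypothetical data, assuming numpy and scipy are available): no null hypothesis needed, just an estimate, an interval, and a comparison to the requirement.

```python
# Rough sketch: estimate the performance, put an interval around it, and
# compare the interval to the requirement (all numbers are made up).
import numpy as np
from scipy import stats

requirement = 5.0                                   # hypothetical minimum requirement
data = np.array([5.6, 5.4, 5.9, 5.3, 5.7, 5.5, 5.8, 5.2, 5.6, 5.4])

mean = data.mean()
sem = stats.sem(data)                               # standard error of the mean
lo, hi = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
print(f"margin to requirement at the lower CI bound: {lo - requirement:.2f}")
```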

As a side note: the whole idea of the p value and the null hypothesis is really in support of the use of small sample sizes without real replication. True confidence and 'proof' come from replication; replication is the cornerstone of all science and cannot be circumvented by the desire for a 'quick' answer.
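To make that concrete, here is a rough simulation (entirely hypothetical numbers) of how little an "accept" with a small, unreplicated sample actually tells you: a device whose true performance misses the requirement still gets "accepted" most of the time under the accept-support framing.

```python
# Simulation (hypothetical numbers): the device truly MISSES a 5.0 requirement
# (true mean 4.8), yet with n = 5 a test of H0 "meets the requirement" rarely
# rejects, so an accept-support procedure would "pass" a failing device most
# of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
requirement, true_mean, true_sd, n, alpha = 5.0, 4.8, 0.5, 5, 0.05

passes = 0
trials = 10_000
for _ in range(trials):
    sample = rng.normal(true_mean, true_sd, n)
    # H0: mean >= requirement (device meets it); reject only if clearly below
    _, p = stats.ttest_1samp(sample, requirement, alternative='less')
    if p >= alpha:           # fail to reject -> "accept" under the AS procedure
        passes += 1

print(f"Failing device 'accepted' in {passes / trials:.0%} of {trials} simulated tests")
```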
 
#4
I'd like to thank everyone for their input!

Ultimately I have reached the conclusion that the author misunderstood null hypothesis testing and the test is actually flawed. Null hypothesis testing is designed to identify whether there is sufficient evidence to reject the null hypothesis; in these cases, the null hypothesis is that there is no significant difference (or some variant). It appears the author did not understand this aspect of null hypothesis testing and decided that if you can reject the null hypothesis, why not make the null hypothesis that you "did not meet the acceptance criteria"? Then you can reject that and conclude that you have met the acceptance criteria... *sigh*
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#5
This is one of the primary problems with the whole 'null/alternative hypothesis' approach; it's just mumbo-jumbo.

I can't say they did the test incorrectly, or that they do not meet the acceptance criteria, until I understand the study design and see the data... and neither should you.
 

