Too often, quality practitioners and other engineers and scientists ignore the fundamental requirements of a study design that enable the use of p-values and the null-hypothesis approach to statistical studies. At worst this renders the study's conclusions unreliable; at best it leads to confusion about what conclusions can be drawn.
Two case studies are presented that demonstrate the role of non-homogeneity in study designs. An analytic study approach is explained that makes...
There is ignoring and there is ignorance. I think a lot of people using statistics don't even know what they are.
I was also taught that some statistical tests are more robust than others, where "robust" means, as I recall, that the results are less likely to be affected by violations of the underlying assumptions.
‘Robust’ tests are only robust to deviations from Normality. No statistical or probabilistic test is robust against violations of the other, much more important, requirements.
Homogeneity (independence and replication) is paramount. It’s the study design that matters, not which option you click in your statistical software.
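The point about independence can be made concrete with a small simulation. This is an illustrative sketch, not anything from the article: it applies an ordinary one-sample t-test to data from a hypothetical AR(1) process (each observation partly carries over from the previous one, so the observations are not independent) and counts how often the test falsely "detects" a shift away from the true mean of zero. The AR(1) model and all numbers here are invented for illustration.

```python
import random
import statistics

def t_statistic(sample, mu0=0.0):
    # Ordinary one-sample t statistic against hypothesized mean mu0.
    n = len(sample)
    return (statistics.fmean(sample) - mu0) / (statistics.stdev(sample) / n ** 0.5)

def ar1_sample(n, phi, rng):
    # AR(1): each observation depends on the previous one.
    # phi = 0 gives independent draws; phi > 0 violates independence.
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def false_positive_rate(phi, trials=2000, n=50, crit=2.01):
    # Fraction of trials where |t| exceeds the two-sided 5% critical
    # value (about 2.01 for 49 degrees of freedom) even though the
    # true mean is exactly zero.
    rng = random.Random(42)
    hits = sum(abs(t_statistic(ar1_sample(n, phi, rng))) > crit
               for _ in range(trials))
    return hits / trials

# With independent data (phi = 0) the rejection rate stays near the
# nominal 5%. With autocorrelated data (phi = 0.6) the very same test,
# clicked in the very same software, rejects far too often.
rate_independent = false_positive_rate(0.0)
rate_correlated = false_positive_rate(0.6)
```

No amount of choosing a "robust" test option fixes this; the problem is in how the data were generated, i.e. the study design.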
I am teaching a stats course for Southern Illinois University at Carbondale. Of course I am covering tests of hypothesis, but with caution. Even the textbook has some sidebar discussions of certain scientific journals moving away from P-values.
Do look at the issue of Enumerative vs. Analytic studies; there is a nice video from Michael Tveite (which I am making the students watch) that goes through Dr. Deming's concerns in that regard, and the preference for Statistical Process Control.
Granted, one needs to understand the context of the data and the underlying process(es) to avoid costly misinterpretations.
I don't know what link(s) there might be between p-values, Deming, quality, and SPC. My training in p-values was as a scientist. To me it's all about generalizing results obtained from a sample to a population.
I consider Quality to be one of the best ideas that ever emerged from the human race. Sadly, its time came and went, and it never caught on. I'm only generally aware of SPC.
I've been hearing this since at least the 80s, always articulated with the exact same words..."moving away from." There is no good reason to do this. However, if the journal's readership has no understanding of p-values, nothing lost, I guess. I do understand p-values, so I would never read a journal that had "moved away from" them. Journals are bad enough as it is. Further reducing the amount of information reported in them doesn't appeal to me.
The general problem with statistics in manufacturing is captured in the well-known (but of obscure origins) quote, "He uses statistics like a drunk man uses a light pole--for support rather than illumination."
Another pervasive problem is that QA is the only discipline I know of where people of very diverse levels of education and intellectual development are expected to understand and invoke complex mathematical theory. It becomes even more absurd when the people who establish requirements for statistical analysis so often don't understand even relatively simple concepts.
QA is the only discipline I know of where people of very diverse levels of education and intellectual development are expected to understand and invoke complex mathematical theory. It becomes even more absurd when the people who establish requirements for statistical analysis so often don't understand even relatively simple concepts.
Because there are multiple levels of definitions of "science," people with very diverse levels of education and intellectual development can legitimately call themselves "scientists," and many do. Unfortunately, many of these "scientists" feel the need to conduct scientific research, even though they may lack the education and/or the intellect to do so. They are a "scientist" and therefore it seems that any research they do is, by definition, "scientific" research. In many cases, I'm not sure an understanding of statistics and experimental design is even expected.
This is a VERY general header, that I would shorthand as "statistical inference" or maybe just "inference" (depending on how you draw the generalizations).
Just as an example, you say "...to a population". À la Deming, Enumerative studies deal with an existing, finite population (e.g. a certain electorate at a given time), while Analytic studies deal with future populations/processes that can in some cases be considered infinite. There are conceptual and mathematical implications to this difference; the two should be handled differently. Otherwise one risks GIGO.
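The enumerative/analytic distinction above can be sketched with a toy simulation. Everything here is invented for illustration (the lot size, defect rates, and the drift function are all hypothetical): a random sample from an existing, finite lot does estimate that lot's defect fraction, but a sample from a drifting process says little about the process's future output, which is the "population" an analytic study actually cares about.

```python
import random

rng = random.Random(0)

# Enumerative study: an existing, finite lot. A random sample tells us
# about *this* lot, which is all we are asking about.
lot = [rng.random() < 0.03 for _ in range(10_000)]   # ~3% defective
sample = rng.sample(lot, 500)
lot_estimate = sum(sample) / len(sample)             # close to the lot's true fraction
lot_truth = sum(lot) / len(lot)

# Analytic study: a process that drifts over time. A sample taken now
# describes the process only if it is stable (homogeneous over time).
def process_defect_rate(week):
    return 0.03 + 0.01 * week                        # hypothetical upward drift

today = [rng.random() < process_defect_rate(0) for _ in range(500)]
future = [rng.random() < process_defect_rate(8) for _ in range(500)]
today_estimate = sum(today) / len(today)             # ~3%...
future_rate = sum(future) / len(future)              # ...but week 8 runs near 11%
```

This is the GIGO risk: the arithmetic of the estimate is identical in both cases, but in the analytic case the population it is supposed to generalize to does not yet exist.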
Never caught on? It's like saying that long-term thinking never caught on, or that prevention (rather than reaction) never caught on. I tend to agree that deep thinking has never been too popular, but it's always been around (scarcely) and always will be.
In that context "moving away from" doesn't mean simply dropping something without suggesting an alternative. It means that instead of hypothesis tests and p-values there are now other concepts and methods. So hopefully the amount of relevant information is not reduced (whether a given journal does a good job of that is a separate question).
Another pervasive problem is that QA is the only discipline I know of where people of very diverse levels of education and intellectual development are expected to understand and invoke complex mathematical theory.
In most cases the "QA professional" is only required to apply (or put into use) techniques that rely on complex mathematical theory. True, some understanding is required for proper selection and implementation of these techniques, but not necessarily of the complex mathematical foundations. This is just like engineers, who often successfully (and correctly) apply techniques whose mathematical derivations they are unable to fully follow; those derivations are sometimes simply too complex to master in practice, and doing so is unnecessary from an outcome perspective. The most important point is not to lose sight of the techniques' limitations and underlying assumptions. Letting that slip is of course too easy; one has to actively and stubbornly fight to maintain it, and this is where we usually fail.
Another issue is failing to recognise that "QA professional" doesn't equal "Jack of all trades". "People of very diverse levels of education and intellectual development" should not be drawing up experimental designs based on higher-than-undergrad-level statistical theory. Professional statisticians are there for that. Just like the average "QA professional" consults the plastics expert when they have issues with a plastic raw material, rather than diving into datasheets and chemical formulations. So maybe the problem is in the formal scoping (and internal classification) of the QA profession.