Calling all expert statisticians 
After practising engineering / R&D for quite some years and researching this topic quite a lot (though not as much as I would have liked to; the knowledge, opinions, arguments and theories seem almost endless, and I'm not a statistician), I feel my understanding is still not good enough. What seems even worse is that my everyday engineering questions are usually phrased in terms a little different from those mathematicians and statisticians seem to favour, so I'm never 100% sure I use the right concepts / tools and apply them correctly. There is a particular type of situation that seems to recur with some variation, so if any of you could help me establish more clarity on it I'd be grateful. I've researched it quite a lot over the years but I can't achieve certainty (pun intended).
Suppose that a medical device, going through Design Verification, is regulating flow rate. It has a specified nominal rate and an asymmetrical tolerance. I only have a very limited number of units available for testing (say, up to N=10), but multiple runs on each unit are possible (though the total number of runs should also be kept low, say, max. 30). I don't know anything about the population's distribution, standard deviation, or anything else, and I wish not to make any assumptions in that regard. I also want to complete the testing in a single round, i.e. I can't rely on an iterative process. I want to be able to run the testing (e.g. obtain 10 data points) and complete the following if possible:
I don't mind if the limits are very far apart (even 0 for LL), and I understand that in order to make the estimate more accurate I might need much larger sample sizes. That's not the issue.
In simple words, I want to be able to answer the question(s) "What is the flow rate range (LL to UL) that, at a 99.9% probability (practically certainty), any unit sampled from the population will fall within (excluding outliers)?" [and the same for lower probabilities]. If I need to declare some Alpha or Beta (or make other policy/risk decisions) that's fine, but then I'd like to be able to express their meaning in similar/layman's terms, i.e. "Alpha is the probability that once in a while a unit will exhibit a flow rate outside the stated limits, while Beta is the probability that..." (I realise this might be wrong, it's just an example).
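For context, the closest candidate I've come across so far is the distribution-free (nonparametric) tolerance interval based on order statistics (Wilks, 1941): the interval from the smallest to the largest of n observations covers at least a fraction p of the population with a confidence that depends only on n and p, with no distributional assumptions at all. A minimal sketch of that confidence calculation (my own attempt, so please correct me if I've misunderstood it; the function names are mine):

```python
def two_sided_confidence(n, p):
    """Confidence that the interval [min, max] of n i.i.d. observations
    covers at least a fraction p of the population (distribution-free).
    The coverage of [x(1), x(n)] follows a Beta(n-1, 2) distribution,
    whose CDF at p is n*p**(n-1) - (n-1)*p**n, hence:"""
    return 1.0 - n * p ** (n - 1) + (n - 1) * p ** n

def min_n_for(p, confidence):
    """Smallest sample size n for which the min..max interval covers a
    fraction p of the population at the requested confidence level."""
    n = 2  # need at least two points to form an interval
    while two_sided_confidence(n, p) < confidence:
        n += 1
    return n

print(two_sided_confidence(10, 0.999))  # confidence with my n=10 at p=0.999
print(min_n_for(0.95, 0.95))            # sample size for the classic 95%/95% case
```

If I've done this right, n = 10 gives essentially zero confidence for p = 0.999, which I read as confirming that the sample sizes needed for near-certainty coverage are far beyond what I have; even the classic two-sided 95%-coverage / 95%-confidence case already needs n = 93. What I'm unsure about is whether this is the right tool for my question at all, which is why I'm asking.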
If you could describe or refer to a technique allowing the above, and maybe quote a paper or a book chapter that develops / elaborates on it, that would be great. I'd really like to be able to support any such practice with mathematically/statistically rigorous arguments (as opposed to rules of thumb / best accepted practices / empirically demonstrated results etc.), even if I won't be able to follow every little step along their development without obtaining a statistics PhD first...
If you feel that answering this question properly is something that requires a significant amount of time / research, and that payment would be required to keep it fair, I'd like to know that too.
TIA,
Ronen.