
A Rational Basis for Design Verification

Ronen E

Problem Solver
Moderator
Ronen E submitted a new resource:

A Rational Basis for Design Verification - A method and a rationalisation (including statistical) for medical devices design verification

The purpose of this document is to consolidate a thought process, online research results and some insights, and to provide a set of rationales and a method for use in medical device design verification.

I researched the topic and prepared this summary with specific focus on medical devices. This field has unique characteristics, and thus my definitions, assumptions, insights and methods are not necessarily transferable to any other field. This summary is intended to be useful...

Read more about this resource...
 

Ronen E

Problem Solver
Moderator
Thank you to everyone who has shown interest and downloaded my article.

If any of you would like to share feedback, I would highly appreciate it. If you disagree with something, if anything is unclear or wrong in your opinion, or if you'd like anything else included, I'd like to know about it. I want to make this article as accessible and as useful as I can.

Thanks,
Ronen.
 

James

Involved In Discussions

Hi Ronen

I really like your paper. However, I'm wondering how the principle may apply where there can be no assumption of homogeneity? For example, we make items to a prescription; they conform to specifications detailed in a technical file and are made in line with a procedure/work instruction. They all have homogeneous characteristics (the same materials are used and the procedure for making them is the same), but they will not conform *exactly*, because dimensions are set to suit each patient's requirements. Any views?

Cheers

James
 

Watchcat

Trusted Information Resource
I will let Ronen E speak for himself, and am the first to acknowledge that he's the expert on this subject, not me. At the same time, I know a bit about "homogeneity", having spent several years of my youth learning how to pronounce it, along with homogeneous, heterogeneity, and heterogeneous, all without so much as having to take a breath first. :)

So I would say that "homogeneous" is not something found in nature, but a term to be defined. It can mean "conforms to specifications" or "conforms to procedure/work instructions". How much the specific items actually vary from each other is where you find the statistical devil in the details. Your materials are not actually "the same" from device to device, nor is the extent to which they meet specifications. That's what "tolerances" are all about.

All yours, Ronen E.
 

Ronen E

Problem Solver
Moderator
Hi James,

I'm glad you like the paper.

The concept of Homogeneity in this context comes from the world of SPC, i.e. it relates to serial production. When it comes to individual "unique" products, there is little utility in, or need for, statistical handling. I would say that in your case you should verify only those aspects of the device that are not unique (i.e. not affected by patient-specific customisation) based on the methodology in my article. All other aspects that require verification should be verified 100%, i.e. checked (again, not necessarily tested) in each and every unique device unit, and documented in its production record. In the case of custom/customised devices, the distinction between design verification and production verification (you may call the latter "production release", "QC" etc.) is somewhat redundant, because in a way one "designs" and makes just one unit of that "model".

I admit I wrote the article with serial production in mind, as I hardly deal with custom/customised devices. However, I think that some of the concepts and methods may still be useful for custom/customised devices.
Watchcat, thank you.

I think that in the current context Homogeneity is well-defined (refer to Dr. Wheeler's articles, for example). To be precise, my article is mostly concerned with whether the subject samples come from a Homogeneous Process or not, and it also prescribes a method for determining that. I tried to be as unambiguous as I could in doing so, and I estimate that in the large majority of cases the steps I listed will suffice. However, I acknowledge that in complicated or borderline cases the prescribed instructions will call for more judgement calls and analysis acumen. I tried to point in one possible direction for when that happens, but it's difficult to prescribe precise and detailed instructions covering every possible anomaly and scenario. I also think that even just realising that one is in that grey area is valuable, because it means one is already off the highway, i.e. the process is not as robust as it might or should be, and it may be time to stop, think, and perhaps go a few steps back in the D&D process.
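
For illustration, here is a minimal sketch (Python; the measurement data are hypothetical) of the kind of homogeneity check Wheeler describes, using the natural process limits of an XmR (individuals and moving range) chart:

```python
import numpy as np

def xmr_homogeneity_check(values):
    """Wheeler-style homogeneity check: compare individual values and
    moving ranges against XmR natural process limits."""
    x = np.asarray(values, dtype=float)
    mr = np.abs(np.diff(x))          # moving ranges of consecutive values
    x_bar, mr_bar = x.mean(), mr.mean()
    # Natural process limits for individuals: average +/- 2.66 * mean moving range
    lnpl, unpl = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
    # Upper limit for the moving-range chart: 3.268 * mean moving range
    url = 3.268 * mr_bar
    homogeneous = np.all((x >= lnpl) & (x <= unpl)) and np.all(mr <= url)
    return bool(homogeneous), (lnpl, unpl)

# Hypothetical verification sample of 10 measurements
ok, limits = xmr_homogeneity_check(
    [10.2, 10.4, 10.1, 10.3, 10.2, 10.5, 10.3, 10.2, 10.4, 10.3])
print(f"homogeneous: {ok}, natural process limits: {limits}")
```

If any individual value or moving range falls outside these limits, the process is giving evidence of non-homogeneity, and that is the point to stop and investigate rather than proceed with the statistics.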
 

Semoi

Involved In Discussions
Thank you very much for this summary of your thoughts. Since I have written a summary about tolerance intervals myself, I compared yours with mine. Here are some comments:

1. The "Odeh & Owen table" for the k-factor is only applicable, if we possess a two-sided specification. For a one-sided specification limit we need to use different k-factors.
2. Tolerance intervals use a "frequentist" (i.e. non-Bayesian) interpretation of "probability". This is why statements such as
  • "There's a 90% probability that at least 95% of the population falls within [Sample Average ± 3.1 * (Sample StdDev)], and thus within the Specification Limits (the Design Input requirement)"
are difficult to understand. It is much simpler to state
  • "We are 90% confident that at least 95% of the population falls within [...] specification."
This statement is (a) mathematically exact and (b) uses the wording from your table.
3. Wheeler's statement about the use of normal-theory k-factors for non-normal distributions is just wrong. Although it is true that the normal distribution possesses the largest entropy (under commonly accepted assumptions), the claim no longer holds if we allow for further conditions (such as considering only 90% of the population). To convince ourselves, we just have to compare the tolerance intervals for a given {confidence, coverage} pair for (i) the normal distribution and (ii) the distribution-free case based on the binomial distribution, derived by Wilks (see the second sketch at the end of this post). Although I doubt that many auditors would catch this mistake, I would not place a wrong statistical statement in a verification document.
4. You included the following sample size note:
  • If, for example, one operator measured 5 parts, twice each, 10 datapoints are available for analysis as above (N=10).
To me this statement seems out of place. If we try to demonstrate (in OQ or PQ) that we are 90% confident that 95% of our products are within specification, it is not enough to take n=5 parts, measure each part r=2 times, calculate the k-factor for the N=n*r=10 measurements, and check whether it exceeds k=3.026. The repeated measurements are not independent, while independence is an assumption behind the tolerance interval. To use the critical k-factor k=3.026 for the (gamma=90%, P=95%, N=10) tolerance interval, we have to measure ten independent parts.
However, if our measurement uncertainty is large ("bad gauge"), it is mathematically acceptable to do the following (see the third sketch at the end of this post):
i) take n=5 parts,
ii) measure each part r=2 times,
iii) calculate the average value for each part, {ybar_1, ..., ybar_5},
iv) calculate the k-factor of these five averages, and
v) accept the verification if k >= 4.142.
Although this is mathematically correct, auditors won't like the extra averaging step. Thus, I would use this procedure only if it is hard/expensive to produce additional parts.
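
Regarding points 1 and 2, here is a minimal sketch (Python with scipy; the sample sizes are only examples) of how the two kinds of k-factors can be computed: the one-sided factor exactly via the noncentral t distribution, and the two-sided factor via Howe's approximation, which lands close to the Odeh & Owen table values:

```python
import numpy as np
from scipy import stats

def k_one_sided(n, coverage=0.95, confidence=0.90):
    """Exact one-sided tolerance factor (noncentral t distribution)."""
    delta = stats.norm.ppf(coverage) * np.sqrt(n)      # noncentrality parameter
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

def k_two_sided(n, coverage=0.95, confidence=0.90):
    """Approximate two-sided tolerance factor (Howe's method)."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)
    return np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)

print(round(k_two_sided(10), 3))   # ~3.02, close to the exact table value 3.026
print(round(k_two_sided(5), 3))    # ~4.16, close to the exact table value 4.142
print(round(k_one_sided(10), 3))   # ~2.35: one-sided factors are clearly smaller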
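Regarding point 3, the Wilks result can be checked directly: for any continuous distribution, the fraction of the population covered by [min, max] of n samples follows a Beta(n-1, 2) distribution. A minimal sketch:

```python
from scipy import stats

def wilks_two_sided_confidence(n, coverage=0.95):
    """Confidence that [min, max] of n samples from ANY continuous
    distribution covers at least `coverage` of the population (Wilks).
    The covered fraction follows a Beta(n-1, 2) distribution."""
    return 1 - stats.beta.cdf(coverage, n - 1, 2)

# Smallest n giving 90% confidence of 95% coverage, distribution-free:
n = 2
while wilks_two_sided_confidence(n) < 0.90:
    n += 1
print(n)  # 77 parts -- far more than the ~10 used with a normal-theory k-factor
```

Since 77 is much larger than 10, a normal-theory k-factor cannot be conservative for arbitrary distributions, which is exactly the problem with Wheeler's claim.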
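And for point 4, a minimal sketch of steps i) to v); the readings and specification limits are hypothetical:

```python
import numpy as np

# Hypothetical readings: n=5 parts, r=2 repeated measurements each ("bad gauge")
readings = np.array([
    [10.21, 10.19],
    [10.32, 10.28],
    [10.25, 10.27],
    [10.18, 10.22],
    [10.30, 10.26],
])
lsl, usl = 9.5, 11.0                      # hypothetical specification limits

part_means = readings.mean(axis=1)        # step iii: one average per part
ybar = part_means.mean()
s = part_means.std(ddof=1)

# Step iv: the observed k is the margin to the nearer specification limit,
# in units of the standard deviation of the part averages
k_observed = min(usl - ybar, ybar - lsl) / s

# Step v: compare against the critical two-sided factor for N=5 (90%/95%)
print(f"k = {k_observed:.2f}, accept = {k_observed >= 4.142}")
```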
 