Risk-based approach to Test Method Validation for Design Verification?

MajkKruz

Registered
Short version of the question is: Can one use a risk-based approach to deciding whether a Design Verification test needs to be validated or not?

Some (hypothetical) context: When faced with a multitude of Design Input Requirements for a complex medical device, and assuming one has to develop physical measurement tests for most of them, the additional effort to validate every single test method can be very time-consuming. It is also a task that may add no value when seen from a Risk of Patient Harm perspective: if the Requirement is not a Risk Control Measure and is not linked to a Risk/Hazardous Situation, does gaging/measuring the variance of the test method via test method validation add much value? Could one, under the FDA QSR, use a risk-based rationale stating that the test method does not need to be validated because there is no added risk to the patient/user, or something similar?
 

Tidge

Trusted Information Resource
Short-ish answer: Test methods require validation; the alternative is having no basis for believing that the result of the method is correct. As an aside: test instruments require traceable calibration.

Many test methods have been validated by third parties, or by the totality of human experience (examples include using rigid evenly-marked scales to measure linear distances, clocks to measure time).

In the realm of medical devices: If you are truly going to argue that you don't need a validated test method, I recommend removing all references to the risk control... and go ahead and save money (in the design and testing) by removing the physical implementation of the risk control as well. You are essentially arguing that there is no value to this risk control, so why even include it?
 

Enghabashy

Quite Involved in Discussions
Verification and validation are basic requirements of the management system. The risk assessment could be amended to reference the relevant documents/records/form codes and their versions accordingly.
 

MajkKruz

Registered
Short-ish answer: Test methods require validation; the alternative is having no basis for believing that the result of the method is correct. As an aside: test instruments require traceable calibration.

Many test methods have been validated by third parties, or by the totality of human experience (examples include using rigid evenly-marked scales to measure linear distances, clocks to measure time).

In the realm of medical devices: If you are truly going to argue that you don't need a validated test method, I recommend removing all references to the risk control... and go ahead and save money (in the design and testing) by removing the physical implementation of the risk control as well. You are essentially arguing that there is no value to this risk control, so why even include it?

The way I described the situation made for a very binary answer, which you gave and with which I agree (English is my second language). My intention was to ask for opinions on whether the amount of effort needed to make test methods valid for use (while still respecting the risk control assessment done on the requirements being verified) could be decided with a risk-based approach. Could one, for example, argue that because the requirement has no risk, or only a low-severity risk, linked to it, the test method could be made valid for use via a less time-consuming approach, such as peer/expert review, and/or by rationalising that the variance of the method is acceptable because a worst-case approach is taken, etc.?

I agree that validation of test methods used for design verification is always needed, but I am trying to find a defensible level of design control that corresponds to the results of the risk control assessment. Design input requirements that have no risks linked to them and are not risk control measures should, in my opinion, not require a large effort to make the test method(s) valid for use.
 

Tidge

Trusted Information Resource
I agree that validation of test methods used for design verification is always needed, but I am trying to find a defensible level of design control that corresponds to the results of the risk control assessment. Design input requirements that have no risks linked to them and are not risk control measures should, in my opinion, not require a large effort to make the test method(s) valid for use.

In the case you describe, generally: validating a test method is necessary to the extent that the uncertainty in the test method is only a small part of the uncertainty in the result of the measurement made using the test method (here, during design verification). The uncertainty (in the "state-of-knowledge") of both the method and a result of using the method can be established via study design; usually this is done by choosing an allowable Type I error (the probability of rejecting a null hypothesis when the hypothesis is true) and an appropriate Power (the complement of the Type II error; a measure of the probability of correctly rejecting the null hypothesis when the alternate hypothesis is true).
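For illustration only, here is a minimal sketch (in Python) of how the Type I error / Power choice translates into a sample size for such a study. It assumes a one-sided, one-sample z-test with a known (or well-estimated) standard deviation; the function name and the numbers are invented for this example, not taken from any standard or from the posts above.

```python
import math
from scipy.stats import norm

def sample_size_one_sided_z(alpha: float, power: float, delta: float, sigma: float) -> int:
    """Approximate sample size for a one-sided, one-sample z-test.

    alpha : allowable Type I error (probability of rejecting a true null hypothesis)
    power : 1 - Type II error (probability of detecting a real shift of size `delta`)
    delta : smallest shift in the measured characteristic we care about detecting
    sigma : standard deviation of the measurement (test-method variation included)
    """
    z_alpha = norm.ppf(1 - alpha)   # critical value for the chosen Type I error
    z_beta = norm.ppf(power)        # quantile corresponding to the desired power
    n = ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical numbers for illustration only: detect a 0.5 mm shift when the
# measurement standard deviation is 0.8 mm, with 5% Type I error and 90% power.
print(sample_size_one_sided_z(alpha=0.05, power=0.90, delta=0.5, sigma=0.8))  # ~22
```

The point of the sketch is only that tightening alpha or raising the required Power drives the sample size up, and a noisier (less validated) method, i.e. a larger sigma relative to delta, does the same.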

Typically, test methods are validated specifically so that folks don't have to worry about the uncertainty in the test method (in part because validation confirms that the assumptions inherent in the method are adhered to).

If this seems too complicated, or like too much work, because someone has already made a decision that "it's just not that important to validate a test method", I can offer two suggestions.
  1. Identify and make verifiable measurements. For many folks, just saying "I chose not to validate ____, so I 100% verified _____ instead" is enough to avoid uncomfortable questions of test method validation.
  2. Alternatively, make a statistically valid (but sometimes indistinguishable from "hand-waving") argument about the actual hypothesis you are testing, while also making a valid counter-argument involving obvious boundaries (or negative cases) of the method (which presumably would demonstrate that you are not just "walking the happy path"). Also make it clear that any uncertainties from this combined approach are appropriate for the level of risk being considered.
The second point is a subtle one. There are large areas of medical device design verification that (practically) do exactly this, yet may not be cognizant of any statistical foundation for it. "Software validation", for example, will suggest negative and boundary tests (often as "best practice" or "common sense") without recognizing that the (well-justified) practice can be explained by hypothesis testing.
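To make the "boundary and negative tests" idea concrete, here is a small hypothetical sketch in the style of a pytest suite. The function is_within_spec and the spec limits are made up purely for illustration; the point is simply that the happy path, the boundaries, and the negative cases each get an explicit check.

```python
# Hypothetical example: boundary and negative tests for a made-up acceptance
# check used during design verification. The limits are invented for illustration.
LOWER_LIMIT_MM = 9.5
UPPER_LIMIT_MM = 10.5

def is_within_spec(measured_mm: float) -> bool:
    """Return True when the measured dimension falls inside the spec limits."""
    return LOWER_LIMIT_MM <= measured_mm <= UPPER_LIMIT_MM

def test_happy_path():
    assert is_within_spec(10.0)            # nominal value passes

def test_boundaries():
    assert is_within_spec(LOWER_LIMIT_MM)  # exercise both spec limits explicitly
    assert is_within_spec(UPPER_LIMIT_MM)

def test_negative_cases():
    assert not is_within_spec(9.49)        # just below the lower limit fails
    assert not is_within_spec(10.51)       # just above the upper limit fails
```

Run with pytest; the boundary and negative cases are the direct evidence that the method is being exercised outside the happy path.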
 

Orca1

Involved In Discussions
Short version of the question is: Can one use a risk-based approach to deciding whether a Design Verification test needs to be validated or not?

Some (hypothetical) context: When faced with a multitude of Design Input Requirements for a complex medical device, and assuming one has to develop physical measurement tests for most of them, the additional effort to validate every single test method can be very time-consuming. It is also a task that may add no value when seen from a Risk of Patient Harm perspective: if the Requirement is not a Risk Control Measure and is not linked to a Risk/Hazardous Situation, does gaging/measuring the variance of the test method via test method validation add much value? Could one, under the FDA QSR, use a risk-based rationale stating that the test method does not need to be validated because there is no added risk to the patient/user, or something similar?

When dealing with complex medical devices and their design input requirements, it is true that validating all tests can be time-consuming. However, according to the FDA QSR, design validation is an essential part of the design control process, and it should be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents (61 FR 52602 (V)(C)(83)). Design validation should follow successful design verification, and it cannot be substituted by design verification (61 FR 52602 (V)(C)(81)).

For design changes, the FDA agrees that validation is not always necessary, and verification can be used where appropriate (61 FR 52602 (V)(C)(89)). However, when a design change cannot be verified by subsequent inspection and test, it must be validated (FDA QSIT, Guide to Inspections of Quality Systems, Purpose/Importance, 111).

Risk analysis should be addressed in the design plan and considered throughout the design process (FDA QSIT, Guide to Inspections of Quality Systems, Purpose/Importance, 84). The extent of testing conducted should be governed by the risk(s) the device will present if it fails (61 FR 52602 (V)(C)(81)).
 