Rationalising the level of effort and depth of software validation based on risk

silentmonkey

Involved In Discussions
Hi All,

I am working with a startup medical device company and we have established our own manufacturing process. The manufacturing team has implemented two pieces of software to automate and support some of the manufacturing process steps, and hence we are required to validate the software per ISO 13485:2016.

Software A will be used by operators to record the results of manufacturing quality control checks including the results of visual inspections. The software will inform the operator if the outcome is a pass or fail based on operator input for that device and will also record all the information in a database. This database will form a part of our Device History Record.

Software B drives a test jig used to automate electrical testing and firmware flashing of the device. It automatically records the results and informs the operator if the outcome is a pass or fail. The test jig also stores the outcomes in a database which will form a part of our DHR.

Evidently, both software items are critical to the quality of our product and automate critical QMS processes. I have read ISO/TR 80002-2:2017 and have understood most of it. My struggle is: how do we rationalise the level of effort and depth of validation required after performing the risk assessment?

Let's say we perform a risk analysis using a hazard analysis or FMEA for both process risks and software risks. This certainly gives us an idea of how risky the software and process are, but ISO/TR 80002-2 says the risk assessment should drive the selection of tools from the toolbox (Annex A of the TR provides a list of validation tools we can use to achieve a validated state).

How can I justify which tools I should use, and how many, based on the outcome of the risk assessments? Is there a way to quantify or qualify the risk assessments and then say something in my software validation procedure like "if the risk level is X, use these tools" or "if the risk level is Y, use those tools"?

Any other general advice on achieving software validation would also be very much appreciated! Perhaps I have misunderstood the intent of ISO/TR 80002-2?

Thanks in advance!
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
I will make the suggestion that you need to consider the entire system the software interacts with. An infamous current example is the 737 MAX. The MCAS software performed exactly as it was supposed to (and would have passed any similar certification tests), but it had faulty inputs, gave no indication to the user, and was applied repeatedly, without the user's control, in a false-alarm situation.
 

yodon

Leader
Super Moderator
You should probably consider building a Master Validation Plan and documenting your approach there (I realize that doesn't answer your question; bear with me).

First off, there are two aspects to consider: whether a failure could result in patient harm and whether a failure could result in regulatory non-compliance. It sounds like, in what you describe, failures in both software items could result in regulatory non-compliance (an incorrect DHR) and the test jig software could have an impact on the patient.

One of the things I consider is the likelihood of a failure being undetected. If your test jig software fails and you incorrectly pass a nonconforming product, what is the effect? If this could result in patient harm, I'm going to beat the heck out of the software during validation. If there's no possible harm to a patient or if the failure would absolutely be caught downstream, then I would dial it back a bit.
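
To make that reasoning concrete, here's a rough sketch of it as a decision rule (the scales and names are hypothetical, not from 80002-2 or any standard):

```python
# Hypothetical sketch of the "undetected failure" reasoning above: rate the
# worst credible harm if the software fails silently, and whether a later
# process step would definitely catch the failure, and let that pair set
# how hard to beat on the software during validation.
def validation_rigor(patient_harm: str, caught_downstream: bool) -> str:
    """patient_harm: 'none', 'moderate', or 'serious' (worst credible case)."""
    if patient_harm == "none" or caught_downstream:
        return "reduced"    # benign failure, or one downstream checks will catch
    if patient_harm == "moderate":
        return "standard"
    return "intensive"      # undetectable failure that could harm a patient

# e.g. the test jig wrongly passing a nonconforming unit, with no later check:
print(validation_rigor("serious", caught_downstream=False))  # -> intensive
```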

You should be able to establish a hierarchy where successive levels increase the amount of testing. Don't make it so rigid, though, that you get boxed in. I like to establish the "minimal" amount of required testing and then add on what makes sense. For example, since your software is home-grown, you may want to add measures like code reviews. A *very* simple example might be:
  • Level 1 - no risk to patients / no risk to regulatory compliance: functional testing
  • Level 2 - no risk to patients / moderate risk to regulatory compliance: Level 1 + interface testing
  • Level 3 - moderate risk to patients / moderate risk to regulatory compliance: Level 2 + robustness testing
  • Level 4 - high risk to patients: Level 3 + exploratory testing
Using something like this as the set of minimum requirements can provide the foundation for what you do. I would also suggest that you have a validation plan for each software item. This will allow you to tailor what you do to be most appropriate for the application. And don't forget to address how you re-validate after changes!
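
If you want your procedure to literally say "if the risk level is N, use at least these tools," the cumulative structure above reduces to a very small lookup. A rough sketch (the tool names and levels are just the illustrative ones from my example, not from any standard):

```python
# A hedged sketch of the cumulative hierarchy above: Level N requires
# everything from levels 1..N-1 plus its own additions. Tool names and
# levels are illustrative only; tailor them in your own procedure.
BASE_TOOLS = {
    1: ["functional testing"],
    2: ["interface testing"],
    3: ["robustness testing"],
    4: ["exploratory testing"],
}

def required_tools(level: int) -> list[str]:
    """Return the minimum tool set for a given risk level."""
    return [tool for lvl in range(1, level + 1) for tool in BASE_TOOLS[lvl]]

print(required_tools(3))
# -> ['functional testing', 'interface testing', 'robustness testing']
```

Treat the output as a floor, not a ceiling: the per-item validation plan is where you add extras like code reviews for home-grown software.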
 

Tidge

Trusted Information Resource
Evidently, both software items are critical to the quality of our product and automate critical QMS processes. I have read ISO/TR 80002-2:2017 and have understood most of it. My struggle is: how do we rationalise the level of effort and depth of validation required after performing the risk assessment?

You provided the answer to your own question: the necessary level of validation is commensurate with the amount of risk reduction required from the risk control (the automated system used in production).

With more snark: if it is impossible to provide evidence that a risk control is providing a risk reduction, don't waste time 'taking credit' for (or even listing) that risk control.

Having written that: a manufacturer ought to have evidence that manufacturing operations aren't introducing risk to the product (for example, handling during inspection/test). This is one of the motivators for process validation even if the process isn't explicitly identified as a risk control.

Relating risks identified in a process FMEA to a device's hazard analysis can be a bit of a touchy subject: most manufacturing processes are not selected as risk controls for the device design, yet they can still have an impact on the risk profile of the device. I've seen more than my fair share of device HAs which 'trace' to the PFMEA through the HA risk-controls link and leave the impression that the manufacturing process is responsible for controlling risks, when really all those links demonstrate is that the manufacturing process isn't contributing unnecessary risk.

There are, of course, manufacturing process steps which can reduce risks: sterilization is a clear example of a manufacturing process intended to reduce the risks from certain hazards. This sort of issue can be clarified in a risk control options analysis (at the HA level).
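
One practical way to keep that distinction visible in your trace documentation is to tag each HA-to-PFMEA link with its role explicitly. A hypothetical sketch (not any particular tool's schema; all names are illustrative):

```python
# Hypothetical sketch: tag each HA-to-PFMEA trace link with its role so a
# reviewer can tell "this process step reduces a risk" apart from "this
# process step merely doesn't add risk". Fields are illustrative only.
from dataclasses import dataclass
from enum import Enum

class LinkRole(Enum):
    RISK_CONTROL = "process step reduces a device risk (e.g. sterilization)"
    NO_NEW_RISK = "process step shown not to contribute unnecessary risk"

@dataclass
class TraceLink:
    ha_item: str       # hazard analysis entry
    pfmea_item: str    # process FMEA entry
    role: LinkRole

links = [
    TraceLink("Infection hazard", "Sterilization step", LinkRole.RISK_CONTROL),
    TraceLink("Mechanical damage", "Visual inspection", LinkRole.NO_NEW_RISK),
]

# Only RISK_CONTROL links belong in the risk control options analysis
controls = [l for l in links if l.role is LinkRole.RISK_CONTROL]
print([l.pfmea_item for l in controls])  # -> ['Sterilization step']
```

That way a reviewer can immediately see which links claim a risk reduction (and therefore need validation evidence) versus which merely show the process adds no unnecessary risk.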
 

LUFAN

Quite Involved in Discussions

yodon

Leader
Super Moderator
Excellent point, @LUFAN ! It's still, I believe, going to enable a risk-based approach, so I think considering patient (and regulatory) risk is still going to be needed.
 

LUFAN

Quite Involved in Discussions
Excellent point, @LUFAN ! It's still, I believe, going to enable a risk-based approach, so I think considering patient (and regulatory) risk is still going to be needed.

Yes, I don't think it's going to change the inputs to any CSV much, if at all, but I do anticipate the (expected) outputs being better defined and established. That's MY hope at least.
 

Tidge

Trusted Information Resource
My understanding of the long-awaited FDA guidance for Non-Product Software is that it isn't really going to be anything new; rather, it will (among other things, bluntly) point out that if the use of an NPS system has a low impact on patient/user risk, then the validation efforts expected by the FDA will be commensurately less. The effort around this particular guidance has been active for many years, and my understanding is that the FDA itself has been using this approach when examining NPS validations. To be honest, I can't think of a time when an FDA auditor even came close to wanting to review an NPS validation. In my memory, the closest they ever came was a review of the Master Validation Plan for a manufacturing facility.

The main reason I am anxiously awaiting publication of this guidance is that (per my experience) it has been the non-FDA 3rd-party groups which have essentially held medical device manufacturers hostage by requiring that NPS systems (regardless of how they are used) be subjected to extreme validation requirements... as if every NPS system were single-handedly responsible for life-or-death outcomes. The new guidance should make it clear that the FDA won't expect business software (e.g. payroll, training) to be subject to a validation process similar to that for medical device software.

The 3rd-party groups who care about 'software' used at medical device manufacturers have been (in my experience) generally ignorant of the teachings of Crosby and Deming (especially Deming's view on "driving out fear") and have defaulted to hybridizing (often inappropriate) standards for medical device software development or equipment control/calibration when establishing the 'best processes' for NPS validation. The GAMP 5 approach should be perfectly adequate for even the most complicated business, but because so much of it is dedicated to the activities necessary for higher-risk production processes (its origin is pharma, not medical devices), the default, just as with other validation approaches, has been to make NPS validation much more complicated/onerous than it needs to be (to make safe and effective products and to run businesses efficiently).

I don't know what the final guidance will say, but I am aware that, not long ago, any review of the validation of NPS used in areas such as regulatory compliance was going to be explicitly de-prioritized by the FDA relative to NPS systems which impact patient/user safety. After all, during most audits and regulatory submissions the FDA is directly evaluating the outputs of a quality system; why would they waste effort evaluating the narrow details of the process tools which generate those outputs?

Businesses will want to invest appropriate efforts into making sure that business systems meet their needs, but the FDA doesn't have a mandate to make sure businesses run efficiently.
 

LUFAN

Quite Involved in Discussions
To be honest, I can't think of a time when an FDA auditor even came close to wanting to review an NPS validation. In my memory, the closest that they ever came was a review of the Master Validation Plan for a manufacturing facility.

Funny, because during the first FDA inspection I was ever a part of, the inspector asked to see "the binder." Spent maybe 15-20 minutes on it.

I agree with all of your points. My bigger issue with CSV in general is how "old" the guidance has gotten without anything newer aside from 80002, which to me is really just a regurgitation of existing guidance combined with a risk-based approach. Technology has changed so much since 2004 that a new level of flexibility is needed to actually use software continually throughout the QMS.
 

Tidge

Trusted Information Resource
Funny, because during the first FDA inspection I was ever a part of, the inspector asked to see "the binder." Spent maybe 15-20 minutes on it.

Mileage varies, of course. Was that for a software system used for regulatory compliance or a manufacturing process (or something else)?
 