Product Update and executing only affected System Tests, leaving out unaffected ones

Auxilium

Involved In Discussions
Hi!
I am relatively new to Regulatory Affairs, and I am facing the problem of making our Software Development Process leaner, specifically regarding the testing of our Medical Device Software (standalone AI).
So far, our company has repeated all of our defined System Tests with even the slightest update of our product (e.g., from version 4.1 to 4.2).
Our thought is: can we make this step leaner and only repeat the System Tests that the product update affects, instead of repeating all the other tests that are not affected?
I got the answer "yes, it is possible" from an expert, but the person was in a hurry, so he couldn't explain how.

My question is: how would you specifically justify this from a regulatory perspective, particularly under IEC 62304 or other applicable standards/guidelines?
Would you use the Software Architecture for that argumentation and simply write a statement somewhere explaining why we left out the other tests?

Thank y'all!
 

yodon

Leader
Super Moderator
The way we approach this is through regression test planning. We write a kind of mini V&V Plan. Tests are selected based on direct impact, but we also select those "around" the directly impacted area. We look at code, requirements, and design when justifying the test set. Oh, we always define a "safety set" that is executed each time to be sure we haven't broken any safety functionality.

62304 mostly talks about regression testing in general terms, leaving it up to you to decide what gets (re)tested. Here's an excerpt:

Regression analysis and testing are employed to provide assurance that a change has not created problems elsewhere in the MEDICAL DEVICE SOFTWARE. Regression analysis is the determination of the impact of a change based on review of the relevant documentation (e.g., software requirements specification, software design specification, source code, test plans, test cases, test scripts, etc.) in order to identify the necessary regression tests to be run.

Basically, use the materials you have and document the justification for what you do / don't test.
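
To make that selection mechanical rather than ad hoc, the regression analysis can be driven by the trace data you already maintain. Here is a minimal Python sketch of the idea; the item names, test IDs, and mappings are invented for illustration and would come from your own traceability matrix and software architecture:

Code:
# Minimal sketch of traceability-driven regression test selection.
# All names and data here are illustrative assumptions, not part of
# IEC 62304 or any specific tool.

# Which system tests verify each software item (from the trace matrix).
ITEM_TO_TESTS = {
    "inference_engine": {"ST-010", "ST-011", "ST-012"},
    "preprocessing":    {"ST-020", "ST-021"},
    "report_module":    {"ST-030"},
    "ui":               {"ST-040", "ST-041"},
}

# Architectural neighbours: items that interface with a given item.
ADJACENT_ITEMS = {
    "inference_engine": {"preprocessing", "report_module"},
    "preprocessing":    {"inference_engine"},
    "report_module":    {"inference_engine", "ui"},
    "ui":               {"report_module"},
}

# Tests that exercise safety functionality; always re-executed.
SAFETY_SET = {"ST-010", "ST-030"}

def select_regression_tests(changed_items):
    """Return (tests to run, tests skipped) for a set of changed items."""
    impacted = set(changed_items)
    for item in changed_items:
        impacted |= ADJACENT_ITEMS.get(item, set())   # test "around" the change

    selected = set(SAFETY_SET)
    for item in impacted:
        selected |= ITEM_TO_TESTS.get(item, set())

    all_tests = set().union(*ITEM_TO_TESTS.values())
    return selected, all_tests - selected

run, skipped = select_regression_tests({"preprocessing"})
print("Run:", sorted(run))            # safety set + change and its neighbours
print("Skip (justify):", sorted(skipped))

The skipped set is exactly the list your documented justification needs to address.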
 

Junn1992

Quite Involved in Discussions
yodon said:
The way we approach this is through regression test planning. We write a kind of mini V&V Plan. Tests are selected based on direct impact, but we also select those "around" the directly impacted area. We look at code, requirements, and design when justifying the test set. Oh, we always define a "safety set" that is executed each time to be sure we haven't broken any safety functionality.

Hi yodon, can this regression testing be determined after a problem is discovered? Meaning, we discover the software has some minor bug, and we don't want to perform whole-of-system regression testing. I am assuming here that whole-of-system regression testing is the default during the planning phase post-release.

So during the problem resolution process post-release of the software, we say 'oh, here is a minor bug.' Then we do proper change control and configuration item control, and for the testing we say 'we are going to create testing protocol B for the software changes due to this minor bug, because of...'.

I guess the problem here is: are we going to create a testing protocol every time a new problem is discovered? I think auditors might not be pleased with this. I wonder if there is an easier way to do this.

Hope I was being clear and thanks for the help!
 

Tidge

Trusted Information Resource
Junn1992 said:
Hi yodon, can this regression testing be determined after a problem is discovered? Meaning, we discover the software has some minor bug, and we don't want to perform whole-of-system regression testing. I am assuming here that whole-of-system regression testing is the default during the planning phase post-release.

So during the problem resolution process post-release of the software, we say 'oh, here is a minor bug.' Then we do proper change control and configuration item control, and for the testing we say 'we are going to create testing protocol B for the software changes due to this minor bug, because of...'.

The SOFTWARE ARCHITECTURE and a traceability document (requirements <-> implementation <-> testing) are what let you properly plan the effort of discovering the nature of, fixing, and testing identified defects. Don't look for the root cause of the defect in areas not allocated to the impacted requirements, and don't repeat testing in areas unaffected by the fix.
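
As an illustration, the same trace links can be read in both directions: from a failed requirement to the implementing items (to scope the root-cause search), and from those items to the verifying tests (to scope the retest). A hypothetical Python sketch with invented trace data:

Code:
# Illustrative trace links: requirement <-> implementing item <-> verifying test.
# All names are invented for this sketch, not taken from any real project.
TRACE = [
    # (requirement, software item, system test)
    ("REQ-12", "preprocessing",    "ST-020"),
    ("REQ-12", "preprocessing",    "ST-021"),
    ("REQ-15", "inference_engine", "ST-010"),
    ("REQ-15", "inference_engine", "ST-011"),
    ("REQ-18", "report_module",    "ST-030"),
]

def scope_defect(failed_requirement):
    """Scope root-cause search and retest to the traced items and tests."""
    items = {item for req, item, _ in TRACE if req == failed_requirement}
    tests = {test for _, item, test in TRACE if item in items}
    return items, tests

items, tests = scope_defect("REQ-12")
print("Look for the root cause in:", sorted(items))  # only the implementing items
print("Repeat only:", sorted(tests))                 # only the tests tracing to them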

Junn1992 said:
I guess the problem here is: are we going to create a testing protocol every time a new problem is discovered? I think auditors might not be pleased with this. I wonder if there is an easier way to do this.
I see two choices:
  1. You repeat (parts of) previously used test protocols.
  2. You create new protocols specific to the scope of the new project.
Many people prefer the first; I prefer the second. Here are some reasons why:

Option 1 has an aroma of "just do what was done before / trust the people that came before you"... but of course, the previous testing allowed a bug to escape! Directly recycling previous work absolves the current generation from applying critical thinking to the problem being worked on.

I also happen to find multiple executions of the same protocol to be more trouble than they are worth: as soon as people start applying more thought to how to uniquely identify different executions than to the actual tests, I can tell that priorities are wrong.
 

yodon

Leader
Super Moderator
Tidge said:
  1. You repeat (parts of) previously used test protocols.
  2. You create new protocols specific to the scope of the new project.
We take a hybrid approach: update the protocols to cover the issue(s) found and then use parts of previous protocols to give assurance that the changes didn't break anything.
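
As a sketch of how such a hybrid plan could be recorded, here is a hypothetical Python structure; the field names, test IDs, and defect number are invented, not taken from any template or standard:

Code:
# Illustrative record for a mini V&V / regression test plan entry.
from dataclasses import dataclass

@dataclass
class RegressionPlan:
    change: str                 # the change being verified
    new_or_updated_tests: list  # protocols updated to cover the fix
    reused_tests: list          # prior protocol parts re-executed for assurance
    skipped_tests: list         # tests not repeated
    justification: str          # documented rationale for the skips

plan = RegressionPlan(
    change="Fix for defect DEF-123 in preprocessing (v4.1 -> v4.2)",
    new_or_updated_tests=["ST-022 (new test that reproduces DEF-123)"],
    reused_tests=["ST-020", "ST-021", "ST-010 (safety set)", "ST-030 (safety set)"],
    skipped_tests=["ST-040", "ST-041"],
    justification="UI items are not traced to the changed code and do not "
                  "interface with it per the SOFTWARE ARCHITECTURE.",
)
print(plan)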

Junn1992 said:
I think auditors might not be pleased with this.

Speculating on what you think an auditor might or might not be pleased with is the wrong approach. Comply with the standards / regulations. Get into a defensible position. Your question should be: "is this compliant?"
 