SaMD Verification & Validation

RedDevil_AK

Starting to get Involved
Hi All,

I recently started in QA/RA at a medical device start-up and am new to the software space (let alone SaMD). I had a question regarding the V&V plan.

Our software works in transforms, where the output of one unit is the input of the next one. Can a single test run cover all the types of verification testing (unit, integration, and system) if we can prove unit verification? The thinking is that integration testing is verified because each unit provides the input for the next one, and system verification is completed once the test run is completed. Outside of this, how do we address repeatability other than running it x number of times at point of use?

A similar question for validation: given that 62304 doesn't cover validation, and design validation can be proven by a test run and documenting that user needs are met, can that same test run be used for validation?

Would appreciate any recommendations / experiences with the US FDA / pointers to information. Thanks!
 

yodon

Leader
Super Moderator
A good bit to unravel here.

Can a single test run cover all the types of verification testing (unit, integration, and system) if we can prove unit verification?

Unit verification (note that I did not say testing) is really focused on the unit rather than the system. I expect there are cases where a unit-level test is needed to verify a requirement, but that may be due more to what I would consider poor software requirements than anything else.

The standard DOES allow for integration testing and software system testing to be combined into a single plan and set of activities. So, yes, you can have a single test that you can assert covers integration and software system testing.

how do we address repeatability

What "repeatability" are you referring to? Repeatability of tests (i.e., from 5.6.7 [integration] "retain sufficient records to permit the test to be repeated" & 5.7.5 [software system testing] "In order to support the repeatability of tests" or 5.8.5 "Assure repeatability of software release"?

If repeatability of tests, it's just a matter of ensuring you have proper configuration control over everything, including the dev/build environment, and that you capture sufficient information about the test environment. Things like OS versions and any support software (browsers, database engines) can influence the test results, so those things need to be captured.
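Just as an illustration (a minimal sketch, assuming a Python-based test setup; the file name and package list are made up, and your stack may be entirely different), capturing that environment information can be as simple as writing a small record next to the test results:

# Hypothetical sketch: snapshot the test environment alongside the test results
# so a test run can be repeated later under the same conditions.
import json
import platform
import sys
from datetime import datetime, timezone
from importlib import metadata

def capture_test_environment(packages):
    """Record OS, interpreter, and support-software versions."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "os": platform.platform(),
        "python": sys.version,
        "packages": {name: metadata.version(name) for name in packages},
    }

if __name__ == "__main__":
    record = capture_test_environment(["pip"])  # list your actual support software here
    with open("test_environment.json", "w") as fh:
        json.dump(record, fh, indent=2)

The point isn't the script itself - it's that the record is generated and retained with the test results rather than reconstructed later.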

design validation can be proven by a test run and documenting that user needs are met

I'd say that statement is risky. Design validation does, indeed, demonstrate that user needs are met but this may well require aspects such as involvement with end users.

Software validation is, unfortunately, a bit vague and certainly not clarified by 62304's decision to punt. FDA does have guidance on software validation (circa 2002). They also published a draft guidance on premarket submissions for software (in devices) that has definitions of software validation and verification (which, I think, demonstrates "current thinking"). Neither, though, says "do this specifically for software validation." There's a lot of overlap - validation includes demonstrating that all the requirements are met (so verification is a subset of validation). There's also discussion of how the (defined) lifecycle contributes to validation.

I always recommend that some structured exploratory testing be a part of software validation. That gives reviewers a tangible "thing" to support validation. I'd suggest that a trace report showing requirements coverage through V&V can also support validation.
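To make the trace-report idea a bit more concrete (only a sketch with invented requirement IDs and test names; real trace tooling is usually more involved), the essence is a mapping from requirements to the tests that exercise them, plus a check that nothing is left uncovered:

# Hypothetical sketch of a requirements-coverage trace check.
# Requirement IDs and test names are invented for illustration.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Mapping gathered from test protocols / test case metadata.
trace = {
    "test_transform_a_output_range": ["REQ-001"],
    "test_pipeline_end_to_end": ["REQ-002", "REQ-003"],
}

covered = {req for reqs in trace.values() for req in reqs}
uncovered = requirements - covered

for req in sorted(requirements):
    tests = [name for name, reqs in trace.items() if req in reqs]
    print(f"{req}: {', '.join(tests) if tests else 'NOT COVERED'}")

if uncovered:
    raise SystemExit(f"Uncovered requirements: {sorted(uncovered)}")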

Two other things about software: don't forget that both risk management and cybersecurity need to be addressed.
 

Tidge

Trusted Information Resource
Our software works in transforms, where the output of one unit is the input of the next one. Can a single test run cover all the types of verification testing (unit, integration, and system) if we can prove unit verification? The thinking is that integration testing is verified because each unit provides the input for the next one, and system verification is completed once the test run is completed. Outside of this, how do we address repeatability other than running it x number of times at point of use?

I would never try to use a unit test for objective evidence of integration testing or satisfaction of system level requirements, for several reasons.

(1) To do such a thing implies that the system level requirements are incorrect. I have witnessed a LOT of mental gymnastics involving the defense of requirements and the evidence that they are satisfied, and it is never pretty.

(2) Unit testing is supposed to be at the lowest level of attributable detail (i.e., "where is the mistake") by the developer. It is akin to asking a specialist "what led you to believe that this unit would work?" It isn't supposed to be the evidence that the final system meets its intended use.

(3) It is possible, if the software system has no significant risk, that a regulatory body wouldn't want/expect to see "unit testing". There is an (incomplete) sort of implication in the quoted strategy that goes "maybe if it is low risk and I don't need to do unit testing, then if I make a change to a unit maybe I don't really need to do any testing." I'm not saying that any specific group would do this, but I have witnessed similar things being suggested (before being shot down).

(4) Integration testing can be tricky, primarily because it needs a dedicated set of tools and mindsets. The simplest way I describe Integration testing is that "it can't obviously be unit testing, and it can't obviously be system testing" - see the sketch after this list for how that separation might look in a pipeline like yours.

(5) In a typical 62304-compliant development process, System level testing has regimented, pre-approved protocols (pass/fail criteria, etc.), whereas the testing for Unit and Integration is assessed for "adequacy". Some groups adopt strict pass/fail criteria for low-level testing, but generally only where there are (well-written) requirements for which pass/fail criteria are unambiguous. Working towards a "unit testing = integration testing = system testing" approach implies all sorts of things I don't want to contemplate, like: who is actually responsible for the device working safely? In the proposed scheme it might very well be the person who did the unit tests (or the person that assigned them the work).
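To show what I mean by that separation for a pipeline of transforms like yours (a sketch only, in Python/pytest style, with invented transform names - not a template for your protocols): the unit test interrogates one transform in isolation with controlled inputs, the integration test exercises the hand-off between transforms, and neither is a substitute for system testing against the system requirements.

# Hypothetical illustration; transform_a / transform_b are invented stand-ins
# for whatever units make up your pipeline.

def transform_a(raw):
    """Unit A: normalize raw values to the 0..1 range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw] if hi > lo else [0.0] * len(raw)

def transform_b(normalized):
    """Unit B: threshold normalized values."""
    return [1 if x >= 0.5 else 0 for x in normalized]

# Unit test: one unit, controlled inputs - "where is the mistake" level of detail.
def test_transform_a_handles_constant_input():
    assert transform_a([3.0, 3.0, 3.0]) == [0.0, 0.0, 0.0]

# Integration test: the hand-off between units, not just each unit on its own.
def test_a_output_is_valid_b_input():
    out = transform_a([1.0, 2.0, 3.0, 4.0])
    assert all(0.0 <= x <= 1.0 for x in out)  # B's documented input contract
    assert transform_b(out) == [0, 0, 1, 1]

Running the whole pipeline end-to-end and calling it unit + integration + system testing skips the first two perspectives entirely.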
 

RedDevil_AK

Starting to get Involved
Thanks @yodon !

I'd say that statement is risky. Design validation does, indeed, demonstrate that user needs are met but this may well require aspects such as involvement with end users.
As it stands, the end users will be company personnel. Another point: the first submission is more about setting up a predicate than going to market - hence I haven't put too much emphasis on usability when it comes to validation.

What "repeatability" are you referring to? Repeatability of tests (i.e., from 5.6.7 [integration] "retain sufficient records to permit the test to be repeated" & 5.7.5 [software system testing] "In order to support the repeatability of tests" or 5.8.5 "Assure repeatability of software release"?
I was wondering more about the "assure repeatability of software release" section - but as I reread the standard, I came to the realization that it is more about software release than verification.

Two other things about software: don't forget that both risk management and cybersecurity need to be addressed.
Absolutely. Thanks again!
 

RedDevil_AK

Starting to get Involved
Thanks @Tidge !

I would never try to use a unit test for objective evidence of integration testing or satisfaction of system level requirements, for several reasons.
I did not mean that I wanted to use unit testing for evidence of integration and system testing - my apologies if I came across this way.
I was wondering if a single protocol can cover all the testing - but to @yodon's and your point - I am realizing that unit tests are more of an independent way of saying this unit works as required, so it wouldn't be advisable to combine all of them together - am I getting this right?

Thanks again!
 

Tidge

Trusted Information Resource
I am realizing that unit tests are more of an independent way of saying this unit works as required, so it wouldn't be advisable to combine all of them together - am I getting this right?

Units are the fundamental building blocks of code, and integration tests are how the individual units work together... The full suite of 62304-compliant software development, when it requires Unit and Integration testing, is similar to asking a hardware engineer to have/execute fundamental testing on the smallest "atoms" of a physical design. At a high (system) level, no user or patient really "cares" about the low-level details, but there is a recognition that mistakes can be made at this atomic level that could lead to unacceptable risks. We still need to see them (both hardware and software) work at the combined system level; it's just that hardware gets a pass... I should say instead "there isn't (to my knowledge) a consensus standard for the development of hardware solutions for medical devices." The stand-in for the lack of a development standard is the safety standard for medical electrical devices, 60601, with its collateral and particular standards. (1), (2)

My spin: Because software is under the full control of a developer, the lower-level testing is required (based on risk) because there is no accepted uniform mechanism for the evaluation of specified software. Contrast this circumstance with a threaded fastener of a known material: we have more than a century of metallurgical (raw material) and engineering (thread features, driving mechanisms) knowledge by which skilled engineers can make straightforward assessments about nuts and bolts without having to do "low-level" (i.e., "unit") testing on such things. If we still lived in an era when machinists had to individually cut screws, things might be different on this front. As it is, for software... regulators have no reason to trust developers a priori, so they ask to see the lower-level artifacts.

(1) I feel obligated to reveal my own shortcomings of imagination. There was a time when I thought that the one physical element of medical devices that would end up with a consensus development standard (albeit not one we would need forever) was batteries. Batteries (unlike software) are sources of inherent risk, and for a while it looked like they were introducing new risks all the time. Of course, the world outside of medical devices values batteries more than medical device manufacturers do, and medical device manufacturers won't bother to design anything custom that they can get on the open market, so this never happened. There is a lot going on inside a battery!

(2) Usability is a consideration like software development... as with software, there are no inherent risks from "usability", and there are an infinite number of user interface solutions. Usability doesn't necessarily have the same sort of "units" for testing, and many medical device manufacturers have gotten by with just "summative testing" at the system level, but a thorough usability development process will have analogs to a thorough software development process.
 

ECHO

Involved In Discussions
Hello, late to the party here.

Without echoing what everyone has said so far: if you set up your Agile process and CI/CD correctly, you should be able to knock out most of the line items in 62304.
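As a very rough sketch of what I mean (assuming a pytest-based suite with tests marked by level; the marker names and report paths here are invented), a CI job can run each test level separately and archive the reports as retained records:

# Hypothetical CI gate script; assumes tests carry pytest markers
# "unit", "integration", and "system", and that the reports get archived.
import os
import subprocess
import sys

STAGES = ["unit", "integration", "system"]

def run_stage(marker):
    """Run one test level and keep its report as a retained record."""
    os.makedirs("reports", exist_ok=True)
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-m", marker,
         f"--junitxml=reports/{marker}.xml"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    results = [run_stage(marker) for marker in STAGES]
    if not all(results):
        raise SystemExit("One or more test stages failed")

Every run then leaves behind the kind of records the standard expects you to retain, without anyone having to assemble them by hand.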
 