Level of V&V in Class II Medical Device (Dialysis Machine)

WeightOnWheels

Good day all!
I've been doing V&V in the FAA domain for some time now and just recently came into the FDA domain. For the FAA I'm used to doing formalized code reads and formalized unit testing, all the way down to analyzing opcode-level code coverage for DO-178C Level A components.

I've been notified by my customer that this dialysis machine is Class II, which surprises me; I would have thought it would be Class III. Do you have any information or experience on what level of V&V detail will be necessary for the FDA on this type of device? My first reaction would be to do everything down to formalized code reads, but of course time is money and I don't want to do more than what's really needed (even though I would if I had the time; I'm all about safety).

I also have a really short time frame to complete this project.

Any information would help and it would be greatly appreciated.

Thank you,
 

Nancylove

Starting to get Involved
The amount of V&V would depend chiefly on the product requirements, the risks identified in the risk analysis, the degree to which prior testing can be leveraged (due to similarity to a prior design), and the product complexity.

I've done quite a bit of V&V; let me know if you need some guidance.
 

yodon

Leader
Super Moderator
The foundation for software in medical devices is still being built, I'd say. FDA has some guidance on what they expect in a 510(k) submittal but not much detail. The ANSI/AAMI/IEC software life cycle standard (62304) is a recognized consensus standard (both the 2006 version and the 2015 amendment), provides more guidance, and allows partitioning by safety class.

As Nancylove points out, your V&V should be driven by risk. You should also identify what your plans are for V&V in a V&V Plan. The Plan should establish the basis and rationale for exercises such as code reviews (per 62304 this would probably be considered unit acceptance).

A couple more things that are coming to the forefront in FDA considerations: cybersecurity and usability. For the former, they want to see assurance that reasonable means have been taken to protect against cybersecurity threats. For the latter, they want to see that a risk-based approach to usability has been taken, including validation that the system can be used safely and effectively.
 
WeightOnWheels

Thanks all!

You mentioned that code review could be considered for unit acceptance.
Do you mean instead of doing actual unit tests?

It's a bit difficult for me since I'm used to doing requirement-based unit tests, including code coverage analysis.

Do you think dialysis machine software should fall under Class C or Class B?

"I've done quite a bit of V&V; let me know if you need some guidance."
Thanks Nancylove, I might have to take you up on that guidance.
 

yodon

Leader
Super Moderator
You mentioned that code review could be considered for unit acceptance. Do you mean instead of doing actual unit tests?

Correct, but let me clarify just a bit (I kind of jumped ahead). You have to define what you will do for unit verification and then establish the acceptance criteria for whatever you do. If unit testing is your unit verification approach, then there are some specific things to do and to capture.

It's a bit difficult for me since I'm used to doing requirement-based unit tests, including code coverage analysis.

That's certainly a valid approach. The standard implies that a certain level of rigor is needed for Class B but additional rigor is needed for Class C. Since you have a little more flexibility, let the risk drive the rigor.
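
To make that concrete, here's a bare-bones sketch of a requirement-based unit test with an explicit acceptance criterion, the sort of thing the V&V Plan would point to. The function, limits, and requirement ID are purely invented for illustration, not anything from the standard:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical limits, in tenths of a degree C. */
#define TEMP_MIN_0P1C 330   /* 33.0 C */
#define TEMP_MAX_0P1C 400   /* 40.0 C */

/* Hypothetical unit under test: is the dialysate temperature inside the
 * allowed band? */
static int dialysate_temp_in_range(int temp_0p1c)
{
    return (temp_0p1c >= TEMP_MIN_0P1C) && (temp_0p1c <= TEMP_MAX_0P1C);
}

int main(void)
{
    /* Hypothetical requirement SRS-TEMP-01; acceptance criterion: every
     * boundary and out-of-range case below must pass. */
    assert(dialysate_temp_in_range(330) == 1);  /* lower boundary  */
    assert(dialysate_temp_in_range(400) == 1);  /* upper boundary  */
    assert(dialysate_temp_in_range(329) == 0);  /* just below band */
    assert(dialysate_temp_in_range(401) == 0);  /* just above band */

    puts("SRS-TEMP-01 unit tests: PASS");
    return 0;
}
```

If you build that kind of test with gcc's --coverage option you also get statement and branch coverage data back via gcov, so your DO-178C habits carry over fairly directly.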

Do you think dialysis machine software should fall under Class C or Class B?

Realize first that if the architecture supports it, you can have different safety classes for software items in the system.

That said, it's hard to say without knowing exactly what the risks associated with the software are. If failure can lead to death or serious injury (and there are no hardware controls implemented, or, as the amendment puts it, no risk control measures external to the software system), then yes, C is appropriate.
 

Peter Selvey

Leader
Super Moderator
About 10 years ago I did a lot of work with dialysis machines in Japan. Generally the construction is a 3-CPU system, with one CPU for display and network, one for control, and one dedicated to safety. The safety CPU has its own set of sensors and is generally independent of the control CPU.

Although it is still extremely high risk (there are about 10 different ways to directly kill the patient), the separation of the control and protection CPUs makes things best suited to a Class B approach, with extra emphasis on whole-system tests.

It's likely that many manufacturers will be pushed to declare Class C for the safety CPU, but in practice it would be difficult to follow through on, as there is a huge amount of software to pull apart; it would likely be a fudge job if declared as Class C. There is definitely value in running through the code and checking variables for range limits, overflow and reset behaviour, but as for spending a huge amount of time writing up algorithms, well, we humans are pretty bad at trying to visualise or predict what happens in dynamic systems.
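
To show the sort of range-limit and overflow checking I mean, here's a minimal sketch; the variable, units, and fault handling are invented for illustration:

```c
#include <stdint.h>

/* Hypothetical: accumulated ultrafiltration volume in mL. The point is that
 * a safety-critical accumulator never silently wraps around; it clamps and
 * latches a fault instead. */
static uint32_t uf_total_ml = 0;
static int uf_fault = 0;

void uf_add_ml(uint32_t delta_ml)
{
    if (delta_ml > UINT32_MAX - uf_total_ml) {
        uf_fault = 1;               /* overflow would occur: latch a fault */
        uf_total_ml = UINT32_MAX;   /* clamp rather than wrap              */
    } else {
        uf_total_ml += delta_ml;
    }
}
```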

We found it was much better to run the tests on the whole system (hardware and software). There were often unexpected results that no amount of code inspection would have revealed. So, if you have limited time, I'd recommend leaning towards more system testing.
 
WeightOnWheels

Thanks for the reply, Peter. Since you were specific about the number of ways dialysis machines can kill people, can you tell me what those 10 ways are? (This may give me guidance on which components need the most scrutiny.)
 

Peter Selvey

Leader
Super Moderator
Off the top of my head:
- overheating
- air infusion
- high flow rate
- wrong dialysate composition
- removal of too much / too little waste fluid
- blood loss (detached return needle, burst tube or other disconnection, leaking dialyzer)
- gross heparin delivery
- failure to disinfect
- failure to remove disinfectants after cleaning

These are basically covered by IEC 60601-2-16. There may be special functions or features in the equipment like feedback loops based on blood volume sensors. Not all are 100% sure to kill but most are fairly dangerous.

The standard requires not only independent protection but also start-up tests of the protection and, in some cases, high-frequency periodic monitoring of critical systems like air infusion detection, which can catch a fault in the protection before an air bubble can reach the patient.
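
As a rough sketch of what such a periodic check might look like on the protection side (the detector interface, period, and actions here are my own assumptions for illustration, not anything prescribed by the standard):

```c
#include <stdbool.h>

/* Hypothetical hardware interface on the safety CPU. */
extern bool air_detector_read(void);       /* true if air/bubble detected */
extern bool air_detector_self_test(void);  /* stimulates the detector     */
extern void clamp_venous_line(void);       /* protective action           */
extern void raise_alarm(int code);

#define ALARM_AIR       1
#define ALARM_DET_FAULT 2

/* Called from a periodic task (every few milliseconds in this sketch) so
 * that a detector fault is found before a bubble could reach the patient. */
void air_protection_task(void)
{
    if (air_detector_read()) {
        clamp_venous_line();          /* stop the blood path immediately */
        raise_alarm(ALARM_AIR);
    }
    if (!air_detector_self_test()) {
        clamp_venous_line();          /* protection itself is broken: fail safe */
        raise_alarm(ALARM_DET_FAULT);
    }
}
```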

So safety is primarily achieved by redundancy and self-testing rather than by relying on perfect code. This is normally the case in any high-risk system.

In principle, if you have two completely independent systems that are each 99.9% reliable (e.g. Class B controls), that gives you 99.9999% reliability overall. In practice it's not quite that simple, but that's the basic concept.
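
Spelling out that arithmetic, and assuming the two channels really do fail independently: the probability of both failing is (1 - 0.999)^2 = 10^-6, so the combined reliability is 1 - 10^-6 = 0.999999, i.e. 99.9999%.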
 
WeightOnWheels

Thanks again Peter.
I have another related V&V question.
The legacy code we're looking at has built-in start-up tests that include a memory test, a CRC test, and many, many instruction tests.
I've never seen or heard of any CPU instructions ever failing. Do you see actual value added in having the CPU instruction tests? I understand the memory test, just not the instruction tests. I only ask because it's a lot of code that would require documentation and tests (the previous owner of the code did not have proper documentation or tests for these start-up tests).
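
For anyone following along, here's a generic sketch of what the memory and CRC parts of such start-up tests tend to look like; the names, memory map, and CRC helper below are hypothetical, not the actual legacy code:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory map and reference CRC recorded at build time. */
extern const uint8_t  flash_start[];
extern const size_t   flash_len;
extern const uint32_t flash_crc_expected;

extern uint32_t crc32(const uint8_t *data, size_t len);  /* assumed helper */

/* Start-up check 1: program flash integrity (catches corrupted code or
 * constants). */
int startup_flash_crc_ok(void)
{
    return crc32(flash_start, flash_len) == flash_crc_expected;
}

/* Start-up check 2: destructive RAM pattern test on a scratch region
 * (must run before that region holds live data). */
int startup_ram_test_ok(volatile uint32_t *ram, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        ram[i] = 0xAAAAAAAAu;
        if (ram[i] != 0xAAAAAAAAu) return 0;
        ram[i] = 0x55555555u;
        if (ram[i] != 0x55555555u) return 0;
    }
    return 1;
}
```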

I appreciate your experience on this subject.
Thanks!
 

Peter Selvey

Leader
Super Moderator
This sounds like the "functional safety" approach I was taught when I worked at TUV SUD.

Personally, I don't subscribe to this, as it is virtually impossible to meaningfully test a CPU's core ALU, and the tests that TUV accepted were more of a token, just to be able to write "Pass" in a test report.

The real story is, as you say, that the core CPU is super reliable; the probability of failure is in the region of 10^-9/year or less. It's the nature of CPUs that they have to be super-reliable to be practical.

It's possible to write it up as a "high integrity component" in the risk management file, which then allows it to be excluded from any fault analysis.

But if third-party testing is involved, it might be best to retain it just to allow them to write "Pass" for their functional safety tests, and then just keep some simple records for V&V.

Memory is a different story; I have seen flash memory corruption over the long term, even in my own products (albeit with the likely cause being power supply brown-out events).
 