High Integrity Components Definition according to clause 4.9 of IEC 60601-1 3rd Ed.


nikolaos

Hi All,
Can somebody please explain the definition/meaning of high integrity components according to clause 4.9 of IEC 60601-1 3rd Ed.? It is not clear to me.
Thank you
 

Peter Selvey

Leader
Super Moderator
An example "high integrity component" might be found in an ECG circuit.

Consider a manufacturer that uses right-leg (RL) drive to eliminate noise. To keep the patient current low, a series 470 kohm SMD resistor is used. This 470 k resistor could be considered a "means of protection".

However, if this resistor shorts, the patient current could exceed dc limits.

The manufacturer could:
a) split the resistor into two separate means of protection (240k + 240k), or
b) consider the resistor a "high integrity component" not subject to failure

In the case of (b), data would be required to show that the probability of failure over the lifetime of the device is low enough to reduce the risk to an acceptable level. For example, for an electrode "dc burn" (medium severity) the acceptable probability might be around 1 in 10,000 per year. If the lifetime is 7 years, the probability of the resistor shorting should be less than 1/100,000. MIL standards indicate an unstressed SMD resistor should meet this no problem, so it could be a valid "high integrity component".
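The arithmetic above can be made explicit with a short sketch. The numbers are the illustrative values from the example, not values taken from the standard:

```python
# Illustrative reconstruction of the probability budget above.
acceptable_prob_per_year = 1 / 10_000   # "dc burn", medium severity
lifetime_years = 7

# Acceptable probability of the harm over the whole service life:
acceptable_over_life = acceptable_prob_per_year * lifetime_years  # 7e-4

# Claimed probability of the resistor shorting over the lifetime,
# based on MIL-handbook style data for an unstressed SMD resistor:
resistor_short_prob = 1 / 100_000

# The single component sits well inside the budget, so the
# "high integrity component" claim could be supported.
assert resistor_short_prob < acceptable_over_life
```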

In general, it is a rare case that you will have such a situation. There are some common high integrity components (e.g. Y1 caps pri-sec), but these are adequately handled by IEC component approvals and other parts of the standard. It is not normal practice to list these parts in the RMF.
 

nikolaos

Thank you Peter for the example, really clear.


Some more considerations:

According to IEC 60601-1 3rd Ed.:

"The first step to determine a COMPONENT WITH HIGH-INTEGRITY CHARACTERISTICS is to conduct a RISK ANALYSIS to find those characteristics that are required to maintain BASIC SAFETY or ESSENTIAL PERFORMANCE."
It is not clear to me if this applies only to BASIC SAFETY, only to ESSENTIAL PERFORMANCE, or to both.
Your example refers to BASIC SAFETY only; how about ESSENTIAL PERFORMANCE? Can you please illustrate it with an example?
How is the expected service life estimated? Does it depend on the component's life, or is it only a manufacturer's decision?
Thank you in advance
 

Peter Selvey

For essential performance, it only makes sense if you consider it applicable to high risk situations, which is probably what the writers intended but didn't capture in the definition so well.

A good example is a dialysis system. For high risk devices, it is possible to show that critical protection systems should be regularly tested (e.g. at least once per day), to detect faults in the system. Usually this is done by system self diagnostics.

For venous pressure monitoring it is possible to self diagnose all the electronics, software etc by switching in known signals (usually a two point check, around zero and full scale).

But, it is difficult to check the pressure sensor itself, because there is no way for the system to apply real external pressures.
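The two-point self-check described above might be sketched as follows. All function names, reference values, and tolerances here are illustrative assumptions, not part of any real dialysis system:

```python
# Hypothetical sketch of a two-point self-diagnostic: known zero and
# full-scale reference signals are switched into the measurement chain
# and the readings are checked against expected values.

def two_point_self_check(read_channel, switch_in, zero_ref=0.0,
                         full_scale_ref=400.0, tol=2.0):
    """Return True if the measurement chain reads both reference
    signals within tolerance (values in mmHg, illustrative)."""
    switch_in("zero")
    ok_zero = abs(read_channel() - zero_ref) <= tol
    switch_in("full_scale")
    ok_full = abs(read_channel() - full_scale_ref) <= tol
    switch_in("sensor")   # restore the normal measurement path
    return ok_zero and ok_full
```

Note the check exercises the electronics and software downstream of the switch-in point, but, as the post says, it cannot verify the pressure sensor itself.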

The options for the manufacturer are:
(a) add a second pressure sensor in the blood circuit, or
(b) treat the pressure sensor as a high integrity component

Approach (b) was often used in dialysis systems I tested, although it was not formally considered a high integrity component back in those days ...
 

robertjbeck

Section 14.8 of 60601-1 lists Components with High-Integrity Characteristics as a way of reducing risk via the software architecture specification. It seems to me that if high-integrity components are defined as those with a low likelihood of failure, this can be applied to software components with difficulty. For instance, a computational library that has been around for thirty years and is incorporated into medical device software as a SOUP component might qualify as "high-integrity." I wonder if anyone can clarify or comment on this?

Thanks
 

Peter Selvey

I guess it is not the intention of the standard, but it is possible. High integrity does not mean never fail, only that the failure rate is low enough that additional protection is not required.

It is possible to re-arrange the risk evaluation to extract a required probability of failure for the part in question. You can then consider if it is plausible to claim the part can meet that specification.

In fact, there are many high integrity parts, but it is clear the intention of the standard is to consider situations where there would normally be additional protection and the manufacturer would like to go against normal practice and rely on a single part.

It could be for example that the severity of harm is moderate, and there may be factors external to the device which help to reduce the overall probability of harm (for example, with a diagnostic device, the diagnosis might be made based on several sources of information, not just the device in question). So a failure rate of 0.001 for the part in question might be enough for "high integrity". In that case, a software library could be OK for high integrity, and probably nobody expects it to be even written up as high integrity.

However, for an infant incubator, where failure of control is severe (death) and there are few external factors, a high integrity component might need to be in the order of a 0.0000001 failure rate over the life of the device. At those kind of probabilities, it is not only the library but the memory it is stored in. It would be a stretch to consider any part as high integrity for that situation. If a manufacturer did, they would need some solid documentation and objective evidence, and there is no question about it being written up as a high integrity component.
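The "re-arranging" mentioned above can be shown as a small calculation: given an acceptable probability of harm and the mitigating factors external to the component, back out the failure probability the component itself must meet. All numbers are illustrative assumptions:

```python
# Extract a required component failure probability from the risk
# evaluation: P(harm) = P(component fails) * P(harm | failure),
# so the component must achieve
#   P(fail) <= acceptable_harm_prob / P(harm | failure).

def required_component_failure_prob(acceptable_harm_prob,
                                    p_harm_given_failure):
    return acceptable_harm_prob / p_harm_given_failure

# Diagnostic device: other information sources mean a component
# failure rarely leads to harm, so a modest reliability suffices.
diag = required_component_failure_prob(1e-4, p_harm_given_failure=0.1)

# Incubator control: failure leads almost directly to severe harm,
# so the budget falls almost entirely on the component itself.
incubator = required_component_failure_prob(1e-7, p_harm_given_failure=1.0)
```

With the illustrative numbers, the diagnostic case allows a failure rate around 0.001, while the incubator case demands around 1e-7, matching the contrast drawn in the post.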
 

robertjbeck

Thanks for a very good answer. There are a couple of points that are still not clear to me:

1) Per FDA, the probability of failure of software is 100%. This is not strictly correct in the real world, but it's based on the idea that:
a. if there is a defect in the software (in other words, there is a bug in the code),
b. and if this code is executed, the error will always happen.

So by definition, there is no such thing as a high integrity software component, if high integrity implies some low probability of failure.


2) You stated, ".. being written up as a high integrity component ..". I do not know what you mean by this statement. In American English, 'written up' means cited. I think you meant something else, such as 'defined as.' Please clarify.
 

Peter Selvey

The assumption of 100% is only for software classification, as used by FDA and IEC 62304, which in turn affects the type of design controls (procedures, records) you need to keep. Actually even for classification the assumption of 100% does not make sense, but that is a longer story.

Anyway, once the classification is decided it should have no effect on the risk based decisions, it is OK to use a failure rate (or reliability assumption) for software that you consider appropriate for the situation.

A library that is well used (e.g. the core "math" library in a C application) can be considered super reliable, but again for a life/death situation you probably would not rely on a single piece of software; typically there would be an independent, simple microprocessor dedicated to watching the system, running its own code, compiled using a separate library.

For "written up" I meant included in the records for risk management. In reality there are thousands of risk controls and risk related decisions, but relatively few of them actually get recorded in the risk management file.
 

robertjbeck

I agree with you that the assumption of 100% does not make sense because any particular line of code may not be executed 100% of the time, and in fact given the use of robust error detection and prevention techniques, software defects will be primarily in the code that is less often executed.
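The point that a deterministic defect does not imply a 100% failure rate can be shown with a toy sketch (the function and values are purely illustrative):

```python
# A latent defect only causes a failure when its code path is actually
# executed, so the observed failure rate equals the execution rate of
# the defective branch, not 100%.

def scale_reading(raw, gain):
    if gain == 0:                 # rarely taken configuration branch
        return raw / gain         # latent defect: ZeroDivisionError
    return raw * gain

failures = 0
trials = [(10, 2), (3, 5), (7, 1)]   # representative use never hits gain == 0
for raw, gain in trials:
    try:
        scale_reading(raw, gain)
    except ZeroDivisionError:
        failures += 1

# The defect is deterministic once reached, but over representative
# use the observed failure count is 0, not 100% of executions.
assert failures == 0
```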

Thanks also for the clarification of "written up." I use the term "documented" and while I also agree that it's difficult if not impossible to record every software risk control measure in the risk management file (by which I assume you mean one of the documents listed in appendix H.3 of IEC 62304 and/or ISO 14971), they all should be documented somewhere, even if just in comments in the code. This is of course easier said than done, but it should be done.

The issue I've had with using less than 100% failure rates for software is that auditors who have never written software find fault with this. FDA has published two guidance documents in which they state this as an expectation, so I can't blame the auditors. There may be other documents that also support this view, and I'd be glad to learn about these.
 

Peter Selvey

Assuming a 100% failure rate for software is plain wrong, so wrong that any auditor or technical reviewer should be taken to task, including real legal action (not just a threat) if they are stupid enough to put it in writing (e.g. in a formal non-compliance or rejection letter). Someone needs to be brave enough to send a wake up call to these idiots.

The problem is that "experts" have a tendency towards overkill in grand statements like "software should have an assumed 100% failure rate", but they don't follow through with the expected reasonable response if such a statement were really true. They allow manufacturers to deal with the situation with some fudge documentation and no real action. Since there was no problem in the first place, we might write this off as just more trees killed, but the deeper problem is that there are some rare situations where real action makes sense. Because these "experts" can't tell the difference, they continue to allow the same old fudge documentation.

These "experts" are not only useless, but they actually make the world less safe.
 