Defining risk control measures

KrishQA

Starting to get Involved
#1
Hello All,

Section 7.2.1 of the 62304 standard says we must define and document risk control measures for the potential causes of a software item contributing to a hazardous situation.

Now, I have a device that performs a measurement. We have identified that if the measurement algorithm itself is wrong, it could lead to a hazardous situation. What would be a risk control measure for this, though, apart from testing the algorithm? Is testing a valid risk control measure?

I am aware that when risk control measures are identified (from a 62304 perspective, those implemented in software), they have to be verified somehow. But can the risk control measure itself be testing (in this case, of the algorithm)?

Are there more common ways to handle such scenarios?

Thank you in advance
 

Steve Prevette

Deming Disciple
Staff member
Super Moderator
#2
Periodic testing definitely helps. Keeping track of whether the periodic testing results are changing is even better, and so is trending the results with Statistical Process Control.
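The trending idea above can be sketched as an individuals (XmR) control chart over periodic test results. Everything here, the data, the function names, and the choice of percent error as the tracked metric, is hypothetical and only illustrates the approach:

```python
# Sketch: trending periodic test results on an individuals (XmR) control chart.
# Hypothetical data and function names -- not from the thread or any standard.

def xmr_limits(results):
    """Return (mean, lcl, ucl) for an individuals chart.

    Limits use the standard XmR constant 2.66 applied to the
    average moving range between consecutive results.
    """
    mean = sum(results) / len(results)
    moving_ranges = [abs(b - a) for a, b in zip(results, results[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def out_of_control(results):
    """Indices of results falling outside the control limits."""
    _, lcl, ucl = xmr_limits(results)
    return [i for i, x in enumerate(results) if x < lcl or x > ucl]

# Periodic measurement-accuracy results (percent error), with one drift point.
history = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 1.9]
print(out_of_control(history))  # [7] -> the drifted last result
```

A point outside the limits would then trigger investigation before the algorithm's output is trusted further, which is the "knowing the results are changing" part of the suggestion.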
 

KrishQA

Starting to get Involved
#3
Periodic testing definitely helps. Keeping track of whether the periodic testing results are changing is even better, and so is trending the results with Statistical Process Control.
Thanks for the reply, Steve. Good idea indeed to have periodic tests (sort of like regression tests) and keep track of their performance.

However, would testing of an algorithm serve as a risk control measure, strictly speaking, from a 62304 perspective?
 

Steve Prevette

Deming Disciple
Staff member
Super Moderator
#4
Must admit I am not familiar with the specific wording of 62304. If you post a quick excerpt (without violating the copyright), I'd be glad to provide an interpretation, and I am sure others here would too.
 

KrishQA

Starting to get Involved
#5
7.1.2 Identify potential causes of contribution to a hazardous situation

The MANUFACTURER shall identify potential causes of the SOFTWARE ITEM identified above
contributing to a hazardous situation.
The MANUFACTURER shall consider potential causes including, as appropriate:
a) incorrect or incomplete specification of functionality;
b) software defects in the identified SOFTWARE ITEM functionality;
c) failure or unexpected results from SOUP;
d) hardware failures or other software defects that could result in unpredictable software
operation; and
e) reasonably foreseeable misuse.
followed by

7.2.1 Define RISK CONTROL measures

For each potential cause of the software item contributing to a hazardous situation documented
in the risk management file, the manufacturer shall define and document risk control
measures.
As per 7.1.2 above, we have identified that incorrect specification/implementation of a measurement algorithm could lead to a hazardous situation.

To fulfill 7.2.1, can we just test the software implementation of the algorithm and state the test as a risk control measure, or are there other ways to handle this?

The reason for my confusion is the following:

7.3.1 Verify RISK CONTROL measures

The implementation of each RISK CONTROL measure documented in 7.2 shall be VERIFIED, and
this VERIFICATION shall be documented.
If a verification measure itself is my risk control measure, then verification of the risk control measure seems sort of redundant.
 

Steve Prevette

Deming Disciple
Staff member
Super Moderator
#6
Looks pretty straightforward. I am assuming you already know the cause and that this algorithm you refer to catches it. The issue with verification, I believe, is that you can have the algorithm running, but if no one is monitoring the results, you don't have verification. So the verification strategy will be how the algorithm will be monitored, how you will know that there is a problem, and who will respond.
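The monitoring strategy described above (how results are watched, how a problem is detected, who responds) can be made concrete in code. A minimal sketch, where the plausibility range, names, and alarm mechanism are all assumptions for illustration:

```python
# Sketch of the monitoring idea above: a runtime plausibility check that
# flags implausible measurements instead of silently reporting them.
# The range and all names here are hypothetical.

PLAUSIBLE_RANGE = (0.0, 100.0)  # assumed physical limits for the measurand

def check_measurement(value, on_alarm):
    """Return value if plausible; otherwise invoke the alarm handler and return None.

    `on_alarm` makes the 'who will respond' part explicit: the caller must
    supply a handler (log, alert the operator, suppress the result, ...).
    """
    lo, hi = PLAUSIBLE_RANGE
    if not (lo <= value <= hi):
        on_alarm(value)
        return None
    return value

alarms = []
check_measurement(42.0, alarms.append)   # plausible -> passed through
check_measurement(-5.0, alarms.append)   # implausible -> handler called
print(alarms)  # [-5.0]
```

The point of the callback is that the monitoring and response path is a designed, testable element rather than an afterthought.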
 

Tidge

Trusted Information Resource
#7
Now, I have a device that performs a measurement. We have identified that if the measurement algorithm itself is wrong, it could lead to a hazardous situation. What would be a risk control measure for this, though, apart from testing the algorithm? Is testing a valid risk control measure?
What I have written below may seem obvious, or even 'dumb', but I think it will provide some context for risk management in medical devices:

Software is the element of medical device design with the most comprehensive consensus approach to designing and developing that element around a core concept of reducing risk: 62304 lays out a "top->down" approach to development. Other elements of safety in medical devices have historically been driven by consensus tests in areas that we now call "basic safety". That is: a standard like 60601-1 requires things like electrical isolation even though there is no consensus standard on how to design and develop for electrical safety. Similarly, there are no consensus standards for the design and development of batteries, plastic materials, etc.; instead, the consensus standards are all about tests of implemented designs. You can design to pass a test (obviously), but there are no consensus standards describing the sorts of activities to follow to design a battery or plastic (for example) that will pass the tests. A big reason for this is that physical design elements can be subjected to rigorously vetted test methods. I shouldn't have to add this, but I will (for emphasis): the test methods for physical objects can themselves be validated.

Software (as a component) is a completely different kettle of fish (at least right now, and for the foreseeable future). There are no 'standard industry solutions' that can (a) be dropped into a design without modification or (b) be implemented in any given design (e.g. 'compiled') in a single uniform way. The 'material' (source code, libraries) doesn't lend itself to this approach, and software engineers are (frankly) too creative to be restricted in such a way. If you want to get a sense of just how creative engineers can be with respect to a physical component that has existed since the dawn of time, take a look at all the different standards that exist for threaded fasteners... and mass production of threaded fasteners has been happening for well over 100 years.

Now back to the question as asked. From first principles, the most fundamental method of controlling risks is to specifically implement a design element to directly address the risk, and then verifying the effectiveness of the risk control measure.

With many sorts of risks to 'basic safety' that are controlled by a physical element, we don't do more than (for example) 'pick a power supply with 2 MOP'. Unless you are a designer/manufacturer of power supplies, you are highly unlikely to concern yourself with concepts like 'buck-boost', and it is even possible that you ignore the difference between linear and switching supplies... you have a consensus testing standard and can rely on it for all the complicated questions about how the power supply works as intended and thus helps to reduce risks. I can guarantee that the power supplies were (originally) designed with all sorts of specific tests and studies to end up as a (NRTL-certified) power supply.

Software testing is similar, except that there is no NRTL for testing software; it is entirely in the hands of the manufacturer to demonstrate that the software is working. The long-winded response boils down to this: along with having a detailed requirement that the software 'work as expected', having tests (and results) of the specific software implementation meeting the requirement is the risk control and its verification of effectiveness (VoE). It is the consensus approach to software design and development in 62304 that motivates 'best practices' in this area, with the presumption that these best practices will improve safety.
 

KrishQA

Starting to get Involved
#8
Thank you @Tidge. I am sure 'dumb' answers help reinforce concepts or even introduce new ones; they are always welcome :)

Now back to the question as asked. From first principles, the most fundamental method of controlling risks is to specifically implement a design element to directly address the risk, and then verifying the effectiveness of the risk control measure.
Yes, my understanding is also that we specifically implement a design element to address the risk; in this case, however, we just need to ensure the algorithm is indeed correct. Let me try to simplify my situation.

IVDR device that measures characteristic A in a fluid.
- We have an algorithm to perform measurement of characteristic A.
- We perform risk analysis as per 62304 and identify that failure of the SW block implementing the algorithm may lead to a hazardous situation (because the measurement itself is then incorrect)
- A potential cause of failure could be incorrect specification/implementation of the algorithm

Now, for the risk control part, can we state the following?

- As part of ensuring the algorithm is defined and executed correctly, we review the specification and perform testing to ensure that the algorithm works as intended and as described in the requirements.

From your experience, would that be sufficient? I understand there is not much supporting information here for you to make an educated response, but can you draw any conclusions from just this much?

If it would be of any help, the SW class according to 62304 is Class B.
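The "test against the requirements" part of the proposed risk control can be sketched as a verification suite that challenges the implementation against independently derived reference values, with each case traced to a requirement. Everything in this sketch is hypothetical: the algorithm (an RMS stand-in), the reference values, the requirement IDs, and the tolerance.

```python
# Sketch: verifying the measurement algorithm against independently
# derived reference values, each traceable to a requirement.
# The algorithm, reference data, and tolerance are all hypothetical.
import math

def measure_characteristic_a(raw_signal):
    """Placeholder for the algorithm under test (here: RMS of the signal)."""
    return math.sqrt(sum(x * x for x in raw_signal) / len(raw_signal))

# (input, expected output, requirement ID) -- expected values computed
# independently of the implementation, e.g. by hand or a reference tool.
REFERENCE_CASES = [
    ([3.0, 4.0], 3.5355339, "REQ-ALG-001"),
    ([5.0, 5.0, 5.0], 5.0, "REQ-ALG-002"),
]

def run_verification(tolerance=1e-6):
    """Return the requirement IDs of any failing reference cases."""
    failures = []
    for signal, expected, req in REFERENCE_CASES:
        actual = measure_characteristic_a(signal)
        if abs(actual - expected) > tolerance:
            failures.append(req)
    return failures

print(run_verification())  # [] means all reference cases passed
```

The traceability from each test case back to a requirement ID is what lets the same test records serve both as the risk control's evidence and as design verification.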
 

KrishQA

Starting to get Involved
#9
Looks pretty straightforward. I am assuming you already know the cause and that this algorithm you refer to catches it. The issue with verification, I believe, is that you can have the algorithm running, but if no one is monitoring the results, you don't have verification. So the verification strategy will be how the algorithm will be monitored, how you will know that there is a problem, and who will respond.
Actually, the cause would be the algorithm itself being flawed or implemented incorrectly.
 

Tidge

Trusted Information Resource
#10
IVDR device that measures characteristic A in a fluid.
- We have an algorithm to perform measurement of characteristic A.
- We perform risk analysis as per 62304 and identify that failure of the SW block implementing the algorithm may lead to a hazardous situation (because the measurement itself is then incorrect)
- A potential cause of failure could be incorrect specification/implementation of the algorithm

Now, for the risk control part, can we state the following?

- As part of ensuring the algorithm is defined and executed correctly, we review the specification and perform testing to ensure that the algorithm works as intended and as described in the requirements.

From your experience, would that be sufficient? I understand there is not much supporting information here for you to make an educated response, but can you draw any conclusions from just this much?

If it would be of any help, the SW class according to 62304 is Class B.
From here, this approach looks correct to me. I have a couple of possible suggestions that you don't need to implement, but they are things I would consider (in general, for what you describe).

Class B software does not (per 62304) require detailed designs for each software unit, but it does require acceptance criteria for each unit. It has been my experience that software developers often rely on acceptance criteria that can best be evaluated only if you have detailed designs for the units. Project managers could point to the wasted effort (of documenting detailed designs for Class B), but my experience is that having such details helps avoid delays in the acceptance of units and the integration testing of units. I bring this up because "correctly reviewing the specification" might require some detail in order to be meaningful below the system-level testing.

Because you are developing a measurement system, it is likely that you could have the system under development participate in something akin to 'round robin' testing, as well as a direct challenge against a 'gold standard', as part of system-level testing. This is (in my mind) akin to the activities done as part of test method validation. Commonly, with software as part of a measurement system, this is just to look for gross errors that could be blamed on the software; I think it is perfectly reasonable to subject 'measurement' devices to this sort of testing anyway, to establish the precision and uncertainty budgets for the finished device. I occasionally encounter folks who still believe that because software can store variables at arbitrary precision, the software is supposed to guarantee that the larger system has the same precision. I should emphasize that this would be system-level testing.
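At its simplest, the gold-standard comparison described above reduces to computing bias and precision from paired device/reference measurements. A minimal sketch with hypothetical readings (a real study would also address sample size and the reference's own uncertainty):

```python
# Sketch of the system-level comparison described above: paired
# measurements from the device under test and a 'gold standard'
# reference, reduced to bias and precision. Data are hypothetical.
import statistics

def bias_and_precision(device, reference):
    """Mean difference (bias) and sample std dev of differences (precision)."""
    diffs = [d - r for d, r in zip(device, reference)]
    return statistics.mean(diffs), statistics.stdev(diffs)

device_readings    = [10.1, 9.9, 10.2, 10.0, 10.3]
reference_readings = [10.0, 10.0, 10.0, 10.0, 10.0]

bias, precision = bias_and_precision(device_readings, reference_readings)
print(f"bias={bias:+.3f}, precision={precision:.3f}")  # bias=+0.100, precision=0.158
```

These two numbers are the starting point for the uncertainty budget of the finished device, independent of how precisely the software stores intermediate values.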
 