Examples of inherent safety by design

Tidge

Trusted Information Resource
I almost fully agree with all that (almost - except the part where it seems you place importance on numbers - P1, P2 etc.), but it's all quite beside the point I was trying to make: number crunching can be very satisfying and can create an appearance of rationality, but it doesn't make much difference in terms of actually making a device safer. If anything, it's often used as a fig leaf to cover up hazards and hazardous potential that weren't properly mitigated (typically due to resource constraints or clashes with other requirements).

I don't think we disagree, but we definitely have different perspectives. You describe the assignment of ratings as a 'fig leaf' used to (potentially) cover something up, but I see a rating as something that, once identified and assigned, can actually be challenged. The challenge is best made during design and development, but an identified fig leaf is at least one that can be 'turned over' and investigated after a device has been marketed.

I am aware of the expressed skepticism about mathematical ratings, so let me offer an example that will perhaps illustrate why I don't share that prima facie skepticism. Please note that it is not my common experience that mathematics is used in a completely defensible manner within RM files. Man can fly, but it takes some effort.

This example will use pFMEA. This is a slightly different space than Hazard Analysis, but my experience has been that most folks begin their RM learning and take their first steps with FMEA, so I'm starting here. The example assumes that the device manufacturer has established the "Occurrence" (O) rating of a failure mode in FMEA as a quantitative logarithmic scale from 1 to 5, with 5 being the most frequent (say, greater than 1 in 100 cases) and 1 the least likely (less than 1 in 100K cases). If I review an FMEA and see that, prior to the implementation of controls, the occurrence of a failure mode is set at O=5 based on some 'historical data', I can use some mathematics to construct a study design with an established power, specifying the number of samples that must be tested to justify a change in the O rating. Some numbers: a one-sided binomial study design with 90% power at 95% confidence requires only about 8 samples to demonstrate an improvement from 50% to 5%, but over 230 samples to demonstrate an improvement from 10% to 5%. The mathematics are left as an exercise for the reader.
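
For anyone who would rather not do the exercise by hand, here is a minimal sketch of the search in Python. The interpretation is an assumption on my part: I read the "95%" as a 0.05 significance level and treat each tested unit as an independent pass/fail trial; the function name and structure are purely illustrative.

```python
# Minimal sketch (illustrative, not a prescribed method): exact sample-size
# search for a one-sided binomial demonstration of an improved failure rate.
# Assumption: the "95%" level means significance alpha = 0.05, with 90% power.
from scipy.stats import binom

def sample_size(p0: float, p1: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Smallest n that can demonstrate an improvement from failure rate p0 to p1 (p1 < p0).

    Searches for the smallest n with an acceptance threshold c such that:
      P(X <= c | n, p0) <= alpha   # the old rate p0 is rejected with confidence 1 - alpha
      P(X <= c | n, p1) >= power   # a true rate of p1 passes the test at least 90% of the time
    """
    n = 1
    while True:
        for c in range(n + 1):
            if binom.cdf(c, n, p0) > alpha:
                break  # the CDF grows with c, so no larger c can satisfy the alpha bound
            if binom.cdf(c, n, p1) >= power:
                return n
        n += 1

print(sample_size(0.50, 0.05))  # about 8  -- the "50% to 5%" case
print(sample_size(0.10, 0.05))  # ~240     -- the "over 230" case
```

The exact search is worth the few extra lines over the usual normal-approximation formula, which can be off by a handful of samples at tail probabilities like these.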

In practice, FMEA teams (for low-volume processes) are often shocked by the numbers, especially when shooting for improvements of "many powers of 10" for already-low-occurrence failure modes. This exercise gets less-motivated teams to stop claiming that certain ratings have been assessed quantitatively (and then they reach for the proverbial fig leaves?), but occasionally it drives a robust analysis of historical data that mathematically supports the ratings. At a minimum, a quantitative assessment of ratings can be used to motivate appropriate data collection about processes and designs, to verify that controls are actually working at the levels claimed in the RM file. The scope of 14971 extends beyond the design and development of a medical device, so if post-market data reveal that controls are not effective at the claimed level, a 14971-compliant organization has an obligation to revisit the RM files (the controls, and the ratings within).
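
To put "many powers of 10" in perspective with my own illustrative numbers: by the usual normal approximation, demonstrating an improvement from 1 in 1,000 to 1 in 10,000 with the same 95%/90% study design takes on the order of 5,000 samples. For a low-volume process, that data simply does not exist.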

I see a potential benefit for design teams in trying to assess changes in ratings quantitatively by leveraging robust study designs. It is my belief that such an approach could drive design teams to focus the appropriate amount of effort on the actual, appropriate risk controls. It has been my experience that RM files often become diluted by over-analysis of design elements, process steps, etc. that neither act as controls nor could contribute to the risk profile in a substantial way. I am motivated by the idea that a medical device manufacturer with a demonstrably accurate state of knowledge about what any given control costs can avoid wasting time and resources on efforts that neither improve the risk profile nor the state of knowledge about it.
 

Ronen E

Problem Solver
Moderator
@Tidge, for any of that to actually begin to work, you have to either already have or be willing to collect reliable quantitative data, which the vast majority of medical device developers I encounter don't have and are not willing to devote the necessary resources to (I work mostly with small companies). In theory it's great, but economic/practical considerations typically trip this approach up. What's left is typically a mumbo-jumbo of pseudo-quantitative analysis that IMO is not worth the paper it's printed on or the time spent skimming through it.
 

Ronen E

Problem Solver
Moderator
"We list ergonomic training as part of safety by design."
That statement makes no sense to me. Training (any type) would be a protective measure (or provision of information for safety) in the face of an un-eliminated hazard. Safety by design would be the elimination of a hazard altogether.
 

Not-A-Robot

Registered
"Training (any type) would be a protective measure (or provision of information for safety)"
No. Training is information for safety; never a protective measure.
See regulations and standards.
 

Ronen E

Problem Solver
Moderator
"See regulations and standards."
No. I don't have the inclination.
Unless you provide a more specific pointer, your word is as good as mine. If you have pointers, I'm glad to be proven wrong (and learn something).

"See the Internet". Hahaha
 