OXYGEN RICH ENVIRONMENT analysis

rothlis

Involved In Discussions
Hi all,
We are looking at using compliance with clause 11.2.2.1 to establish whether it is safe to use our device in the operating room during clinical research, where it will be used at the patient's ears. Despite Peter Selvey's observation that the standard is actually concerned with areas inside the ME EQUIPMENT, we are conservatively assuming that leaks from a mask or nasal cannula could feasibly result in an oxygen concentration > 25% in the immediate area where our device is used.

Our design meets the criterion for "a source of ignition" according to Figure 36, due to the presence of 40 V on a capacitive circuit. The 300-trial ignition test does not easily fit into our current schedule, so we were hoping that we could build upon the standard's advice that "Items 4) and 5) address the worst case where the atmosphere is 100 % oxygen, the contact material (for item 5) is solder and the fuel is cotton. Available fuels and oxygen concentrations should be taken into consideration when applying these specific requirements. Where deviations from these worst case limits are made (based on lower oxygen concentrations or less flammable fuels) they shall be justified and documented in the RISK MANAGEMENT FILE."

Does anybody have any experience justifying deviation from the worst case limits? Our thinking was that the best way to do this would be to locate the origin of the Figure 36 graph and work from the data which informed it to generate a new graph based on our less-than-worst-case conditions, but we have been unable to track that down. Or are there any other suggestions on how to tackle this?

Thanks in advance.
 

Peter Selvey

Leader
Super Moderator
Personally I would start with some experiments. Stand-alone oxygen sensors are not that expensive, so you could place them around your device, either fitted to some volunteers or in a dummy set-up, and see how feasible it is to create >25 % with an oxygen leak. It could eliminate the discussion about the applicability of Clause 11.2.2.

It's also worth having some perspective on the intent of the flammability requirements and their relationship with the 25 % limit. Normal flammability tests like FV-0 (UL 94 V-0) are not designed to eliminate flame or ignition, but instead to show that a fire would be extinguished within a certain timeframe (10 s). The point is to ensure that an electrical device does not propagate fire easily. These tests are done in 21 % oxygen (natural air) and assumed to be valid up to 25 %. At higher oxygen concentrations, the time taken to extinguish could be longer than 10 s, so higher concentrations could invalidate FV-0 and other flammability ratings. Hence the standard uses 25 % as a switch to decide whether Clause 11.2.2 is applicable, because oxygen concentration affects propagation, not ignition.

In terms of patient safety, 10 s is already too long if something catches fire directly on the patient, e.g. at a person's ear. There may also be plenty of material around that is highly flammable at 21 %, such as hair or clothing. Therefore, parts that directly contact the patient should in any case be designed with negligible risk of ignition. The level of oxygen may still affect the probability of ignition, but the potential for ignition exists even at 21 %.

Of course, this issue applies to a wide range of medical devices that directly contact the patient (for example, pulse oximeters and temperature probes), but in practice no one really worries about it. The reason is that low-power secondary circuits have an inherently negligible risk of ignition.

I'm struggling to find a reason behind the 0.5 µF / 35 V criterion for capacitive circuits. In fact, for both the resistive and capacitive circuits, the sparking is determined as much by stray parameters in the circuit (e.g. self-inductance) as by the capacitance/current value itself. I can only guess that experiments were performed years ago that found 0.5 µF / 35 V to be the threshold, but it would depend hugely on the length and gauge of cabling, the capacitor type and so on. Perhaps a special pulse capacitor with 1 m of wiring? I suspect that modern high-density SMD caps with short wires would struggle to make a spark, let alone have enough energy to ignite surrounding material.
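As a rough sanity check (not part of the standard's stated rationale, which as noted above is unclear), the energy stored in an ideal capacitor is E = ½CV², so the Figure 36 threshold point can at least be expressed as a stored energy. A minimal Python sketch; the 40 V figure is from the original post, everything else is just the 0.5 µF / 35 V values quoted above:

```python
# Rough sanity check: energy stored in an ideal capacitor, E = 1/2 * C * V^2.
# This says nothing about whether a spark actually forms (stray inductance,
# wiring and capacitor type dominate that, as discussed above); it only
# compares stored energies.

def stored_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    """Energy stored in an ideal capacitor."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# The 0.5 uF / 35 V point from Figure 36:
e_threshold = stored_energy_joules(0.5e-6, 35.0)
print(f"Threshold stored energy: {e_threshold * 1e6:.0f} uJ")  # ~306 uJ

# Capacitance that stores the same energy at the 40 V mentioned in the post:
c_equal_energy = 2 * e_threshold / 40.0 ** 2
print(f"Equal-energy capacitance at 40 V: {c_equal_energy * 1e6:.2f} uF")  # ~0.38 uF
```

On an equal-stored-energy basis, a 40 V circuit would cross the 35 V / 0.5 µF point at roughly 0.38 µF, but Figure 36 is an empirical curve rather than a constant-energy contour, so this is orientation only, not a substitute for the figure.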
 

rothlis

Involved In Discussions
Thanks Peter. We appreciate getting your expert input on this.

Peter Selvey said:
Personally I would start with some experiments. Stand-alone oxygen sensors are not that expensive, so you could place them around your device, either fitted to some volunteers or in a dummy set-up, and see how feasible it is to create >25 % with an oxygen leak. It could eliminate the discussion about the applicability of Clause 11.2.2.
We were going to try to do some estimation based on diffusion, but this sounds like a good back-up if the analysis looks dubious.
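For what it's worth, a pure molecular-diffusion estimate tends to come out extremely conservative, which may itself motivate measuring instead. A minimal sketch using the steady-state point-source solution x(r) = Q / (4πDr) for the excess mole fraction; the leak rate and diffusion coefficient below are illustrative assumptions, not values from this thread:

```python
import math

# Steady-state excess O2 mole fraction around a continuous point source of
# pure oxygen diffusing into still air: x(r) = Q / (4 * pi * D * r).
# Assumed values (illustrative only, not from the thread):
Q = 2e-3 / 60   # leak rate: 2 L/min of pure oxygen, converted to m^3/s
D = 2e-5        # diffusion coefficient of O2 in air, ~0.2 cm^2/s, in m^2/s

def excess_o2_fraction(r_m: float) -> float:
    """Excess O2 mole fraction at distance r_m from the leak, capped at 0.79
    (ambient air is already ~21 % O2, so the excess cannot exceed 79 %)."""
    return min(Q / (4 * math.pi * D * r_m), 0.79)

# Distance inside which ambient 21 % would be pushed past the 25 % limit
# (i.e. where the excess fraction exceeds 4 %):
r_25 = Q / (4 * math.pi * D * 0.04)
print(f"Pure-diffusion model predicts >25% O2 within {r_25:.1f} m")  # ~3.3 m
```

The absurdly large radius reflects how slowly molecular diffusion alone disperses gas; in a real operating room convection dominates and concentrations fall off far faster, which is why direct measurement with oxygen sensors, as suggested above, is likely to give a much more realistic (and smaller) answer.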

Peter Selvey said:
I'm struggling to find a reason behind the 0.5 µF / 35 V criterion for capacitive circuits. In fact, for both the resistive and capacitive circuits, the sparking is determined as much by stray parameters in the circuit (e.g. self-inductance) as by the capacitance/current value itself. I can only guess that experiments were performed years ago that found 0.5 µF / 35 V to be the threshold, but it would depend hugely on the length and gauge of cabling, the capacitor type and so on. Perhaps a special pulse capacitor with 1 m of wiring? I suspect that modern high-density SMD caps with short wires would struggle to make a spark, let alone have enough energy to ignite surrounding material.
Since posting, we're pretty sure we found the source in Kohl et al. 2000. There's a sparse set of data points that, along with an added safety margin, were translated into the plots we find in Figures 35–37. Given the limited data set, there isn't really much there to support extrapolating to different conditions. And they didn't test above 30 V, on the assumption that relevant designs wouldn't need more than that.
 