How to consider worst-case device in design validation if using production-equivalent device?

jddad19

Starting to get Involved
Say we have a user need for a cable to reach from the operating table to a console in the non-sterile field, and we set a design requirement of >10 ft, but we decide to add a safety factor and use a 12 ft cable in the actual product. When we perform design validation using the "final" product and conclude that the cable length meets the user need, it seems like we are missing the worst case we've set in our design requirement.

The options I could think of are:
1. Perform validation with the worst-case cable length, but then the device would no longer be the production-equivalent device.
2. Update the design requirement to the actual cable length used. But the original design requirement is based on the actual clinical requirement, so it seems contrived to update it to our safety-factored value. Or, in another situation: if it were a performance specification that we happened to exceed in design verification, we wouldn't update the requirement to our actual measured performance, yet the device the user tests in validation would have the exceeded performance.

Any feedback is appreciated!
 

yodon

Leader
Super Moderator
You're not talking about a "worst case condition" here. Your spec is either 10' or 12', so you would test with whatever you spec for production.

By "safety factor," do you mean your customer needs were at least 10' and you went up to 12' to be sure they got at least 10' or do you mean that you did a risk analysis and concluded that the device would be safer to use if a 12' cable were used (reduces likelihood or accidental disconnection, pulling equipment, etc.)? If the latter, then it's certainly NOT a contrived requirement, it would be driven by the risk control. If the former, then you're just adapting the requirement to what you think the user wants. You still need to flow this down to production so it needs to be specified. (One might argue that the >10' length was contrived since that could be 10'.25" or 100'! Maybe the user need is >10' and your specifications are 12' with tolerance?).

One consideration is usability. Will the extra 2' lead to a greater chance of entanglement? You may want to play that out in some formative studies prior to setting the length in stone.
 

jddad19

Starting to get Involved
Thank you for your input. I understand what you're saying and realize this is perhaps a bad example of what I'm trying to understand.

Let me try again: let's say I have a design spec that a drill needs to spin at 1000 rpm (based on a clinically relevant justification), and when I go to test my device, I find that the lower tolerance limit is 1200 rpm. So I've met my requirement, and when I perform design validation on production-equivalent devices, presumably they are spinning at 1200+ rpm. Say the users find this acceptable and I release this product to market, but my requirement is still 1000 rpm.

How do I reconcile the difference? The design requirement would still appear as the requirement but would not reflect what the user validated. Let's say over time the device regresses for reasons unrelated to design changes, and we find in production that the device is consistently measuring at least 1050 rpm. I'd see that this meets my design requirement and would think all is good, but there seems to be a gap in that it's lower performance than what was validated.
 

yodon

Leader
Super Moderator
I still think you're talking about design specs vs. "worst case" scenarios. An example of a worst-case scenario is in environmental operating conditions, e.g., coldest temperature and lowest humidity.

For the drill example, I would think you would establish upper and lower limits where the drill operates safely and effectively. You would confirm that it does so in V&V over the range. You would likely include internal checks to ensure it is operating within those limits and throw an error if it isn't. You might set your production acceptance to a "sweet spot," but something has to be considered out of range. If you had a drill going at 1500 rpm, would that be acceptable? 10,000 rpm?
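As a rough sketch of what such an internal check might look like (the limits here are made-up numbers, not from any spec):

```python
# Rough sketch of an in-device speed monitor; both limits are hypothetical.
RPM_LOWER = 1000   # below this the drill is assumed clinically ineffective
RPM_UPPER = 1500   # above this the drill is assumed unsafe

def check_rpm(measured_rpm: float) -> None:
    """Raise an error if the drill is operating outside its validated range."""
    if not (RPM_LOWER <= measured_rpm <= RPM_UPPER):
        raise RuntimeError(
            f"drill speed {measured_rpm} rpm is outside the validated "
            f"range [{RPM_LOWER}, {RPM_UPPER}] rpm"
        )

check_rpm(1200)  # nominal operation: passes silently
```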
 

jddad19

Starting to get Involved
You're correct, the way I'm using "worst-case" is in terms of my design requirement. So I'm validating the "nominal" design but not the extremes of what the device could be within the prescribed design requirements. That's the basis of my conundrum: how do I know that a device built within specifications, but at the limits of one or more of those specifications, would still be acceptable to the user if the user only validated the "nominal" device?

Thank you again for your inputs on this.
 

Bev D

Heretical Statistician
Leader
Super Moderator
I'm not sure I really understand your dilemma. No spec has a zero tolerance. Some manufacturing processes will guardband their manufacturing spec to keep away from the design limit. You should validate parts at ALL OF THE LIMITS: the design limit and the manufacturing-imposed limit, if different.
Let's say you have a design limit of at least 1000. Manufacturing imposes a guardband limit of at least 1200. You would validate at 1000 and at the highest level allowed by manufacturing. (Manufacturing can always release product to the design limit if they create parts under their guardbanded limit.) Also, what is the max allowable or possible value? You would need to validate that level as well.
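In rough pseudocode, with the numbers from that example (the disposition names and guardband behavior are my assumptions, not a standard):

```python
# Sketch of guardbanded release logic using the example numbers above.
DESIGN_LIMIT = 1000      # design requires at least 1000 (validated limit)
GUARDBAND_LIMIT = 1200   # manufacturing holds itself to at least 1200

def disposition(measured: float) -> str:
    """Classify a unit against the guardband and design limits."""
    if measured >= GUARDBAND_LIMIT:
        return "release (within guardband)"
    if measured >= DESIGN_LIMIT:
        return "release to design limit"  # still within the validated limits
    return "reject"                       # fails the design requirement

for value in (1250, 1050, 950):
    print(value, "->", disposition(value))
```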

Where is the dilemma?
 

jddad19

Starting to get Involved
Thank you for taking the time to respond. The dilemma is that we validate a production-equivalent device, which is like a snapshot in time. We can't build the device to be exactly that way every time. That device has a certain performance that meets the design (and process) requirements, but isn't necessarily representative of the limits of those requirements. I understand that we may have process tolerances in place that are different from (more stringent than) the design requirements, but not all of those process limits are going to be based on the lower and upper tolerance limits (LTLs and UTLs) determined through design verification. So, if you have a device that meets your design and process requirements but has different specific performance than the snapshot-in-time validated device, how do you ensure it would meet the user's needs?

The way I'm thinking of it, it's sort of analogous to an OQ/PQ. The OQ challenges the limits, so we know that if the process operates at those limits it will still produce product meeting specification. Then the PQ is run at nominal. The DVal study is run at nominal, but how do you know the device would still meet the user's needs if it were built to the worst-case limits? (The corner-challenge idea is sketched below.)
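A toy sketch of that OQ-style corner challenge (parameter names and ranges are invented for illustration):

```python
from itertools import product

# Hypothetical process parameters with their allowed operating ranges.
PARAM_LIMITS = {
    "motor_voltage_V": (11.5, 12.5),
    "gear_ratio": (9.8, 10.2),
}

# OQ-style challenge: exercise every combination of low/high limits.
for combo in product(*PARAM_LIMITS.values()):
    settings = dict(zip(PARAM_LIMITS, combo))
    print("challenge run at:", settings)

# A PQ would then run at nominal settings, e.g. the midpoint of each range.
```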
 

yodon

Leader
Super Moderator
how do you ensure it would meet the user's needs?
Don't mean to sound harsh here, but I think this may be the issue: you don't really know what the user needs are, it would seem. Otherwise, you would spec the device to those needs and control manufacturing to ensure the spec (with tolerance) is met. You don't want manufacturing so uncontrolled that lots have such wide variances.

Unless this is a novel device, you might be able to glean specs from competitors' devices. Your risk analysis may also help drive the specs.
 

Bev D

Heretical Statistician
Leader
Super Moderator
I also think you are handcuffing your own wrists to your ankles. Some of this dilemma can be attributed to the insane separation of design validation from process validation. While there is some natural separation, we must remember that BOTH the design and the process must produce product that works correctly at all limits, regardless of their source. Think about it as product validation. After all, you don't sell your design or your process. Your customer doesn't use your design or your processes. They use your product.

IF manufacturing uses the same spec limits that were validated by the design validation THEN the process validation can be limited to only ensuring that the design specs are met by the limits of the manufacturing process settings. Any other approach is simply putting your thumb on the scale to ‘pass validation’.
 

jddad19

Starting to get Involved
IF manufacturing uses the same spec limits that were validated by the design validation
I think this is my problem: the design validation does not validate the spec limits. It just validates the product that was made at that time. But the product, of course, will always be built to a range.

You don't want manufacturing so uncontrolled that lots have such wide variances.
True, and maybe this is a piece of the puzzle I'm overlooking. But I can still imagine instances where we have a design spec, based on a clinical requirement, that is far exceeded by the actual performance. Even if we set the manufacturing spec around the actual performance, we would still have this lingering requirement that is much lower than what was validated and that theoretically tells us the product is good if it exceeds it. Would you go back and update your requirements to all the measured limits from DV testing?

I don't mean to beat a dead horse here and I appreciate both of you taking the time to provide your inputs and expand my thinking on this.
 