Design Verification: "Difference" in Sample Size Calculations?
Hi. I am a new quality engineering manager for a medical devices company. My previous experience is in medical devices R&D, project management, and operations.

I am trying to determine the sample size for the Design Verification test. We want to demonstrate that the surface roughness of a particular feature is less than 15 µm. The historical standard deviation is 0.088 µm. We want the power to be 90% and the confidence to be 95%.

In Minitab, you need to input the "difference". This is where I'm hung up.

Can someone explain, as simply as possible, what the interpretation of the difference is? Is it the difference in amount of surface roughness (µm), or is it a percentage of the design input (15 µm)? Or something else?

I appreciate any and all comments! Thank you!

Bev D

Heretical Statistician
Staff member
Super Moderator
It is the difference in surface roughness in the units of measure you specify for the characteristic being tested. If you use microns then the difference is in microns. It is NOT a percentage. The formula needs the actual value.

You may state the difference in words as a percentage, but you must translate it into the applicable units of measure before entering it into Minitab.
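To make the role of the "difference" concrete, here is a minimal sketch of the standard normal-approximation sample-size formula, n = ((z_confidence + z_power) · σ / δ)², for a one-sided, one-sample test. The historical σ = 0.088 µm comes from the post; the difference of 0.1 µm is an assumed example value (the gap below the 15 µm limit you want to be able to detect), not a number from the thread. Minitab's 1-Sample t calculation uses the noncentral t distribution, so its answer can differ slightly from this approximation.

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma: float, difference: float,
                alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate n for a one-sided, one-sample test via the
    normal approximation: n = ((z_{1-alpha} + z_power) * sigma / d)**2.

    `difference` is in the SAME units as sigma (here, microns) --
    it is an absolute amount, not a percentage of the spec limit.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)  # ~1.645 for 95% confidence, one-sided
    z_power = z.inv_cdf(power)      # ~1.282 for 90% power
    return ceil(((z_alpha + z_power) * sigma / difference) ** 2)

# sigma = 0.088 um (historical, from the post);
# difference = 0.1 um is a hypothetical example value.
print(sample_size(sigma=0.088, difference=0.1))  # -> 7
```

Note how shrinking the difference you want to detect (say, to 0.05 µm) roughly quadruples n, which is why the difference must be entered as an actual amount in the measurement units, not as a percentage.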
