Hi. I'm a new quality engineering manager at a medical devices company, with previous experience in medical devices R&D, project management, and operations.
I am trying to determine the sample size for a Design Verification test. We want to demonstrate that the surface roughness of a particular feature is less than 15 µm. The historical standard deviation is 0.088 µm. We want the power to be 90% and the confidence to be 95%.
In Minitab, you need to input the "difference," and this is where I'm hung up.
Can someone explain, as simply as possible, how to interpret the difference? Is it an absolute amount of surface roughness (in µm), or a percentage of the design input (15 µm)? Or something else?
I appreciate any and all comments! Thank you!
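To show where I'm stuck, here's my understanding of the calculation behind Minitab's power-and-sample-size dialog, sketched in Python under the assumption that "difference" is an absolute mean shift in µm. The 0.05 µm value below is purely hypothetical; it's exactly the number I don't know how to choose.

```python
import math
from statistics import NormalDist

def sample_size(difference, sigma, power=0.90, alpha=0.05):
    """Normal-approximation sample size for a one-sided 1-sample mean test.

    difference: smallest mean shift (in µm) we want the test to detect
    sigma: historical standard deviation (in µm)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided, 95% confidence
    z_beta = NormalDist().inv_cdf(power)       # 90% power
    n = ((z_alpha + z_beta) * sigma / difference) ** 2
    return math.ceil(n)

# Hypothetical example: detect a 0.05 µm shift with sigma = 0.088 µm.
# Minitab uses the t distribution, so its answer will be slightly larger
# than this normal approximation.
print(sample_size(0.05, 0.088))  # → 27
```

If that's roughly what Minitab is doing, then my question amounts to: is the "difference" this absolute shift, or something relative to the 15 µm design input?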