#### ASchmitz93

I have indeed reviewed quite a few other threads here to try and find the answer to my exact question... but haven't seen quite what I'm looking for. Please help!

I'm validating a material manufacturing process where my material strength needs to be shown to be consistently and reliably above a required value; let's call that value **X**. My company has a ton of historical experience and data with this type of manufacturing process, to the point where I know/expect the process to be VERY capable. For example, historical data shows a sample average ~15% above **X**, no single value even within 12% of **X**, and Ppk ~1.8 (sorry, I couldn't figure out how to insert a picture of the actual historical capability analysis).

I would like to use this experience, historical data, and knowledge our company has to be more aggressive with sample sizes (lower them) going forward for process validations... but my boss is stuck on "we need to target a sample size of 59... this is based on a sample size for tolerance intervals at 95% reliability with 95% confidence... it's based on the mil-std / mil-hbk... it's the industry norm, it's what auditors expect..." - blah blah... you all seem very knowledgeable and probably know this opinion all too well.
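As I understand it, that 59 falls out of the zero-failure "success-run" calculation for attribute (pass/fail) demonstration: the smallest n such that n consecutive passes demonstrate the target reliability at the target confidence. A quick sketch of the arithmetic (my own, stdlib Python, assuming the zero-failure binomial model):

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: smallest n such that
    observing n passes and 0 failures gives the stated confidence that
    reliability >= the stated value, i.e. 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.95))  # -> 59 (the classic 95/95 number)
print(success_run_n(0.95, 0.99))  # -> 299 (95/99)
```

Note this plan treats every test as pass/fail and throws away the margin in the measured values, which is exactly my complaint.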

My issue with his opinion is that when our process is so capable (tight spread and average so far above the requirement) this much destructive testing is wasteful... I could probably test 20 samples and run a capability analysis and it'd show the process is sufficiently capable.
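To illustrate what I mean by "so capable," here is a rough sketch with made-up numbers matching our historical picture (mean ~15% above X, Ppk ~1.8): the one-sided Ppk against a lower spec, and the nonconforming fraction it implies under a normal assumption.

```python
import math

def ppk_lower(mean: float, stdev: float, lsl: float) -> float:
    """One-sided Ppk against a lower spec limit: (mean - LSL) / (3 * s)."""
    return (mean - lsl) / (3.0 * stdev)

def tail_fraction(ppk: float) -> float:
    """Fraction of output below the LSL implied by a one-sided Ppk,
    assuming normality: Phi(-3 * Ppk)."""
    return 0.5 * math.erfc(3.0 * ppk / math.sqrt(2.0))

# Illustrative numbers only: X = 100, mean 15% above X,
# stdev back-solved so that Ppk comes out to 1.8.
x = 100.0
mean = 115.0
stdev = (mean - x) / (3.0 * 1.8)

print(ppk_lower(mean, stdev, x))  # -> 1.8
print(tail_fraction(1.8))         # -> ~3e-8 nonconforming
```

A Ppk of 1.8 corresponds to the lower spec sitting 5.4 standard deviations below the mean, i.e. a predicted defect rate orders of magnitude tighter than what 95/95 demonstrates.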

**My targeted question for all you smart people:**

- Am I on the right track by trying to justify a lower sample size by utilizing process capability? Do I just need to push harder on my boss? Or is there another statistical approach to utilize which could justify the lower sample sizes? ... I'm not in the wrong am I?
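To frame the variables-data route I'm imagining: a one-sided normal tolerance bound, where passing means mean − k·s ≥ X for the tabulated k factor at the chosen confidence/reliability. Here is a stdlib-only sketch using the standard Natrella-style approximation for the one-sided k factor (an approximation, not the exact noncentral-t value; exact tables give ~2.40 for n = 20 at 95/95):

```python
from math import sqrt
from statistics import NormalDist

def k_factor_approx(n: int, confidence: float, reliability: float) -> float:
    """Approximate one-sided normal tolerance-bound factor k
    (Natrella-style approximation). Pass criterion: mean - k*s >= LSL."""
    z_c = NormalDist().inv_cdf(confidence)   # confidence quantile
    z_r = NormalDist().inv_cdf(reliability)  # reliability (coverage) quantile
    a = 1.0 - z_c**2 / (2.0 * (n - 1))
    b = z_r**2 - z_c**2 / n
    return (z_r + sqrt(z_r**2 - a * b)) / a

# 95/95 one-sided factor at n = 20:
print(round(k_factor_approx(20, 0.95, 0.95), 2))  # -> ~2.38 (exact tables: ~2.40)
```

With our margins (mean ~15% above X, Ppk ~1.8, so roughly 5.4 sigma of headroom), clearing k ≈ 2.4 at n = 20 looks comfortable, which is why the fixed n = 59 feels excessive to me.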

**A side question I have:**

- Which mil-std / mil-hbk is being referenced here? 105 - SAMPLING PROCEDURES AND TABLES FOR INSPECTION BY ATTRIBUTES?
- I want to really dig into where this industry norm has come from... where do I need to look? What standard or best practice will show me why the industry has trended towards these confidence and reliability limits? (this is in pursuit of me trying to show my boss the original intent was different than what we're trying to achieve)
- @Bev D, you referenced it way back in the 'Defining Reliability and Confidence Levels' thread: "the tradition of using 95% confidence - or 5% alpha risk - dates back to Sir Ronald Fisher. Although this is often misquoted as Fisher suggested that a 5% alpha risk would be sufficient for an analysis IF the experiment were replicated several times with the same results being statistically significant at a 5% alpha level each time. The Mil Std acceptance tables used a confidence level of 95% for AQL based plans and since then 95% has been traditionally used for confidence levels. Reliability is not traditionally anchored..."

**Heavily related thread(s) & comments:**

- As @Tidge so clearly defined on another thread: "*Process validation*: establishing by objective evidence that a process consistently produces a result or product meeting its predetermined requirements."
- @Bev D, on the 'Sample size considerations in medical process qualification' thread you said: "Both the "AQL" and the confidence/reliability formulas are intended for inspection of lots from a process stream. From your post I think you are talking about validating that a process is capable of meeting a certain defect rate? This would be a completely different set of formulas." What are those other formulas you recommend? My boss has said we should use the sample-size-for-tolerance-intervals formulas here as well.
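I'm not sure if these are the formulas @Bev D meant, but one natural generalization of the attribute demonstration is an exact binomial search that allows a budgeted number of failures rather than requiring zero. A stdlib sketch (my own, for illustration):

```python
import math

def attribute_demo_n(confidence: float, reliability: float, c: int = 0) -> int:
    """Smallest n demonstrating `reliability` at `confidence` while
    allowing up to c failures: smallest n such that, if the true defect
    rate were 1 - reliability, seeing <= c failures has probability
    <= 1 - confidence (exact binomial tail)."""
    p = 1.0 - reliability  # defect rate to be ruled out
    n = c + 1
    while True:
        tail = sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
                   for i in range(c + 1))
        if tail <= 1.0 - confidence:
            return n
        n += 1

print(attribute_demo_n(0.95, 0.95, c=0))  # -> 59
print(attribute_demo_n(0.95, 0.95, c=1))  # -> 93
```

Note the cost of tolerating even one failure (59 → 93 samples), which again makes the variables/capability route look attractive for destructive testing.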

Sorry for the long post; I wanted to be thorough!