# Statistical basis and justification for comparing/changing sampling plans

#### v9991

Trusted Information Resource
Apart from product consistency or particular unit-operation/machine performance, what statistical points should be considered when comparing/evaluating a change of sampling plans?

Does it involve OC curve comparison and the differences between the curves? Any example is much appreciated.

The following references cover the 'how to create/assess' but not the justification or evaluation.
Or maybe I am missing some point here...! Please help.

http://www.ombuenterprises.com/LibraryPDFs/The_OC_Curve_of_Attribute_Acceptance_Plans.pdf
How do I create an accept on none (C=0) acceptance sampling plan? - Minitab
6.2.3.2. Choosing a Sampling Plan with a given OC Curve

#### Bev D

##### Heretical Statistician
Staff member
Super Moderator
The justification for changing sampling plans is that you need more protection (need to catch lower defective lots more frequently) or you need to reject fewer lots that have an acceptable defect rate.

ALL sampling plans are based on 4 things:
AQL: the maximum acceptable defect rate that can be shipped to your Customer
AQL Confidence: the percentage of the time that you want to accept lots at that AQL
RQL (aka LTPD): the minimum defect rate that is NOT acceptable to ship to your Customer
RQL Confidence: the percentage of the time that you want to detect (and reject) lots at that RQL.

Comparisons of OC curves help you to determine if you have met those 4 criteria.
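As an illustration of those 4 criteria (the plan and the AQL/RQL numbers below are hypothetical, not from this thread): for a single-sample attribute plan with sample size n and acceptance number c, the OC curve is the binomial probability P(X <= c) of accepting a lot at each defect rate p, and evaluating it at the AQL and the RQL tells you whether a candidate plan meets the criteria.

```python
# Sketch: check a hypothetical n=125, c=2 attribute plan against
# illustrative criteria (AQL = 0.65%, RQL = 5%). All numbers are examples.
from math import comb

def accept_prob(n, c, p):
    """P(accept) = P(X <= c) for X ~ Binomial(n, p): the OC curve at defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 125, 2
aql, rql = 0.0065, 0.05

pa_aql = accept_prob(n, c, aql)   # want this HIGH (e.g. >= 0.95): accept good lots
pa_rql = accept_prob(n, c, rql)   # want this LOW  (e.g. <= 0.10): reject bad lots
print(f"P(accept | p = AQL) = {pa_aql:.3f}")
print(f"P(accept | p = RQL) = {pa_rql:.3f}")
```

Plotting `accept_prob` over a range of p values for two candidate plans gives exactly the OC-curve comparison discussed above: the better plan is the one whose curve stays high at the AQL and drops low at the RQL.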

#### Mike S.

##### Happy to be Alive
Trusted Information Resource
Why are you considering changing sampling plans?

Another reason you may want to change plans is because a customer demands you sample per standard X but it provides you little or no value, so you want to comply with as little waste as possible.

Or perhaps you have a "we've always done it this way" situation but your sampling plan is overly complex or you feel it is overkill but need to justify changes to the boss/customer.

#### v9991

Trusted Information Resource
Why are you considering changing sampling plans?
Two scenarios:
1. we were cited for not having a 'risk assessment' in support of the AQL defect list and sampling plan/sizes
2. we want to move away from the 'rule of thumb' that is practiced in a certain category of products.

I have found a couple of articles on the formula and approach, but none outlining the SOP or risk assessment, etc. :-(

#### Bev D

##### Heretical Statistician
Staff member
Super Moderator
I’m not sure what your ‘auditor’ was thinking when he/she cited a lack of risk assessment. The ‘risk assessment’ is very basic: what is the severity of the effect of the ‘defect’? Then what is the maximum tolerable defect rate that your Customer can/will/could tolerate for that defect? This maximum is the RQL (rejectable quality level) of the plan (the RQL is too often ignored for statistical sampling plans - but it is the most important number to the Customer). The AQL is then the defect rate that the Customer can easily tolerate without requiring screening or rework.

The other hidden statistical basis is that sample size is not related to the lot size - but since this approach is so ingrained in the standards and our collective mythology enshrouding how we’ve always done it, most people simply overlook it.

Unfortunately in today’s environment the tolerable defect rate for most defects is zero....

#### v9991

Trusted Information Resource
Then what is the maximum tolerable defect rate that your Customer can/will/could tolerate for that defect? This maximum is the RQL of the plan (the RQL is too often ignored for statistical sampling plans - but it is the most important number to the Customer). The AQL is then the defect rate that the Customer can easily tolerate without requiring screening or rework.
This is clear at the definition level, but the question is about 'determining' the exact/approximate value, especially when the end customer is a 'patient'; we are drug product manufacturers (combination products), and we have to have a 'decision tree' or 'criteria' for assigning it.

The other hidden statistical basis is that sample size is not related to the lot size - but since this approach is so ingrained in the standards and our collective mythology enshrouding how we’ve always done it, most people simply overlook it.
I hope you are referring to the OC curve using the binomial distribution; BUT
it makes a lot of sense to adopt the 'hypergeometric distribution' for assessing the effect of a given RQL/AQL level for any sample-size-vs-lot-size combination.
Can you please elaborate?

Unfortunately in today’s environment the tolerable defect rate for most defects is zero....
This is true, but it must be seen in the context of multiple checks and balances, starting from design and process controls;
if we really have to reflect a zero defect rate in QC checks, then the sample size is going to be very high.

#### Bev D

##### Heretical Statistician
Staff member
Super Moderator
Why do you need a decision tree? Why do you need criteria? Are you not capable of logical thought? Are you not capable of discussing with your Customer what the effects of any defect are? You are under thinking this and trying to over proceduralize it. Think. There is no cookbook answer to this.

The Hypergeometric approach only applies when the lot is fairly small. Most lots are larger than those that would require the hypergeometric, and all of the ‘popular’ sampling plans are based on the AQL and lot size...
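To make the lot-size point concrete, here is a sketch (the plan n=50, c=0 and the 2% defect rate are illustrative, not from the thread) comparing the exact hypergeometric acceptance probability against the binomial approximation as the lot grows; the two converge quickly once the lot is much larger than the sample.

```python
# Compare exact (finite-lot, without replacement) vs binomial (infinite-lot)
# acceptance probabilities for a hypothetical c=0 plan.
from math import comb

def pa_binomial(n, c, p):
    """P(accept) treating draws as independent (infinite-lot approximation)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def pa_hypergeom(N, D, n, c):
    """Exact P(accept): sample n without replacement from a lot of N with D defectives."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

n, c, p = 50, 0, 0.02          # hypothetical plan: n=50, accept on zero defects
for N in (100, 500, 5000):     # lot sizes; defectives D = N * p
    D = round(N * p)
    print(f"N={N:5d}: hypergeometric={pa_hypergeom(N, D, n, c):.3f}  "
          f"binomial={pa_binomial(n, c, p):.3f}")
```

For the N=100 lot (half of it sampled) the exact probability is far below the binomial value, which is why the hypergeometric matters for small lots; by N=5000 the difference is negligible, which is the sense in which sample size stops depending on lot size.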

Yes - zero defect rates would require 100% inspection. Too bad. This is the world of today. Inspection won’t get you there - you need to have capable processes and process controls and mistake proofing....

#### v9991

Trusted Information Resource
Why do you need a decision tree? Why do you need criteria? Are you not capable of logical thought? Are you not capable of discussing with your Customer what the effects of any defect are? You are under thinking this and trying to over proceduralize it. Think. There is no cookbook answer to this.
True,
1) the attempt is to proceduralize it, as there are at least 10 teams who have to work with it on different products.
2) e.g., we presented it as 95% probability, and the agency's comment was to tighten it... now the question is whether it should be 97.5%, 98%, 99%, 99.5%, or 99.9%!
3) another reason is that the tug-of-war in "optimizing" the sample size, considering the load on quality control vs. patient impact (in our case, the FDA reviewer), is leading to repetitive work on the same topic.

The Hypergeometric approach only applies when the lot is fairly small. Most lots are larger than those that would require the hypergeometric, and all of the ‘popular’ sampling plans are based on the AQL and lot size...
Yes - zero defect rates would require 100% inspection. Too bad. This is the world of today. Inspection won’t get you there - you need to have capable processes and process controls and mistake proofing....
A follow-through question: can mistake proofing and process controls support an argument for lower confidence levels?!

#### optomist1

##### A Sea of Statistics
Trusted Information Resource
well said..."Yes - zero defect rates would require 100% inspection. Too bad. This is the world of today. Inspection won’t get you there - you need to have capable processes and process controls and mistake proofing.... "

If one does not "effectively" control one's inputs, what chance does one have to control the output(s)? The "More Inspection" mantra is mostly a financial and operational sinkhole.

#### Bev D

##### Heretical Statistician
Staff member
Super Moderator
Multiple teams: the most effective way to drive consistent and effective sampling schemes is not writing a document for them to follow. A brief outline of the basic steps, such as determining the defects (failures) and their severity for the patient/Customer, etc., is good practice. However, the really effective action is to have the subject matter expert coach the teams. This way they learn and you get good results. Otherwise the teams only remember what to do and you have not improved the collective knowledge of your organization.

High confidence levels: this is something that regulatory reviewers often chase. The FDA is no different. Again the confidence levels are increasing beyond 95% as defects of any kind continue to become less and less tolerated. All of this leads to larger and larger sampling sizes. The only remedy here is to have open and honest communication with your reviewer and the stats group.

As stated before, the only remedy for larger sample sizes is to have well characterized processes (KNOW how your inputs affect your outputs) and to implement effective process controls. Note that 'controls' is plural; a single process control is not sufficient.

A better approach than random sampling at the end is to perform targeted sampling with small sample sizes: when you understand how your processes vary, you can take small samples at the change points to determine the state of the process. Well characterized and controlled processes rarely have random defects. These processes tend to produce systemic defects (not always, but often) such that once a defect is produced, the process continues to produce defects until it is corrected. Also remember that SPC will detect shifts and trends before defects are created.

For truly random defects - usually related to operator errors or the inherent incapability of processes (in other words, not well characterized or controlled) - the remedies are 1. mistake proofing to prevent the error or catch the defect and 2. fixing the process. While all of this sounds like hard work, it is not as hard or expensive as inspection with large sample sizes, remediation of defective lots, and dealing with escapes to your Customers. In fact it is nothing but good engineering and science.

Will the FDA - or any regulatory agency - accept this instead of huge sample sizes? Sometimes. Sometimes they won’t immediately. They need proof and trust. You must invest in ongoing communication and partnership with them. You must characterize and control your processes. You must care about quality. Unfortunately many - not all - of these regulatory agencies are way behind the times when it comes to Quality. Their default ‘control’ is inspection. And that’s the fault of business. As long as we don’t characterize and control our processes, as long as we make short term ‘economic’ decisions about known quality defects these agencies will dig in to the only defense they have: oppressive inspections. It is easy for them to audit and maintain legal recourse in the event of an escape.
