Handling Out-of-Specification Results: FDA's Guidance for Industry


superkidz

#1
I was assigned to prepare an SOP for handling out-of-specification/questionable results, but I am having a hard time finding the right references. I came across FDA's guidance for industry, but it gives no specific approach as to the number of retests to be done. I know it's a case-by-case situation, but I would appreciate it if someone could point me to the right reference. A sample SOP would be very much appreciated.
 

Ronen E

Problem Solver
Moderator
#2
Re: Out-of-specification: killing me

I was assigned to prepare an SOP for handling out-of-specification/questionable results, but I am having a hard time finding the right references. I came across FDA's guidance for industry, but it gives no specific approach as to the number of retests to be done. I know it's a case-by-case situation, but I would appreciate it if someone could point me to the right reference. A sample SOP would be very much appreciated.
Perhaps this?...

http://ec.europa.eu/health/files/eudralex/vol-4/pdfs-en/2005_10_chapter_6_en.pdf

(it's taken from here: http://ec.europa.eu/health/documents/eudralex/vol-4/index_en.htm)

Could also try this:

http://www.ich.org/fileadmin/Public.../Guidelines/Quality/Q7/Step4/Q7_Guideline.pdf

(from here: http://www.ich.org/products/guidelines/quality/article/quality-guidelines.html)
 

Statistical Steven

Statistician
Leader
Super Moderator
#3
I was assigned to prepare an SOP for handling out-of-specification/questionable results, but I am having a hard time finding the right references. I came across FDA's guidance for industry, but it gives no specific approach as to the number of retests to be done. I know it's a case-by-case situation, but I would appreciate it if someone could point me to the right reference. A sample SOP would be very much appreciated.
There is no definitive reference I have seen on the topic. I would recommend you pay close attention to the FDA guidance with regard to averaging and outliers ("Outlier tests have no applicability in cases where the variability in the product is what is being assessed, such as for content uniformity, dissolution, or release rate determinations. In these applications, a value perceived to be an outlier may in fact be an accurate result of a nonuniform product."). Typical best practices from SOPs I have seen or written are a minimum of twice the number of initial tests. I also set my retest limits tighter than the original acceptance criteria.

Just one approach
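As a minimal sketch of the tightened-retest-limit idea above, here is one way it could look in code. All the numbers (spec of 95-105%, retest limits of 96-104%) are hypothetical illustrations, not from any guidance:

```python
# Hypothetical illustration only: spec and retest limits are invented numbers.
SPEC = (95.0, 105.0)    # original acceptance criteria, % of label claim
RETEST = (96.0, 104.0)  # tighter retest limits (made up for illustration)

def within(limits, value):
    lo, hi = limits
    return lo <= value <= hi

def evaluate_retests(results, n_initial=1):
    """Require at least twice the initial number of tests, and every
    retest inside the tightened limits (per the practice described above)."""
    if len(results) < 2 * n_initial:
        raise ValueError("fewer retests than twice the initial tests")
    return all(within(RETEST, r) for r in results)

print(evaluate_retests([99.1, 100.4]))  # True: both inside 96-104
```

The point of the tighter band is that a retest barely scraping past the original spec is weak evidence that the first result was a laboratory error.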
 

BradM

Leader
Admin
#4
To Steven's point, here is some more information that might be helpful:

http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm281843.htm

In your response, you state that there are controls in place to control variability in the process and in the final product. These controls and variability should have been prospectively assessed through completion of successful process validation studies. In addition, you reference the Cpk values for processes using a (b)(4) versus the processes using the (b)(4). Your response is inadequate because a Cpk value alone is not an appropriate metric to demonstrate statistical equivalence. Cpk analysis requires a normal underlying distribution and a demonstrated state of statistical process control (ASTM E2281). Statistical equivalence between the (b)(4) and (b)(4) could be demonstrated using either parametric or non-parametric (based on distribution analysis) approaches (comparing means and variances). Your response to Observation #1 does not utilize either of these approaches, and lacks the proper analysis to support your conclusion that no significant differences existed between the two (b)(4) processes.
 

v9991

Trusted Information Resource
#5
OOS is handled at three levels.
1) At the first instance, find out whether it is a laboratory error.
1.1) Procedurally, it triggers an 'incident' and an analytical review/investigation. (Here you must have a detailed checklist/guidance for handling the 4Ms (man, material, machine, method), the various kinds of test parameters, processes, etc.)
1.2) If a laboratory error is verified (root cause), look into the level/extent of impact of that cause on other analyses, batches, results, etc.
1.3) Then reconfirm the results through multiple/duplicate testing (varying samples, analysts, or equipment, depending on the kind of error noted).
1.4) Also ensure that appropriate CAPA is tracked and trended.
1.5) If a laboratory error is ruled out, it must trigger the next level of process reviews/investigations, involving the respective functions (usually technology and manufacturing, led by QA).

This next phase can be seen either as 2) or as 1.5.1); people have different approaches.
2) It triggers a process review/investigation (with a detailed checklist/guidance to handle the various process controls: equipment, operations, area, etc.).
2.1) Broadly again, two scenarios: assignable cause or non-assignable cause.
2.2) If it is an assignable cause, it is first assessed for level and extent of impact.
2.3) Based on the kind of situation (a process error, a sampling error, etc.), it leads to multiple/duplicate sampling and testing (similar to 1.2-1.3).

This is the toughest (and most frequent) situation, and is effectively the third section of OOS handling.
2.4) If it is an unassignable cause, initiate a full-scale investigation, which includes extensive sampling, experimentation, or hypothesis testing.
The point is that you have to conclude with reasonable data about the RCA and the impact on other batches/products.

Apart from the process steps above, the procedure ought to describe the responsibilities, communications, and documentation requirements.

This is the trickiest part: the depth and success of the investigation depend on the sincerity and seriousness of the team/management handling the problem. So get management involved, which works most of the time; but be sure to let them know the impact and risk, as that is the key to involving/influencing management.

Hope that helps.
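The phased flow described above can be sketched roughly in code. The step labels and outcomes below are illustrative paraphrases of the post, not regulatory terminology:

```python
# Rough sketch of the two-phase OOS investigation flow described above.
# Step names and outcome strings are illustrative only.

def investigate_oos(lab_error_found: bool, assignable_process_cause: bool):
    """Return (investigation steps, disposition note) for one OOS result."""
    steps = ["log incident", "phase 1: laboratory investigation"]
    if lab_error_found:
        # 1.2 - 1.4: confirmed lab error
        steps += ["assess impact on other analyses/batches",
                  "reconfirm by multiple/duplicate testing",
                  "track and trend CAPA"]
        return steps, "invalidate original result once the error is confirmed"
    # 1.5 / 2): lab error ruled out, escalate to process investigation
    steps += ["phase 2: process investigation (QA-led, with manufacturing)"]
    if assignable_process_cause:
        steps += ["assess level and extent of impact",
                  "targeted resampling/retesting per root cause"]
        return steps, "disposition based on root cause and impact"
    # 2.4: no assignable cause found
    steps += ["full-scale investigation: extended sampling, hypothesis testing"]
    return steps, "conclude with documented RCA and batch impact"
```

Encoding the flow this way makes the branch points (lab error? assignable cause?) explicit, which is essentially what a good OOS flowchart in the SOP should do.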

Loads of references are available, just in case you have not already seen them:
http://www.iagim.org/pdf/sop10.pdf
http://www.gmp-verlag.de/media/files/Dateien/OOS_Form-UD6.pdf
http://www.pharmchem.tu-bs.de/forschung/waetzig/dokumente/courtesy_translation.pdf
http://pharmtech.findpharma.com/pharmtech/data/articlestandard//pharmtech/032002/6989/article.pdf
...
And still the best source is 483s and warning letters, e.g.:
http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm170912.htm
 

superkidz

#6
1.3) Then reconfirm the results through multiple/duplicate testing (varying samples, analysts, or equipment, depending on the kind of error noted).
I'm thinking of 4 retests by the original analyst and a second analyst (two tests per analyst, each test consisting of 2 preparations with 2 injections each).
If the retests of the original analyst and second analyst meet the specification, the RSD between the two retests of each individual analyst is not more than 2%, and the difference between the results of the two analysts is not more than 2%, the first result will be invalidated.
The problem I see with the above: if the retests meet the specification but fail the 2% RSD for an individual analyst and/or the 2% difference between the two analysts, is there still a need to conduct another retest, and how? How will I interpret the result then?
How will the reporting on my certificate be handled? Can I average all the retests to come up with a single result if required?
Any answer would be appreciated.
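For the 2% criteria, a small sketch of how the numbers themselves might be computed (standard %RSD and %difference formulas; the retest data and the 2% thresholds are illustrative, and the criteria are yours to justify):

```python
import statistics

def pct_rsd(values):
    """Percent relative standard deviation: 100 * s / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def pct_diff(a, b):
    """Percent difference between two analysts' mean results,
    relative to their overall mean."""
    m = (a + b) / 2.0
    return 100.0 * abs(a - b) / m

analyst1 = [99.2, 100.1]   # two retests, analyst 1 (invented data)
analyst2 = [100.5, 99.8]   # two retests, analyst 2 (invented data)

ok = (pct_rsd(analyst1) <= 2.0
      and pct_rsd(analyst2) <= 2.0
      and pct_diff(statistics.mean(analyst1),
                   statistics.mean(analyst2)) <= 2.0)
```

Note that with only two values per analyst, the "RSD" is driven entirely by a single difference, which is part of the objection raised in the next reply.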
 

v9991

Trusted Information Resource
#7
I'm thinking of 4 retests by the original analyst and a second analyst (two tests per analyst, each test consisting of 2 preparations with 2 injections each).
If the retests of the original analyst and second analyst meet the specification, the RSD between the two retests of each individual analyst is not more than 2%, and the difference between the results of the two analysts is not more than 2%, the first result will be invalidated.
The problem I see with the above: if the retests meet the specification but fail the 2% RSD for an individual analyst and/or the 2% difference between the two analysts, is there still a need to conduct another retest, and how? How will I interpret the result then?
How will the reporting on my certificate be handled? Can I average all the retests to come up with a single result if required?
Any answer would be appreciated.
a) You have started in the right direction by employing a 'variation' criterion for repeat tests.
But how do you build an RSD from two values? That is one reason why people use triplicates and involve a third analyst; that way you can statistically account for within- and between-analyst RSD criteria.
But then, there is no single/common approach.
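With triplicates per analyst you can separate within-analyst and between-analyst variation, as suggested above. A minimal sketch (pooling assumes equal replicate counts per analyst; all data invented):

```python
import statistics

def within_rsd(replicates):
    """Pooled within-analyst %RSD from each analyst's replicate set.
    Pooling by averaging variances assumes equal group sizes."""
    pooled_var = statistics.mean(statistics.variance(r) for r in replicates)
    grand_mean = statistics.mean(x for r in replicates for x in r)
    return 100.0 * pooled_var ** 0.5 / grand_mean

def between_rsd(replicates):
    """%RSD of the analysts' mean results (between-analyst variation)."""
    means = [statistics.mean(r) for r in replicates]
    return 100.0 * statistics.stdev(means) / statistics.mean(means)

data = [[99.5, 100.2, 99.8],   # analyst 1 (invented numbers)
        [100.4, 99.9, 100.1],  # analyst 2
        [99.7, 100.0, 100.3]]  # analyst 3
```

With two values per analyst the within-analyst statistic collapses to a single difference; three or more replicates give the variance estimate some degrees of freedom, which is why triplicates are the common choice.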

b) Once again, you pointed out the right thing: the aspect of the RSD varying.
This is where it becomes difficult to fit everything into a standard flowchart or SOP; it depends on the test parameter being considered. The interpretation of variation in results for an assay is different from that for dissolution, moisture, or impurities.
Briefly, we need to decide upon the next course of action, viz., to see whether it is a sampling error or really an indication of process variability.

c) Which result to report:
Averaging is actively discouraged; it is appropriate to report the correct result with an asterisk (*) indicating traceability to the incident or OOS.
As to which result to report: the result from the repeat analysis by the first analyst should be reported (remember, the second analyst is only a reference).

d) Remember, the points above relate to 1.3; 2.3 need not follow the same approach! The current trend is an emphasis on hypothesis testing, which will determine the course of the investigation, the conclusions (CAPA), and the reporting.

Hope that helps.
 

superkidz

#8
Averaging is actively discouraged; it is appropriate to report the correct result with an asterisk (*) indicating traceability to the incident or OOS.
As to which result to report: the result from the repeat analysis by the first analyst should be reported (remember, the second analyst is only a reference).

Remember, the points above relate to 1.3; 2.3 need not follow the same approach! The current trend is an emphasis on hypothesis testing, which will determine the course of the investigation, the conclusions (CAPA), and the reporting.
Thanks for the reply. It's clear to me now why triplicate analysis is needed; perhaps I will increase to 3 preparations and 3 injections per analyst, and will also consider involving a third analyst, for a total of 9 retests. The 2% RSD for the 3 preparations and the 2% difference between the three analysts will be retained.

My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then: involving a 4th analyst?

One more thing: the 1st retest of the original analyst will be reported with an * indicating traceability. Would I need to put that annotation on the report?

Thanks in advance
 

v9991

Trusted Information Resource
#9
My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then: involving a 4th analyst?
DO NOT fall into the TRAP of "testing until it passes". This is the time to focus on where this variation (RSD) is originating from. Defining/identifying that very aspect will lead you to the number of analyses to be performed. Simply put, the root causes discovered through the investigation will determine the number of multiple/duplicate analyses required. (Refer to the comment on hypothesis testing in my earlier response.)




One more thing: the 1st retest of the original analyst will be reported with an * indicating traceability. Would I need to put that annotation on the report?
Yes, at least on the analytical report; and
preferably yes on the batch release certificate/report.



2% RSD for the 3 preparations and the 2% difference for the three analysts...

You must have a solid justification for these acceptance criteria (the 2% numbers), because you may not always achieve them for all test parameters. Consider blend uniformity, moisture content, related substances, or residual solvents: their system suitability and respective acceptance criteria are all different, right? So the point is, it depends on the test parameter, the process attribute it defines, and the analytical technique as well.



My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then: involving a 4th analyst?
Are you referring to my statement?
v9991 said:
d) Remember, the points above relate to 1.3; 2.3 need not follow the same approach! The current trend is an emphasis on hypothesis testing, which will determine the course of the investigation, the conclusions (CAPA), and the reporting.
What I meant here is that, in the case of an analytical error, you have the liberty of resolving the problem by reconfirming it; BUT when it comes to the process, retesting will NOT be the criterion for resolving the issue. You have to pinpoint the reason for the variation that led to the OOS result; that knowledge/confirmation of the variation will also tell you the impact, which leads to the decision on the conclusion of the OOS (batch reject or release, etc.).

Also, let me emphasize once again that the impact on other batches, analyses, results, etc. needs to be closely evaluated.
 

superkidz

#10
You must have a solid justification for these acceptance criteria (the 2% numbers), because you may not always achieve them for all test parameters. Consider blend uniformity, moisture content, related substances, or residual solvents: their system suitability and respective acceptance criteria are all different, right? So the point is, it depends on the test parameter, the process attribute it defines, and the analytical technique as well.





What I meant here is that, in the case of an analytical error, you have the liberty of resolving the problem by reconfirming it; BUT when it comes to the process, retesting will NOT be the criterion for resolving the issue. You have to pinpoint the reason for the variation that led to the OOS result; that knowledge/confirmation of the variation will also tell you the impact, which leads to the decision on the conclusion of the OOS (batch reject or release, etc.).
We are only TESTING, not manufacturing, the product, so we must make sure that the variation is not due to laboratory error. The 2% RSD would come from system suitability, and the 2% difference between analysts follows from it (with a different approach, of course, for dissolution and content uniformity). Since there would be a total of 10 tests (1 original and 9 retests, from a total of 27 retest injections), would it be safe to say that if 2/3 of the 10 tests pass the specification (assuming some retests fail the 2% RSD and/or the 2% difference and/or the specification) and the average of the 10 tests passes the specification, then we can still conclude that the sample passes? The average of the ten would then be reported in the test report. The original test result would still be invalidated if all retests pass the specs, the 2% RSD, and the 2% difference.

Would this be OK? I would like to reiterate that we are only TESTING products.
 