Handling Out-of-Specification Results: FDA's Guidance for Industry


superkidz

I was assigned to prepare an SOP for handling out-of-specification/questionable results, but I'm having a hard time finding the right references. I came across FDA's guidance for industry, but there's no specific approach as to the number of retests to be done. I know it's a case-by-case situation, but I would appreciate it if someone could give me the right reference. A sample SOP would be very much appreciated.
 

Ronen E

Problem Solver
Moderator
Re: Out-of-specification: killing me

superkidz said:
I was assigned to prepare an SOP for handling out-of-specification/questionable results... A sample SOP would be very much appreciated.

Perhaps this?...

http://ec.europa.eu/health/files/eudralex/vol-4/pdfs-en/2005_10_chapter_6_en.pdf

(it's taken from here: http://ec.europa.eu/health/documents/eudralex/vol-4/index_en.htm)

Could also try this:

http://www.ich.org/fileadmin/Public.../Guidelines/Quality/Q7/Step4/Q7_Guideline.pdf

(from here: http://www.ich.org/products/guidelines/quality/article/quality-guidelines.html)
 

Statistical Steven

Statistician
Leader
Super Moderator
superkidz said:
I was assigned to prepare an SOP for handling out-of-specification/questionable results... A sample SOP would be very much appreciated.

There is no definitive reference I have seen on the topic. I would recommend you pay close attention to the FDA guidance with regard to averaging and outliers ("Outlier tests have no applicability in cases where the variability in the product is what is being assessed, such as for content uniformity, dissolution, or release rate determinations. In these applications, a value perceived to be an outlier may in fact be an accurate result of a nonuniform product."). Typical best practice from SOPs I have seen or written is a minimum of twice the number of initial tests. I also set my retest limits tighter than the original acceptance criteria.

Just one approach
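
For illustration only, here is a minimal sketch (Python; the data are made up) of a two-sided Grubbs' test, the kind of normality-based outlier test the guidance is referring to. As the passage quoted above says, it must not be applied where product variability itself is being assessed (content uniformity, dissolution, release rate):

import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.05):
    """Flag the most extreme replicate via a two-sided Grubbs' test."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)   # Grubbs statistic
    suspect = x[np.argmax(np.abs(x - x.mean()))]
    # Critical value derived from the t distribution
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return suspect, g > g_crit

# Five assay results (% label claim); the last one looks aberrant
print(grubbs_test([99.8, 100.1, 99.6, 100.3, 95.2]))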
 

BradM

Leader
Admin
To Steven's point, here is some more information that might be helpful:

http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm281843.htm

In your response, you state that there are controls in place to control variability in the process and in the final product. These controls and variability should have been prospectively assessed through completion of successful process validation studies. In addition, you reference the Cpk values for processes using a (b)(4) versus the processes using the (b)(4). Your response is inadequate because a Cpk value alone is not an appropriate metric to demonstrate statistical equivalence. Cpk analysis requires a normal underlying distribution and a demonstrated state of statistical process control (ASTM E2281). Statistical equivalence between the (b)(4) and (b)(4) could be demonstrated using either parametric or non-parametric (based on distribution analysis) approaches (comparing means and variances). Your response to Observation #1 does not utilize either of these approaches, and lacks the proper analysis to support your conclusion that no significant differences existed between the two (b)(4) processes.
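
As a hedged illustration of what the letter is asking for (comparing means and variances rather than relying on Cpk alone), here is a minimal Python sketch; the data and the specific test choices (Shapiro-Wilk for normality, Welch's t-test and Levene's test as the parametric route, Mann-Whitney as a non-parametric fallback) are my own assumptions, not prescribed by the letter:

import numpy as np
from scipy import stats

process_a = np.array([99.1, 100.4, 99.8, 100.9, 99.5, 100.2])
process_b = np.array([98.7, 99.9, 100.6, 99.2, 100.1, 99.4])

# Cpk assumes a normal distribution (and statistical control, per ASTM E2281)
print("normality p-values:", stats.shapiro(process_a).pvalue,
      stats.shapiro(process_b).pvalue)

# Parametric route: compare means (Welch) and variances (Levene)
_, p_means = stats.ttest_ind(process_a, process_b, equal_var=False)
_, p_vars = stats.levene(process_a, process_b)

# Non-parametric route if the distribution analysis argues against normality
_, p_nonpar = stats.mannwhitneyu(process_a, process_b)

print(f"means p={p_means:.3f}, variances p={p_vars:.3f}, "
      f"non-parametric p={p_nonpar:.3f}")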
 

v9991

Trusted Information Resource
OOS is handled at three levels (a minimal sketch of this decision flow appears after the list below).
1) At the first instance, find out whether it is a laboratory error.
1.1) Procedurally, it triggers an 'incident' and an analytical review/investigation. (Here you must have a detailed checklist/guidance for handling the 4Ms (man, material, machine, method), the kinds of test parameters, processes, etc.)
1.2) If a laboratory error is verified (root cause), look into the level/extent of impact of that cause on other analyses, batches, results, etc.
1.3) Then reconfirm the results through multiple/duplicate testing (across samples, analysts or equipment, depending on the kind of error noted).
1.4) Also ensure that appropriate CAPA is tracked and trended.
1.5) If a laboratory error is ruled out, it must trigger the next process reviews/investigations, involving the respective functions (usually technology and manufacturing, led by QA).

This phase can be seen either as 2) or as 1.5.1); people have different approaches.
2) It triggers a process review/investigation (with a detailed checklist/guidance to handle the various process controls: equipment, operations, area, etc.).
2.1) Broadly, again, there are two scenarios: assignable cause or non-assignable cause.
2.2) If it is an assignable cause, it is first assessed for the level and extent of impact.
2.3) Based on the kind of situation (a process error, sampling error, etc. was found), it leads to multiple/duplicate sampling and testing (similar to 1.2-1.3).

This is the toughest (and most frequent) situation, and it is effectively the third section of OOS handling:
2.4) If it is an unassignable cause, initiate a full-scale investigation, which includes extensive sampling, experimentation or hypothesis testing.
The point is that you have to conclude, with reasonable data, on the RCA and the impact on other batches/products, etc.
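
A minimal sketch of that decision flow (the phase names and return strings are just illustrative labels, not regulatory terms):

def handle_oos(lab_error_verified, assignable_process_cause):
    """Route an OOS result to the investigation phase described above."""
    if lab_error_verified:
        # 1.2-1.4: assess impact, reconfirm by multiple/duplicate
        # testing, track and trend CAPA
        return "phase 1: laboratory investigation"
    if assignable_process_cause:
        # 2.2-2.3: assess level/extent of impact, then resample/retest
        return "phase 2: process investigation (assignable cause)"
    # 2.4: unassignable cause - extensive sampling, experimentation,
    # hypothesis testing; conclude on RCA and batch/product impact
    return "phase 3: full-scale investigation"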

Apart from the above process steps, the procedure should also describe the responsibilities, communications, and documentation requirements.

This is the trickiest part: the depth and success of the investigation depend on the sincerity and seriousness of the team/management handling the problem. So get management involved, which works most of the time; but be sure to let them know the impact and risk, as that is the key to involving/influencing management.

hope that helps.

Loads of references are available... just in case you have not already seen them:
http://www.iagim.org/pdf/sop10.pdf
http://www.gmp-verlag.de/media/files/Dateien/OOS_Form-UD6.pdf
http://www.pharmchem.tu-bs.de/forschung/waetzig/dokumente/courtesy_translation.pdf
http://pharmtech.findpharma.com/pharmtech/data/articlestandard//pharmtech/032002/6989/article.pdf
...
And still the best source is 483s and warning letters, viz.:
http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm170912.htm
 

superkidz

v9991 said:
1.3) Then reconfirm the results through multiple/duplicate testing (across samples, analysts or equipment, depending on the kind of error noted).

I'm thinking of 4 retests by the original analyst and a second analyst (two tests per analyst, and each test consists of 2 preparations with 2 injections each).
If the retests of the original analyst and the second analyst meet the specification, the RSD between the two retests of each individual analyst is not more than 2%, and the difference between the results of the two analysts is not more than 2%, the first result will be invalidated.
The problem I see with the above: if the retests meet the specification but fail the 2% RSD for an individual analyst and/or the 2% difference between the two analysts, is there still a need to conduct another retest, and how? How will I interpret the result then?
How will the reporting on my certificate be done? Can I average all the retests so I can come up with a single result if required?
Any answer would be appreciated.
 

v9991

Trusted Information Resource
superkidz said:
I'm thinking of 4 retests by the original analyst and a second analyst (two tests per analyst, and each test consists of 2 preparations with 2 injections each)... Can I average all the retests so I can come up with a single result if required?

a) You have started in the right direction by employing a 'variation' criterion for repeat tests.
But how do you build an RSD from only two values? That is one reason why people look at triplicates and involve a third analyst: that way you can statistically account for both within- and between-analyst RSD criteria (a small worked sketch follows below).
But then, there is no single/common approach...
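
For example, here is a small sketch (Python; the numbers are made up, and the 2% limits simply mirror the scheme proposed in this thread, not any official requirement) of the within-analyst RSD on triplicates and the between-analyst difference:

import numpy as np

def rsd(values):
    """Percent relative standard deviation: sample SD / mean * 100."""
    x = np.asarray(values, dtype=float)
    return x.std(ddof=1) / x.mean() * 100

retests = {                          # % label claim, illustrative only
    "analyst 1": [99.4, 100.1, 99.7],
    "analyst 2": [100.3, 99.8, 100.0],
}

for analyst, results in retests.items():
    print(f"{analyst}: within-analyst RSD = {rsd(results):.2f}% "
          f"({'pass' if rsd(results) <= 2.0 else 'fail'})")

m = [np.mean(r) for r in retests.values()]
diff = abs(m[0] - m[1]) / np.mean(m) * 100       # % difference of means
print(f"between-analyst difference = {diff:.2f}% "
      f"({'pass' if diff <= 2.0 else 'fail'})")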

b) Once again, you pointed out the right thing: the aspect of the RSD varying, etc.
This is where it becomes difficult to fit into a standard flow chart or SOP; it depends on the test parameter being considered. The interpretation of variation in results for an assay is different from that for dissolution, moisture, impurities, etc.
Briefly, we need to decide upon the next course of action, viz., to see whether it is a sampling error or really an indication of process variability.

c) Which results should be reported?
Averaging is actively discouraged; it is appropriate to report the correct result with an * indication/traceability to the incident or OOS.
The next part is which result is to be reported: the result from the repeat analysis by the 1st analyst is reported (remember, the 2nd analyst is only a reference).


d) Remember, the above points relate to 1.3; the same approach need not be followed for 2.3! The current trend is an emphasis on "hypothesis testing", which will determine the course of the investigation, the conclusions (CAPA), and the reporting.

hope that helps.
 
S

superkidz

v9991 said:
Averaging is actively discouraged; it is appropriate to report the correct result with an * indication/traceability to the incident or OOS. ... d) Remember, the above points relate to 1.3; the same approach need not be followed for 2.3! The current trend is an emphasis on "hypothesis testing" ...

Thanks for the reply. It's clear to me now why triplicate analysis is needed; perhaps I will increase to 3 preparations and 3 injections per analyst and will also consider involving a 3rd analyst, for a total of 9 retests. The 2% RSD for the 3 preparations and the 2% difference between the three analysts will be retained.

My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then? Involving a 4th analyst?

One more thing: the 1st retest of the original analyst will be reported with * indication/traceability. Would I need to put that annotation on the report?

Thanks in advance
 

v9991

Trusted Information Resource
superkidz said:
My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then? Involving a 4th analyst?

DO NOT get into the TRAP of "testing until it passes"; this is the time when you need to focus on where this variation (RSD) is originating from. Defining/identifying this very aspect will lead you to the number of analyses to be performed. Simply put, the RCAs discovered through the investigation will determine the number of multiple/duplicate analyses required. (Refer to the comment on hypothesis testing in my earlier response.)




superkidz said:
One more thing: the 1st retest of the original analyst will be reported with * indication/traceability. Would I need to put that annotation on the report?
Yes, at least on the analytical report; and preferably yes on the batch release certificate/report.



superkidz said:
The 2% RSD for the 3 preparations and the 2% difference between the three analysts...

You must have a solid justification for these acceptance criteria (the 2% figures), because you may not always achieve them for all test parameters. Consider blend uniformity, moisture content, related substances, or residual solvents: their system suitability and respective acceptance criteria are different, right? So the point is, it depends on the test parameter, the process attribute it defines, and the analytical technique as well.



superkidz said:
My concern now is: what if it fails the above 2% RSD and 2% difference but passes the specifications? You mentioned that the same approach need not be followed for the second retest. What would be ideal then? Involving a 4th analyst?

Are you referring to my statement?
v9991 said:
d) Remember, the above points relate to 1.3; the same approach need not be followed for 2.3! The current trend is an emphasis on "hypothesis testing", which will determine the course of the investigation, the conclusions (CAPA), and the reporting.
What I meant here is that, in the case of an analytical error, you have the liberty of overcoming it by re-confirming your problems; BUT when it comes to the process, re-testing WILL NOT be the criterion for resolving the issue. You have to pinpoint the reason for the "variation" which has led to the OOS result; that knowledge/confirmation of the variation will also tell you the impact, which leads to the decision on the conclusion of the OOS (batch reject or release, etc.).

Also, let me emphasize once again that the impact on other batches, analyses, results, etc. needs to be closely evaluated.
 

superkidz

v9991 said:
You must have a solid justification for these acceptance criteria (the 2% figures), because you may not always achieve them for all test parameters. ...





v9991 said:
What I meant here is that, in the case of an analytical error, you have the liberty of overcoming it by re-confirming your problems; BUT when it comes to the process, re-testing WILL NOT be the criterion for resolving the issue. ...

We are only TESTING, not manufacturing, the product, so we must make sure that the variation is not due to laboratory error. The 2% RSD comes from the system suitability, and the 2% difference between analysts was derived from that (with a different approach, of course, for dissolution and content uniformity). Since there would be a total of 10 tests (1 original and 9 retests, which come from a total of 27 retest injections), would it be safe to say that if 2/3 of the 10 tests pass the specification (assuming some retests fail the 2% RSD and/or the 2% difference and/or the specification) and the average of the 10 tests passes the specification, then we can still conclude that the sample passes the tests? The average of the ten would then be reported in the test report. The original test result would still be invalidated if all retests pass the specs, the 2% RSD, and the 2% difference.

Would this be OK? I would like to reiterate that we are only TESTING products.
 