Risk Analysis for moving manufacturing equipment

tebusse

Involved In Discussions
Hi everyone,

The company for which I work is currently the legal owner of a class II medical device and conducts all of the manufacturing (cGMP) on-site in a cleanroom. We are preparing to move into another facility with a cleanroom and I have been informed by our regulatory consultant that it may be wise to prepare a risk analysis for equipment, networked files, etc.

Has anyone ever prepared a risk analysis for moving a facility? Does anyone have an example?

Any suggestions, ideas on how to get started, etc. would be most helpful.

Tonia
 

Ronen E

Problem Solver
Moderator
Hi Tonia,

From a wider perspective, you'd need to validate your new facility / production line. Risk management is a prerequisite for an effective and efficient validation. The move itself is just one aspect of the risk management.

Now, the questions you ask should be directed at your regulatory consultant. If he/she only tells you the "what" and doesn't provide any support on the "how", then you're not getting what you need from that consultant, are you?...

Cheers,
Ronen.
 

pldey42

I've done risk assessments in the software industry where, at the time, there were no models I was aware of that we could use. We found that the best way to do it was to get (representatives of) everyone involved together, map out the process we were concerned about, and then ask everyone to brainstorm the risks. In the brainstorming, no analysis was allowed - that came later - so that nobody felt constrained about which risks to suggest, no matter how outlandish. Once the potential risks were identified, we went through them to consider impact and likelihood so we could prioritise the risk mitigations.

I once saw a client (for whom I was working on other things) do something like this when they decided to move a big server from one data centre to another.

If it were me I would involve the manufacturing specialists and the movers, list all the equipment to be moved, and identify the elements of risk - contamination, physical damage, calibration issues, partial or total loss, etc. I'd bring everyone together and map out the moving process - who moves it, when, how, vehicles, routes and so forth. Then ask them to brainstorm the risks, and so forth. I'd invite the consultant too, hoping he or she has experience of other such moves that can be shared.
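For what it's worth, the "score and sort" step of that approach can be sketched in a few lines of Python. The risk entries and the 1-5 scores below are invented purely for illustration - a real register would come out of the brainstorming session, not from a script:

```python
# Minimal risk-register sketch: brainstormed risks are scored afterwards
# for impact and likelihood (1 = low, 5 = high), then sorted so the
# highest-priority risks get mitigation attention first.

def prioritise(risks):
    """Sort risks by impact x likelihood, highest first."""
    return sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True)

# Hypothetical entries from a brainstorming session (scores are illustrative).
register = [
    {"risk": "Contamination during transit", "impact": 5, "likelihood": 2},
    {"risk": "Calibration lost in handling", "impact": 4, "likelihood": 4},
    {"risk": "Physical damage to equipment", "impact": 4, "likelihood": 3},
    {"risk": "Partial loss of networked files", "impact": 3, "likelihood": 2},
]

for r in prioritise(register):
    print(f"{r['impact'] * r['likelihood']:>2}  {r['risk']}")
```

The point of the separate scoring pass is exactly what's described above: brainstorm first with no analysis, then score and rank so the mitigation effort goes where it matters most.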

If it's moving across international borders I'd also consider the border control risks. A friend of mine was responsible for importing a huge block of granite (for use as a table for sensitive measuring gear) across Europe and noticed it never seemed to weigh the right amount at several international borders, because the border guards needed what they called a "tip". Such things need to be budgeted for if they're going to be paid, and if the move is time-sensitive, unplanned delays could be a risk.

Hope this helps,
Pat
 

Ronen E

Problem Solver
Moderator
Hi,

Just to highlight that in "moving" medical devices production lines the risks can be totally unrelated to actually moving anything from A to B. Even if site A was shut down for good, and a completely new set-up was erected at B (without moving anything but the formal production knowledge, the "DMR"), it would still be considered a "move", with a consequent need for validation and risk management.
 

Peter Selvey

Leader
Super Moderator
It's probably worth being clear about "risk" here, as there are two very different points of view.

The first is the general concern of moving production from point A to point B which has nothing to do with regulations. It is clear that there are serious risks involved, and actions which can be taken to minimize the risk (for example, asking engineers involved in the original production to help verify the set up and final product in the new location).

As for trying to predict the things that can go wrong - that's a mug's game. The possibilities are endless, literally infinite. And - as I found just this week - the new location may even uncover product issues that existed before the move that you didn't know about. This is the messy real world, where regulations are not intended to apply, at least not directly.

The regulatory world is more defined, and tied to the discrete medical device. For each type or model there should be a regulatory file that has technical details, specifications, bill of materials, risk evaluation, production processes, incoming inspection for parts, outgoing inspection etc etc.

If production is moved from location A to B, you need to review whether this file remains valid and whether its data are representative of the device that will be produced at the new location. This is systematic and a bit boring, but at least well defined and limited. It should include a review of the effectiveness of any risk controls that are implemented in production or affected by the production process.
 

Ronen E

Problem Solver
Moderator
Hi,

I'm a bit confused by this last post. Although most of its statements seem reasonable, I think the total sum is a bit misleading.

The first is the general concern of moving production from point A to point B which has nothing to do with regulations.

I disagree. In general, regulations require that devices come from verified and validated designs and production lines. Design would not be normally expected to be affected by relocating a production line, but the validity of production processes most definitely will be affected. This is a risk directly related to the move that regulations require addressing.

As for trying to predict the things that can go wrong - that's a mug's game. The possibilities are endless, literally infinite.

This is true for almost all risk management endeavours. The way we should (and usually do) cope with it is by trying to identify the significant risks rather than all the risks (which I agree are almost infinite). How this can be systematically achieved may be a topic for another discussion, but intuitively we focus on those risks which seem to have the higher likelihood or the harsher consequences, or both. Even regulators don't expect every single remote risk to be listed, even if they won't officially admit it. To summarise, in my opinion all risk management is open-ended to a degree, so this is not something that separates "regulatory" risk management from "real world" risk management.

For each type or model there should be a regulatory file that has technical details, specifications, bill of materials, risk evaluation, production processes, incoming inspection for parts, outgoing inspection etc etc.

If the production is moved from location A to B you need to review if all this file remains valid and data is representative of the new device that will be produced from the new location.

The "regulatory world" is actually quite diverse. What you describe matches the EC approach, for instance. In the US regulations, however, the list above would go in the DMR except for the risk evaluation; the latter would belong in the DHF or the process validation file (depending on context). Similarly, different files would be reviewed in different situations in the different regulatory systems. I'm not sure that a DMR review per se would be necessary when relocating a production line. Of course the DMR itself is very relevant and useful in such situations, but its contents don't have to be challenged when moving - at least not from a regulatory point of view (though it would make sense to do so from a practical point of view - logistics, inputs supply etc.).

Another issue that I feel is worth mentioning is that "validation" in the context of moving a production line from A to B is more than "making sure the files are still fully valid" and even "making sure the risk mitigation measures are still valid / effective". It is first and foremost making sure that all previously-validated production processes are valid in the new location. This is both a regulatory and a "real life" need, and in most cases it will require repeating the validation during and after the move (planning the move and ensuring that the consequent line is validated are inseparable and should occur concurrently).

Managing the risks (both reviewing the "old" risk management files and addressing "new" risks due to the move) is an integral part of validating the line, because in order to validate a process one needs to first have a good idea of what the process parameters and noises (risks) are. You don't just validate to an arbitrary set of inputs.

Last, IMO the divide between "regulatory" and "real world" is wrong. Regulatory requirements should naturally flow from real world needs and constraints, and IMO they do so to a fair degree. Where they don't, it's our role to expose it and promote change.

Cheers,
Ronen.
 

Marcelo

Inactive Registered Visitor
Quote:
In Reply to Parent Post by Peter Selvey

As for trying to predict the things that can go wrong - that's a mug's game. The possibilities are endless, literally infinite.
This is true for almost all risk management endeavours. The way we should (and usually do) cope with it is by trying to identify the significant risks rather than all the risks (which I agree are almost infinite). How this can be systematically achieved may be a topic for another discussion, but intuitively we focus on those risks which seem to have the higher likelihood or the harsher consequences or both. Even regulators don't expect every single remote risk listed, even if they won't officially admit it. To summarise, in my opinion all risk managements are open ended to a degree, so this is not something that separates "regulatory" risk management from a "real world" one.

The term generally used is "known and foreseeable hazards".
 

Peter Selvey

Leader
Super Moderator
Just to explain my previous comment "The first is the general concern ... which has nothing to do with regulations."

It does sound odd out of context. The intended context is that regulations are just the minimum requirement to gain clearance for sale, not the complete picture. It is a bit like going for a driver's licence: you need to study, practice, pass the written test, pass the driving test, pay the money and you get your licence. But that is far from the end of the story about being a safe and conscientious driver.

Regulations need to find a balance between improving safety and not getting in the way. For example, most modern regulations place a big emphasis on recording what you do, but very little on recording why. In risk management, manufacturers often use numerical representations for severity, probability and risk, but if you look closely, ISO 14971 does not ask you to document why those numbers were selected; it just says to record the number. ISO 13485 requires a lot of documents and records on production tests, but nowhere is there a requirement to document why you selected those particular tests, test methods and criteria. In many cases it would be reasonable to keep such records (e.g. for borderline, complex or contentious issues), but it is not a regulatory requirement.

Often people consider what is reasonable and assume that the regulations would naturally support it, but that is not the case. When I first started auditing back in 1999, I was continually surprised by the inability of the actual written regulations to support a reasonable finding - for example, the absence of a reasonably expected production test that the manufacturer was obviously skipping just to save cost.

But when you realize that regulations only provide a "clearance for sale" function, it makes sense. It is a balancing act, and there is always common law in case things go wrong. It is the same as a driver's licence: having the licence in no way absolves you from liability. If a medical device causes an injury, common law will look at things like whether the numbers in the risk management table were realistic, whether the production tests were reasonable, and whether the actions taken in moving production from A to B were appropriate.

That is the context I am coming from when I say "... nothing to do with regulation".
 

Marcelo

Inactive Registered Visitor
ISO 13485 requires a lot of documents and records on production tests, but nowhere is there a requirement to document why you selected those particular tests, test methods and criteria.

It does, in quality system planning - that's where the justifications for your decisions should be. However, people (including assessors) usually think that to verify planning you need only to verify the final procedures.

In risk management, manufacturers often use numerical representations for severity, probability and risk, but if you look closely ISO 14971 does not ask you to document why those numbers are selected; it just says to record the number.

Again, it does, in the risk management plan, and there's a specific reference in Note 3.

Even when applying the plan, the requirement is to:
For each identified hazardous situation, the associated risk(s) shall be estimated using available information or data... The results of these activities shall be recorded in the risk management file.

So, the expectations are there, but people do not perform as expected. Why?

I see several factors:

1 - Standards are not meant to teach anything - they are usually a set of good practices from an area's past, meant to be read and easily understood by someone with the correct background. In the specific case of risk management, for example, anyone with a background in reliability and safety engineering knows that it's good practice to have and record the rationale for any estimate - see for example the historical case of the THERAC-25, where part of the problem was a reliability estimate for a part that was off by some 10,000 times - and that was more than 30 years ago!

However, when those standards became somewhat "mandatory", a whole lot of people with no previous knowledge suddenly needed to be in conformity with them, and what they do is get the standard and read it, trying to figure out what is expected (instead of reading some books from the area). And they usually try to apply it in the easiest way possible. Assessors usually do not have the background either, so everything ends up being done generically - which does not mean that it is correct.

The same goes for quality. I remember years ago, when I was heading into quality, the first suggestion I got was: read the standard. I preferred to get some books by Juran and Deming, and after reading those and some other sources, there was nothing "new" in the standard; everything there seemed a little obvious because it was based on those good practices. For example, if you get one of the best-known books by Juran, "Quality Planning and Analysis", the planning stage for any QMS is there, very clearly laid out, but almost no one does that today.

2 - Another part of the answer is that standards and regulations became a business. So we need a certificate, and an auditor is coming, and we need the badge on the wall. Who is going to take the time to understand what we are doing? :p
And obviously, the assessors/auditors and even regulators do not always have the background to evaluate those things. You would need to either hire experts or give people the competency (meaning years of studying and training) for things to be correct again.


But anyway, I do agree in general with your remarks: regulations are "clearance for sale" (I use the term "license to sell"), and part of the trade-off is that a regulator will verify some things, on a sampling basis... and the important part is, the manufacturer is still required to do the right thing, even if a problem was not spotted in an audit.

That's why I usually say, too: even if you passed an audit and/or your device is cleared for sale, it does not mean that you really fulfil the regulations and standards. It means that the sampling done (if any) did not show anything wrong in principle, but it's still the manufacturer's responsibility to ensure compliance.
 

Peter Selvey

Leader
Super Moderator
This is overlaying reasonable expectations on top of what is actually written.

The standard requires the action of estimating risk "using available information or data", but the record is limited only to the result. In other words, it is not required to keep a record of the "information or data" used to support the result.

Throughout the standard there are similar cases. For example, Clause 6.2 specifies that manufacturers must prioritize inherent safety / protective measures / warnings when selecting a risk control, but the record is limited to the selected risk control. There is no requirement to justify, for example, why a warning was used.

Clause 6.3 is perhaps the best example.
Implementation of each risk control measure shall be verified. This verification shall be recorded in the risk management file.

The effectiveness of the risk control measure(s) shall be verified and the results shall be recorded in the risk management file.

The first paragraph says implementation shall be verified and the verification shall be recorded. The definition of verification includes objective evidence. In other words, the record must include objective evidence that the risk control was implemented.

The second paragraph says effectiveness shall be verified (the same), but only the results need to be recorded, which is a subtle but important difference. The records do not need to include any objective evidence about the effectiveness of risk controls.

These decisions to limit records are very deliberate, as it would be onerous to document justifications for every decision. For 95-99% of risk controls it is obvious the control is effective; it is only for the occasional risk control that keeping records of objective evidence is reasonable.

At this time, ISO 14971 does not have any structure to separate out different levels of records where appropriate. It would be reasonable to do so, and common law may find a manufacturer negligent if such records do not exist. Maybe future versions of ISO 14971 will have a "where appropriate" provision.

But not at the moment, and it is important to be clear about what is and is not required in the standard.
 