ERP software validation - risk assessment vs validation scope

Ben124

Starting to get Involved
Hi,

we have SAP as our ERP system, which is used as part of our QMS, and therefore it is required to be validated if we want to be ISO 13485 certified, I believe. We did a GxP assessment, and according to GAMP 5 our ERP is Category 4, which means we need to do IQ, OQ, and PQ. But do we need to do IQ, OQ, and PQ for all SAP modules/functions, or only for those we found critical? I know there are a lot of sources on the internet, but I just cannot find the correlation between the risk assessment of modules/functions and the scope of software validation.

Also, we are currently a supplier of plastic components for medical devices (not a finished medical device manufacturer), so I guess an ERP failure does not have a direct impact on patient safety. Therefore I am asking myself: do we even need to do ERP system validation at all? Can we somehow justify not doing it?
 

blackholequasar

The Cheerful Diabetic
Also, a side note: I did V&V for QCBD at a medical device facility. I did validation for every module, and when an auditor came through, they said it was robust and didn't question it. But I tend to lean on the more uptight side of things, haha; I'd rather be safe and sure than sorry. That said, the SAP program is written a certain way, and if you validate it by bare function rather than by each module, it might save you some headache.
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
Did you install your ERP by yourself or with a consultant or sales engineer?
 

yodon

Leader
Super Moderator
I would consider risk of each function independently if that allows you to down-scope some of the effort. @Ed Panek alludes to a key point: how much of your installation is customized or configured? Let that figure into your risk analysis.

FDA (I know, not 13485) has been working on an initiative to lighten the burden of computer system validation by using a computer software assurance approach. This allows testing done by the vendor, for example, to be utilized, especially for widely-used, commercial applications. For example, basic functionality may be tested by the vendor but certainly the fact that it's in thousands (?) of installs worldwide makes a good case for the function working as expected! I think people tend to take GAMP as gospel but it's just one (rather common) approach. There's no requirement to follow it. Of course, you also have auditors who are set in their ways (this needs to change!!) who may not accept something other than a GAMP-compliant IQ/OQ/PQ. (I'd like to get in on THAT argument!!)

Like @blackholequasar I would vote FOR doing validation; however, I would seriously argue to scope it properly considering the above.
 

Ben124

Starting to get Involved
It was installed by the software vendor, and the vendor will probably do the validation. So I guess the next step is to define the functions critical to our QMS, let the vendor know about them, make a validation plan, have the vendor do the validation, and produce the validation reports?
 

Tidge

Trusted Information Resource
I want to encourage folks worrying about Computerized System Validation (and/or Assurance) to abandon the IQ/OQ/PQ way of thinking about validation. The IQ/OQ/PQ methodology was developed for Process Validation, not Tool Qualification. I am well aware that many companies and consulting firms recommend this... but if you aren't familiar with process validation this will be confusing, and if you are familiar with process validation you will spend too much effort trying to map the necessary activities into the IQ/OQ/PQ framework. EDIT: I also think it does a serious disservice to Engineers responsible for Process Development and Validation to claim to be doing "IQ/OQ/PQ" when just trying to prove a software system "meets your needs."

Classroom exercise: Your software system validation involves migration of data from the legacy system into the new system. Write down where the Data Migration happens; your choices are IQ, OQ, and PQ. Be prepared to discuss with the class.

As far as the original question: Let your Intended Use and User Needs determine which elements of 13485 are impacted by the system. Use the level of impact to determine the necessary level of requirements and testing. That's pretty much all there is to building a Validation (or Assurance) package.
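The impact-driven scoping described above can be sketched as a simple lookup: assess each function's impact on QMS-relevant processes, then map that impact level to a testing rigor. Everything in this sketch (the function names, impact levels, and rigor descriptions) is an illustrative assumption, not anything prescribed by ISO 13485 or GAMP 5.

```python
# Hypothetical sketch of risk-based scoping for a computerized system
# validation/assurance package. Function names and impact assessments
# below are made-up examples; a real assessment comes from your own
# Intended Use and User Needs analysis.

# Assumed impact of each ERP function on QMS-relevant processes
FUNCTION_IMPACT = {
    "batch_record_management": "high",   # directly supports product release
    "supplier_management": "medium",     # supports purchasing controls
    "financial_reporting": "none",       # outside the QMS scope
}

# Assumed mapping from impact level to the level of testing rigor
RIGOR_BY_IMPACT = {
    "high": "scripted testing with documented requirements traceability",
    "medium": "unscripted/exploratory testing with a summary record",
    "none": "no validation testing (document the justification)",
}

def validation_scope(function_impact):
    """Return the testing approach for each assessed function."""
    return {fn: RIGOR_BY_IMPACT[impact]
            for fn, impact in function_impact.items()}

for fn, approach in validation_scope(FUNCTION_IMPACT).items():
    print(f"{fn}: {approach}")
```

The point of the sketch is only that scope follows impact: the high-impact function gets the heaviest treatment, and the out-of-scope function gets a documented exclusion rather than silent omission.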
 

yodon

Leader
Super Moderator
abandon the IQ/OQ/PQ way of thinking

Probably getting a bit off topic here, but I tend to use IQ and OQ when doing CSV work. The IQ gives me assurance that the software is properly installed and has the necessary underpinnings (computer speed, memory, support software that it uses, etc.). Then the OQ is just the requirements testing. I even see a place for PQ sometimes; e.g., if you know you're going to have 100 simultaneous users hitting the system, will it stand up? Granted, these are not the classic process validation definitions, but it works.
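The PQ-style concurrency check mentioned above could be sketched like this: simulate N simultaneous "users" exercising a system entry point and confirm every request completes. The target function here is a stand-in (an assumption); a real check would exercise the actual application under test.

```python
# Minimal sketch of a PQ-style concurrent-load check, assuming a
# stand-in transaction function in place of the real system under test.
from concurrent.futures import ThreadPoolExecutor

def simulated_transaction(user_id):
    # Stand-in for one user's interaction with the system under test;
    # a real check would call the application and record the response.
    return f"user-{user_id}: ok"

def run_load_check(n_users=100):
    # Fire all simulated users concurrently and collect their results
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(simulated_transaction, range(n_users)))
    # Acceptance criterion: every simulated user completed successfully
    return all(r.endswith("ok") for r in results)

print(run_load_check())  # expect True when all transactions complete
```

A real PQ acceptance criterion would also cover response times and error rates under load, not just completion.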

One other side note on computer software assurance: it supports the idea of unscripted testing - which, to me, is much better (with a good tester) than scripted testing.

Agree that a migration validation doesn't fit the IQ/OQ/PQ paradigm but I treat that as a migration validation rather than a computer system validation. (Splitting hairs, maybe)
 

Tidge

Trusted Information Resource
@yodon describes a typical CSV use of IQ/OQ/PQ. There are echoes of the described activities in (manufacturing) process validation, but no true mapping is applicable (to either Medical Device Manufacturing or Pharmaceutical Manufacturing, which don't agree with each other <- contractually obligated statement). I vote we let process engineers keep their terminology and CSV (or CSA) folks should create their own.
 

diogo19

Starting to get Involved
It all depends on circumstances, doesn't it? When I was working for a medical device company, the 3Qs were driven by the master procedure and audited by the FDA, so I thought it was expected that we must test using IQ/OQ/PQ. Since moving to a new job, I have introduced the 3Qs for validation and my bosses don't get the purpose of it.


I guess what it does for software testing is put testing into a sequence where you shouldn't perform testing until the preceding qualification is approved (e.g., sandbox to live).


What I'm still finding a bit hard to fully understand is the types of testing that should be done for software of different complexity:


Unscripted testing? Ad hoc testing? Scripted?


I have read up on exploratory testing and I like the idea, but it wouldn't work in every environment; what I mean is, some people hate testing or just want written instructions.
 