Validation for integrated software in QMS

Denzel

  1. We use Confluence for our eQMS, which is supplemented with various apps for additional functionality. For instance, we use the QC - Read and Understood app to track whether employees have read and understood certain pages. How should we validate these apps? What requirements should we include?
  2. Additionally, we need to determine the schedule for revalidating these systems. My suggestion is to revalidate apps with a medium or high-risk rating every three months, while those with a low-risk rating should be revalidated annually. The standard does not specify how often revalidation should occur, so I'm unsure about the appropriate frequency.
  3. If an application claims to have a default functionality that has been widely used and verified by many users, does this mean we do not need to validate it? For example, consider the requirement "Any user with access to Jira can create issues." Since this is a well-known, standard feature used by all Jira users, do we still need to validate this requirement? I believe it is a default functionality inherent to Jira, and therefore might not require separate validation. Could you clarify this for me?
  4. I want to confirm: in my understanding, applications that involve automation are generally considered high-risk because automation processes have the potential to perform incorrect actions. Therefore, it's necessary to conduct frequent checks and validations. Am I correct in thinking this?
Since our company does not control updates to the apps, we only learn of changes after the fact, which makes it difficult to plan revalidations. In my view, relying on this is not ideal for apps that have a medium or high impact on our QMS or product.
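Since vendor-pushed updates can't be scheduled, one pragmatic workaround (a hypothetical sketch, not an official Atlassian tool — the app names and versions below are made up) is to periodically diff the current app inventory, e.g. exported from the Confluence administration screen, against the baseline recorded at the last validation review, so that any version change triggers a documented impact assessment:

```python
# Hypothetical sketch: flag vendor app updates by diffing the current app
# inventory against the baseline recorded at the last validation review.

def changed_apps(baseline, current):
    """Return apps whose version differs from the reviewed baseline.

    Maps app name -> (baseline_version, current_version); None marks an
    app that was added or removed since the last review.
    """
    changes = {}
    for app in baseline.keys() | current.keys():
        old, new = baseline.get(app), current.get(app)
        if old != new:
            changes[app] = (old, new)
    return changes

# Example data only; real inventories would be parsed from an admin export.
baseline = {"QC - Read and Understood": "2.1.0", "Scaffolding": "8.5.0"}
current = {"QC - Read and Understood": "2.2.0", "Scaffolding": "8.5.0"}
print(changed_apps(baseline, current))
```

Each flagged app would then get an impact assessment and, if its risk rating warrants it, targeted re-testing — rather than revalidating everything on a calendar.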

I would like to hear your thoughts and guidance please.
 
If you are using the software as designed, with little or no modification, there should be a lower validation burden.

For example, if FDA visits and sees you use MS Excel for records, and you use only the most basic functions (adding or subtracting), they may not require additional validation. If you have complex macros and formulas pulling from other data sources, those would need more validation. Of course, always consider risk: a high-risk output would probably need more testing.
 
I am not looking at validation of an eQMS right now, but can you give me your overall opinion of Confluence? I am starting from zero and trying to choose a good eQMS so we can rise from the ashes of 'too many Excel spreadsheets'.
 
I recommend taking a look at the draft FDA Guidance on Computer Software Assurance. It's a pretty common-sense approach (IMO) and allows you to leverage the fact that this is a commercial system in widespread use. If the vendor did any validation work, you can leverage that as well. Ideally, you establish a master validation plan to frame your risk-based approach and then describe your specific plans for Confluence. (I'm guessing you have quite a few more software applications that should at least be considered for validation.)

If you're using the system essentially out of the box, validation should be fairly simple. I would suggest confirming that permissions are set up properly and being used properly (i.e., not everyone is granted Admin rights). You may also want to do something for 21 CFR Part 11 (unless the vendor did it already).
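As one concrete (and hypothetical) way to evidence that permissions check, you could diff the members of the admin group — exported from the site's user-management screen — against an approved list kept under document control; the names below are made up:

```python
# Hypothetical sketch: verify that admin rights are restricted to an
# approved, document-controlled list of accounts. The group membership is
# assumed to come from an export of the site's user-management screen.

APPROVED_ADMINS = {"alice", "bob"}  # maintained under document control

def unauthorized_admins(actual_admins):
    """Return admin accounts that are not on the approved list."""
    return set(actual_admins) - APPROVED_ADMINS

exported = ["alice", "bob", "charlie"]  # e.g., parsed from a CSV export
extras = unauthorized_admins(exported)
if extras:
    print(f"Access review finding: unapproved admins {sorted(extras)}")
```

Run periodically (or as part of a periodic access review), this gives you objective evidence for the permissions requirement without re-testing the whole system.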
 
2. Additionally, we need to determine the schedule for revalidating these systems. My suggestion is to revalidate apps with a medium or high-risk rating every three months, while those with a low-risk rating should be revalidated annually. The standard does not specify how often revalidation should occur, so I'm unsure about the appropriate frequency.

This timetable strikes me as extreme. In my opinion, once a system has been validated, re-validation is warranted only when:
  1. The intended use/requirements of the system will be changed
  2. The implementation of the system will be changed
  3. The previous validation is determined to have been inferior (see Note of Caution below)
Otherwise, the team involved will most likely waste time repeating work that has already been done. You could certainly establish a timetable for assessing the need to re-validate software systems, but an a priori commitment to calendar-based revalidation seems unwarranted.
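The trigger-based logic above can be sketched as a simple decision helper (illustrative only; the flag names are my own, not from any standard):

```python
# Illustrative sketch of the trigger-based re-validation decision above:
# revalidate only when a trigger fires; otherwise a periodic *assessment*
# of these flags suffices, with no re-testing required.

def needs_revalidation(requirements_changed: bool,
                       implementation_changed: bool,
                       prior_validation_deficient: bool) -> bool:
    """True if any of the three re-validation triggers fires."""
    return (requirements_changed
            or implementation_changed
            or prior_validation_deficient)

# A quarterly review would only evaluate the flags, not re-run testing:
assert needs_revalidation(False, False, False) is False
assert needs_revalidation(False, True, False) is True
```

The point of the sketch is that the quarterly activity is the assessment (setting the flags based on actual changes and actual system performance), not the revalidation itself.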

Note of Caution: I don't think "a determination that a previous validation was inferior" should be made on the basis of paperwork review alone; it should be based on how the software system is actually being used and how it is performing. There can of course be something like a large-scale corrective action involving paperwork (let's say the team never documented the software's intended use), but software quality matters far more than hunting for trivial issues in previously accepted paperwork.
 