Confirmation of Computer Software to satisfy Intended Application


Integrator - 2012

Here's a topic not often discussed I think.

At the end of 7.6 Control of monitoring and measuring equipment, it says:-
'NOTE Confirmation of the ability of computer software to satisfy the intended application would typically include its verification and configuration management to maintain its suitability for use.'

This was new in 2008. Most external auditors I've seen interpret it to mean that some proof that the software is working correctly is required, e.g. develop a form with some standard inputs and the expected outputs; you then verify at a regular check that it gives the right result (within an appropriate tolerance) and retain the record.
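For illustration, such a check might look something like this (a rough Python sketch only; the input values, expected results, tolerance and log file name are all made up, just to show the idea of feeding standard inputs and retaining a record of whether the outputs come back within tolerance):

```python
import csv
import datetime

# Hypothetical standard test cases: known inputs and the output the
# software should produce for them (all values here are made up).
STANDARD_CASES = [
    {"inputs": (2.0, 3.0, 4.0), "expected_mean": 3.0},
    {"inputs": (10.0, 10.0, 10.0), "expected_mean": 10.0},
]
TOLERANCE = 0.001  # acceptable difference between expected and actual


def software_under_check(values):
    """Stand-in for the software function whose output is being confirmed."""
    return sum(values) / len(values)


def run_confirmation(record_file="software_confirmation_log.csv"):
    """Run the standard cases and retain the outcome as a record."""
    rows = []
    for case in STANDARD_CASES:
        actual = software_under_check(case["inputs"])
        ok = abs(actual - case["expected_mean"]) <= TOLERANCE
        rows.append([datetime.date.today().isoformat(), case["inputs"],
                     case["expected_mean"], actual, "PASS" if ok else "FAIL"])
    with open(record_file, "a", newline="") as f:
        csv.writer(f).writerows(rows)
    return all(row[-1] == "PASS" for row in rows)


if __name__ == "__main__":
    print("Confirmation passed:", run_confirmation())
```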

If this is the requirement, its application seems patchy. There is so much software these days; where do you draw the line? Just a few thoughts:

1) Common software normally works OK, but what about the common problem of spreadsheet formulas becoming corrupted by poorly trained users? Should there be some note in the QMS documentation that spreadsheets or other relevant software should be protected where possible to prevent inappropriate alteration of formulas, or access restricted by login as required (see the sketch after this list)? Should training in the relevant software be listed in Training Registers to reduce the risk of such corruption?

2) Can verification of software be requested from software suppliers? Would this be of value? Of course it can't be proven that the end result will be correct if untrained persons are at the controls, but perhaps this can provide some assurance. It can certainly be argued that in well-developed software, the source code is usually protected from any kind of 'messing up' by users.

3) On the other hand, verification by the end user must give confidence that there is no gross error. This would be particularly welcome where there is any doubt as to the software's efficacy, e.g. 'in-company' developed software, where the chance of mistakes may be greater because the development may be less well resourced.
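To make point 1 above a bit more concrete, here is the kind of check I have in mind: a rough Python sketch using openpyxl that snapshots the formulas in a template once and later compares the live copy against that baseline, so any altered or deleted formula shows up. The workbook and baseline file names are invented.

```python
import json

from openpyxl import load_workbook  # assumes openpyxl is installed


def snapshot_formulas(xlsx_path):
    """Collect every formula in the workbook, keyed by sheet and cell."""
    wb = load_workbook(xlsx_path)  # default data_only=False keeps the formula text
    formulas = {}
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if cell.data_type == "f":  # 'f' marks a formula cell
                    formulas[f"{ws.title}!{cell.coordinate}"] = cell.value
    return formulas


def check_against_baseline(xlsx_path, baseline_path="formula_baseline.json"):
    """Compare the current formulas with a previously saved baseline snapshot."""
    current = snapshot_formulas(xlsx_path)
    with open(baseline_path) as f:
        baseline = json.load(f)
    # Report every cell whose formula was added, deleted or altered.
    return {key: (baseline.get(key), current.get(key))
            for key in set(baseline) | set(current)
            if baseline.get(key) != current.get(key)}
```

An empty result means no formula has changed; the baseline snapshot could be taken when the template is first approved and retained with the QMS records.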

There's a lot to discuss here!
 

John Broomfield

Leader
Super Moderator
Integrator,

Yes, mission-critical computer models must be validated, and protected from unauthorized changes, before being issued for use.

As for the software itself, I would say that is a matter of common usage. It would be wasteful to re-validate MS Excel every time we use it to calculate a standard deviation or a set of financial accounts.

But if we decide to select and use software where the costs of nonconformity would be prohibitive, such as software used in awarding a multi-billion rail services contract, we would design selection criteria, buy software that met those criteria, validate the software for its purpose, build the model, validate the model and then lock it down.
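By "lock it down" I mean, at a minimum, being able to show that the issued model has not changed since it was validated. A minimal sketch of that kind of check (Python; the model file and recorded fingerprint are hypothetical):

```python
import hashlib


def file_fingerprint(path):
    """SHA-256 fingerprint of the issued model file, recorded at lock-down."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def confirm_unchanged(path, recorded_fingerprint):
    """Before each use, confirm the model still matches the validated issue."""
    return file_fingerprint(path) == recorded_fingerprint
```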

John
 

v9991

Trusted Information Resource
Most external auditors I've seen interpret it to mean that some proof that the software is working correctly is required, e.g. develop a form with some standard inputs and the expected outputs; you then verify at a regular check that it gives the right result (within an appropriate tolerance) and retain the record.
One way to address this is to define your interpretation, scope and approach in your quality manual, and then support it with the respective procedures/records.

1) Common software normally works OK, but what about the common problem of spreadsheet formulas becoming corrupted by poorly trained users? Should there be some note in the QMS documentation that spreadsheets or other relevant software should be protected where possible to prevent inappropriate alteration of formulas, or access restricted by login as required? Should training in the relevant software be listed in Training Registers to reduce the risk of such corruption?
This is especially true for regulated environments. Even otherwise, it is good practice to control these Excel formats/templates used for automatic calculations (e.g., password-protect them to prevent any inadvertent modification or deletion of formulas).
Where PCs are controlling equipment (PLCs) and where SCADA systems are installed, any change to the PC configuration could potentially impact the performance of the machine; hence the need for control is obvious (a simple sketch of one way to detect such changes follows below).
Another way of looking at it is by the decisions and conclusions derived from these PCs or software (e.g., batch release decisions, stock rejections, issuance, complaints, or even document control).
This is exactly the approach/scope/interpretation to be included in the quality manual, which will then form the basis for implementation (and audit reviews).
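One simple way to detect such configuration changes (a Python sketch only; the controlled file paths and the baseline file name are assumptions) is to hash the controlled configuration files and compare them against an approved baseline:

```python
import hashlib
import json
import pathlib

# Hypothetical configuration files on the SCADA/PLC-facing PC whose
# approved state we want to monitor for drift.
CONTROLLED_FILES = [
    "C:/scada/config/system.ini",
    "C:/scada/config/recipes.xml",
]


def current_state(paths=CONTROLLED_FILES):
    """Hash each controlled configuration file as it is right now."""
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths}


def drift_report(baseline_path="approved_config_baseline.json"):
    """List every controlled file whose hash differs from the approved baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    now = current_state()
    return [p for p in baseline if now.get(p) != baseline[p]]
```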

2) Can verification of software be requested from software suppliers? Would this be of value? Of course it can't be proven that the end result will be correct if untrained persons are at the controls, but perhaps this can provide some assurance. It can certainly be argued that in well-developed software, the source code is usually protected from any kind of 'messing up' by users.
3) On the other hand, verification by the end user must give confidence that there is no gross error. This would be particularly welcome where there is any doubt as to the software's efficacy, e.g. 'in-company' developed software, where the chance of mistakes may be greater because the development may be less well resourced.
Yes, especially for COTS applications; the level of qualification and validation required is outlined in the GAMP guideline, which categorizes software into four categories.
https://www.vialis.at/fileadmin/files/imgs/pdf/Newsletter/q1-09/08MJ-Martin.pdf

The pharma and life sciences industries discuss the above subject mainly in terms of compliance with 21 CFR Part 11 (electronic records and electronic signatures).

There is a dedicated forum on Elsmar to discuss various aspects of this subject (although the scope of the following forum includes qualification and validation as well):
https://elsmar.com/Forums/forumdisplay.php?f=181

(Of course this is focused on and accepted mainly in regulated environments, but the principles are just as relevant elsewhere; a quick search for the above document will reveal adequate insights.)

hope this helps...
 