Software Tool (Bugzilla, Subversion and a GNU Compiler) Validation

glork98

Involved In Discussions
The SQA guy at our little start-up is struggling with tool validation. We've resolved that we need to validate Bugzilla, Subversion and a GNU compiler.

Any practical advice on how to do this effectively and lightly?
 

yodon

Leader
Super Moderator
Indeed, a potential quagmire.

The 3 products you mention have some unique characteristics and possible answers. I'll address each one.

Bugzilla (and problem management / tracking in general). For this, I would do a very simple validation. Show you have installed it according to documentation, document (under doc control) the customizations / site adaptations you made (noting how they are done within expected use), and walk some issues through the cycle, especially focusing on your customizations / adaptations. Now the unique thing here is that change records are quality records and thus Part 11 comes into play. Bugzilla, to the best of my knowledge, can't be made to be fully Part 11 compliant (exacerbated by the fact that it's open source software) so to get around this, I would recommend that once an issue is closed, a copy of the report be printed and saved as the official quality record. There's plenty of debate about how the "true" record is still electronic (and is certainly so during the life of the issue) but this approach has been at least accepted in the past (which is not a guarantee, of course).
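To make the "print on closure" step concrete, here is a minimal Python sketch of rendering a closed issue as a plain-text record for printing and filing. The field names (`status`, `resolution`, `history`, etc.) are hypothetical stand-ins, not Bugzilla's actual schema; adapt to whatever your customization exposes.

```python
from datetime import datetime, timezone

def render_quality_record(issue: dict) -> str:
    """Render a closed issue as a plain-text quality record.

    `issue` is a plain dict; the keys used here are hypothetical
    stand-ins for whatever fields your tracker actually exposes.
    """
    if issue.get("status") != "CLOSED":
        raise ValueError("only closed issues become quality records")
    lines = [
        f"Quality Record - Issue #{issue['id']}",
        f"Summary   : {issue['summary']}",
        f"Resolution: {issue['resolution']}",
        "History   :",
    ]
    for event in issue.get("history", []):
        lines.append(f"  {event['when']}  {event['who']}: {event['change']}")
    # Timestamp the printout so the paper copy is traceable.
    lines.append(f"Printed   : {datetime.now(timezone.utc).isoformat()}")
    return "\n".join(lines)

issue = {
    "id": 123,
    "status": "CLOSED",
    "summary": "Crash on startup",
    "resolution": "FIXED",
    "history": [
        {"when": "2010-01-05", "who": "dev1", "change": "NEW -> ASSIGNED"},
        {"when": "2010-01-09", "who": "qa1", "change": "RESOLVED -> CLOSED"},
    ],
}
record = render_quality_record(issue)
print(record)
```

The guard on `status` reflects the point above: only a closed issue becomes the official (printed) quality record.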

For Subversion, I would take a similar tack and do a validation using your process. Show it's installed per recommendations (IQ). Show how changes are controlled (check out, check in) using your process and how builds are constructed using your process. I presume Subversion has a way to tag the build configuration, so be sure to show that. The unique thing here is that the software in Subversion is not really the final product - the binaries out of the compiler are; however, since it's such a critical control point, I would beef up the OQ part just to show that proper controls are in place. Show, for example, how an unauthorized change would likely be caught. That's probably the biggest issue with versioning systems.
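The "would an unauthorized change be caught" demonstration is at heart an integrity check. A minimal Python sketch of the idea, independent of any particular VCS: hash every file in a tagged baseline, then diff a later snapshot against that manifest. The file names here are made up for illustration.

```python
import hashlib
import tempfile
from pathlib import Path

def manifest(root: Path) -> dict:
    """Map each file under `root` to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def diff_manifests(baseline: dict, current: dict) -> dict:
    """Report files that were added, removed, or modified since baseline."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(
            f for f in set(baseline) & set(current)
            if baseline[f] != current[f]
        ),
    }

# Demonstrate with a throwaway directory standing in for a working copy.
tmp = Path(tempfile.mkdtemp())
(tmp / "main.c").write_text("int main(void) { return 0; }\n")
baseline = manifest(tmp)
(tmp / "main.c").write_text("int main(void) { return 1; }\n")  # "unauthorized" edit
report = diff_manifests(baseline, manifest(tmp))
print(report["modified"])  # -> ['main.c']
```

In an OQ you would show the equivalent capability in Subversion itself (e.g., diffing a working copy against a tag), but the detection principle is the same.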

Finally, for the GNU compiler, let's just say it's quite impossible (IMO) to validate a compiler. The tack we've always taken is that you compile your source and do a complete validation of the built software; thus, by default, the compiler is working as expected. Issues from a compiler error would likely be obvious or would likely be caught in product testing. You should do a decent IQ and focus on known issues with the compiler. Analyze the known issues and, if any are determined to likely impact your product, user training to avoid or watch for such issues is a good approach.
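One concrete IQ artifact along these lines is a pinned-toolchain check: record the exact compiler version your known-issues analysis covered, and flag any drift. A minimal Python sketch, where the approved version string is a placeholder for whatever your IQ actually documented:

```python
# Hypothetical: the exact `--version` first lines your IQ covered.
APPROVED_COMPILERS = {
    "gcc (GCC) 4.4.7",
}

def check_compiler(version_output: str) -> bool:
    """Return True iff the first line of `--version` output is approved."""
    first_line = version_output.splitlines()[0].strip()
    return first_line in APPROVED_COMPILERS

# In practice you would feed this the captured output of `gcc --version`;
# here we demonstrate with canned strings.
print(check_compiler("gcc (GCC) 4.4.7\nCopyright (C) ..."))  # True: matches IQ
print(check_compiler("gcc (GCC) 9.3.0\nCopyright (C) ..."))  # False: drifted
```

Matching on the exact version string is deliberate: a new compiler release means a new known-issues analysis, not a silent pass.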

Sorry for the long post but the answer is complex and compounded by the types of software you cite. Hope this helps.
 

glork98

Involved In Discussions
Indeed, a potential quagmire.

Yes. A "full" validation would exhaust our resources before the product is developed.

Sorry for the long post but the answer is complex and compounded by the types of software you cite. Hope this helps.

Thank you for your analysis. It's along the lines of our approach. The SQA guy is, as he should be, concerned about the "audit strength" of sparse or indirect testing.
 

noorain

Starting to get Involved
We use tools like Matlab, Keil, and CCS Studio to write code and develop algorithms, along with Bugzilla, Git, and Gerrit for version control. Do we have to do software tool validation for all of these? Is there a template for software tool validation that someone can share?
Thank you
 

yodon

Leader
Super Moderator
We use tools like Matlab, Keil, and CCS Studio to write code and develop algorithms, along with Bugzilla, Git, and Gerrit for version control. Do we have to do software tool validation for all of these? Is there a template for software tool validation that someone can share?
Thank you

Each should be *considered* for validation and a risk-based decision made as to whether to validate and, if so, to what extent.

For something like Matlab, I would do something akin to an Installation Qualification (is it installed properly, is it running on hardware the manufacturer intended, is any applicable support software installed at the recommended revisions, etc.). I would then look at the list of known issues and determine whether those impact MY use of the product. Presumably you do testing on whatever you generate out of Matlab.
 

cwilburn

Starting to get Involved
What about re-validation if any of the tools are cloud-based? We are planning to create automated tests to run nightly. What are others doing?
 

TomaszPuk

Starting to get Involved
We use tools like Matlab, Keil, and CCS Studio to write code and develop algorithms, along with Bugzilla, Git, and Gerrit for version control. Do we have to do software tool validation for all of these? Is there a template for software tool validation that someone can share?
Thank you

I would suggest splitting the tools you use into two groups, due to the different risks they bring.
1. Writing code - IDE (Integrated Development Environment)
2. Compiling, building, testing - CI (Continuous Integration)

You can argue that group one does not have a direct impact on the final product. It is used on local machines to create source code that is later managed, via the source code repository, in the CI environment. Developers usually have their own preferences re. IDEs, and managing all of them is next to impossible.

Group two, on the other hand, should be running in a qualified environment. Usually here you would have a source code repository, a configuration management system (to manage dependencies at the source code level), the CI environment (including the compiler), and static source code analysis. These tools should be qualified, IMHO. To do that, specify what requirements they should fulfill, and once they are configured, run qualification tests against them. If you keep the requirements at a "realistic" level, you should not spend too much time here.
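To make "specify requirements, then run qualification tests" concrete, one lightweight pattern is to express each environment requirement as an automated check against a qualified baseline. A hypothetical Python sketch (the tool names and version strings are placeholders, not a recommendation):

```python
def parse_version(text: str) -> tuple:
    """Turn '2.25.1' into (2, 25, 1) for exact comparison."""
    return tuple(int(part) for part in text.split("."))

# Hypothetical qualified baseline for the CI environment.
QUALIFIED = {"git": "2.25.1", "gcc": "9.3.0", "cppcheck": "1.90"}

def qualify(installed: dict) -> list:
    """Return human-readable findings; an empty list means qualified."""
    findings = []
    for tool, required in QUALIFIED.items():
        actual = installed.get(tool)
        if actual is None:
            findings.append(f"{tool}: not installed (need {required})")
        elif parse_version(actual) != parse_version(required):
            findings.append(f"{tool}: {actual} != qualified {required}")
    return findings

# Matching environment passes; a drifted or incomplete one yields findings.
print(qualify({"git": "2.25.1", "gcc": "9.3.0", "cppcheck": "1.90"}))  # -> []
print(qualify({"git": "2.30.0", "gcc": "9.3.0"}))
```

Run nightly in CI, a check like this doubles as ongoing evidence that the qualified environment has not drifted.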
 