Thanks heaps Roland!
We have of course validated the software's functions and the algorithms it uses so that they deliver accurate output, as well as user interfaces that are accurate and easy to interpret.
We have also evaluated our software regarding risks from wrong or faulty input, and eliminated those risks by rejecting input containing the wrong parameters. However, we have not done much concerning the QUALITY of the input versus the QUALITY of the output (assuming correct input but with poor resolution). Our software still performs the right algorithms and calculations even when the QUALITY of the input is poor, but the output might not be good enough for diagnosis. Looking at that issue, the final output from our software might be insufficient for diagnostics and hence lead to a faulty diagnosis due to the insufficient quality of the original input. Can those risks be considered eliminated only by defining the desired quality of the input data? A foreseeable misuse would still be to use input of poor quality, wouldn't it?
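To make the distinction concrete, here is a minimal sketch of separating parameter validation (what the current checks cover) from an input-quality gate. All names here (`ImageInput`, `MIN_RESOLUTION_DPI`, the threshold value) are hypothetical illustrations, not taken from any specific product or standard:

```python
from dataclasses import dataclass

# Assumed quality gate for diagnostic-grade output; the actual threshold
# would come from the defined input-quality requirement.
MIN_RESOLUTION_DPI = 300

@dataclass
class ImageInput:
    width_px: int
    height_px: int
    resolution_dpi: int

def validate_parameters(img: ImageInput) -> None:
    """Reject structurally wrong input (the kind of check already in place)."""
    if img.width_px <= 0 or img.height_px <= 0 or img.resolution_dpi <= 0:
        raise ValueError("invalid input parameters")

def meets_quality_gate(img: ImageInput) -> bool:
    """Separate question: the parameters are valid, but is the input
    good enough for a diagnostic-grade result?"""
    return img.resolution_dpi >= MIN_RESOLUTION_DPI

def process(img: ImageInput) -> dict:
    validate_parameters(img)
    # The algorithms run either way; the point is that low-quality input
    # produces a flagged result instead of silently passing through.
    return {"diagnostic_grade": meets_quality_gate(img)}
```

The point of the sketch is that defining the desired input quality only helps if the software also enforces or flags it, rather than computing a correct-looking answer from insufficient data.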
Thanks
Yes, inaccurate diagnostics resulting from poor resolution capability in the software is one type of risk of poor software quality. The risk is not eliminated by defining the quality, as in setting the gates, but by sharpening the resolution.
And yes, it is foreseeable that the software will be misused, resulting in a poor diagnosis. For example, if settings must be entered by the operator and the method or setting type is not clear; that is, the operator can't figure out how to make the setting, or sets the software to 8 when it should be 10. That's a rough sketch of the subject.
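That "8 instead of 10" scenario can be mitigated in code by constraining the operator-entered value and explaining the rejection. A minimal sketch, where the setting name and its valid range are hypothetical assumptions:

```python
# Assumed valid range for an operator-entered setting; in practice this
# would come from the device specification.
ALLOWED_RANGE = (10, 20)

def set_operator_value(value: int) -> int:
    """Accept the setting only if it is in range; otherwise fail with a
    message that tells the operator what a valid value looks like."""
    lo, hi = ALLOWED_RANGE
    if not (lo <= value <= hi):
        raise ValueError(
            f"setting must be between {lo} and {hi} (got {value})"
        )
    return value
```

Range checks like this don't remove the usability risk entirely (the operator may still pick the wrong valid value), but they turn a silent misuse into a visible, correctable error.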
So yes, both accuracy and usability are risk factors, though the developer does not carry as much responsibility for usability as for accuracy: the software's ability to perform its intended function.
Some threads of related subjects are listed at the bottom of this page, if you would like further reading.