62304: Class C in practice

drm71

Involved In Discussions
It seems likely I will have to work on some Class C IVDR products whose software performs assay analysis. There may be some external risk mitigations that keep the software at a lower class, but I am working on the assumption that this won't be the case (some of the SW items may be lower, but there will be 62304 Class C items).

Looking at the standard, there are not so many differences between B and C, but I wondered a little about the practical interpretation of some of them.

for example:

5.1.4 Software development standards, methods and tools planning
The MANUFACTURER shall include or reference in the software development plan:
a) standards,
b) methods, and
c) tools
associated with the development of SOFTWARE ITEMS of class C. [Class C]

What is meant specifically by "development standards" in this case? Do they mean coding standards and guidelines, code reviews and so on? The fact that standards are listed separately from methods makes me wonder.

And for 'methods and tools', are they talking about coding languages and compilers/interpreters? By 'methods', do they mean development work instructions, or more project-management material like sprint planning etc.?


Then
5.4 * Software detailed design

5.4.2 Develop detailed design for each SOFTWARE UNIT

The MANUFACTURER shall document a design with enough detail to allow correct implementation of each SOFTWARE UNIT. [Class C]

5.4.3 Develop detailed design for interfaces

The MANUFACTURER shall document a design for any interfaces between the SOFTWARE UNIT and external components (hardware or software), as well as any interfaces between SOFTWARE UNITS, detailed enough to implement each SOFTWARE UNIT and its interfaces correctly. [Class C]

5.4.4 Verify detailed design

The MANUFACTURER shall verify and document that the software detailed design:
a) implements the software ARCHITECTURE; and
b) is free from contradiction with the software ARCHITECTURE. [Class C]

NOTE It is acceptable to use a TRACEABILITY analysis of ARCHITECTURE to software detailed design to satisfy requirement a).

Any tips, refs or guidelines on a practical example for these?

Does this mean, for example, making a new subset of requirements underneath the base requirements to break down what's going on in the units? Or is it more like a description of the implementation that would let a new/external developer understand how to set the unit up in code, or understand the implementation?

The last part makes sense to me if it's referring to test verification of sub-requirements, but maybe I am overthinking things.

Can anyone also give a relevant/typical real example of how a unit's detailed design contradicts the architecture?


sorry for all the questions.
 

yodon

Leader
Super Moderator
What is meant specifically by "development standards" in this case? Do they mean coding standards and guidelines, code reviews and so on? The fact that standards are listed separately from methods makes me wonder.
We cite our coding standards (which we also state contribute to the avoidance of common defects - 5.1.12). You may also follow guidelines like MISRA. Even cybersecurity guidelines deserve consideration here.
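For illustration only (invented, not quoted from MISRA or any published standard), the kind of entry a coding standard typically captures pairs a rule with the pattern it requires:

/* Hypothetical rule in the spirit of many C coding standards:
 * "Return values shall not be ignored and array indices shall be
 *  range-checked before use." */
#include <stdbool.h>
#include <stddef.h>

#define RESULT_BUFFER_LEN 16u

static double results[RESULT_BUFFER_LEN];

/* Compliant pattern: the caller is forced to handle the failure case
 * instead of the unit silently writing out of bounds. */
static bool store_result(size_t index, double value)
{
    if (index >= RESULT_BUFFER_LEN)
    {
        return false;
    }
    results[index] = value;
    return true;
}

The point for 5.1.4 is less the specific rule than that the rules are written down, referenced from the development plan, and actually checked (in code review or by a static-analysis tool).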

And for 'methods and tools', are they talking about coding languages and compilers/interpreters? By 'methods', do they mean development work instructions, or more project-management material like sprint planning etc.?
We cite our software development lifecycle SOP (which aligns with 62304) and we do document the tools (compilers, debuggers, etc.).

Does this mean, for example, making a new subset of requirements underneath the base requirements to break down what's going on in the units? Or is it more like a description of the implementation that would let a new/external developer understand how to set the unit up in code, or understand the implementation?
Got a bit lost here. Detailed design is how you structure "code blobs" (to avoid terms like "unit" and "module") below the architecture. The intent is to facilitate maintenance. So in that respect, yes, sufficient information for someone new on how to understand the blob.

Can anyone also give a relevant/typical real example of how a unit's detailed design contradicts the architecture?
Safety is what jumps to my mind. You may architect the system to have safety controls isolated, e.g., to ensure the system remains operational even if bad things happen. You can then design something that basically breaks (contradicts) that by moving safety-critical aspects into areas where they may no longer work. Not sure I explained that well, but hopefully it makes sense.
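A purely hypothetical sketch of what that contradiction can look like in code (all names invented, two units collapsed into one listing for brevity):

/* --- safety_monitor: the architecture says other units may only
 *     REPORT readings to it, never pause or bypass it. --- */
void safety_monitor_report_temperature(double celsius);
void safety_monitor_suspend(void);   /* added later "for convenience" */

/* --- pump_control: detailed design of an ordinary application unit --- */
void pump_calibrate(void)
{
    /* Contradiction with the architecture: suspending the monitor during
     * a long calibration defeats the isolation the architecture relies on.
     * The compiler is perfectly happy; only verifying the detailed design
     * against the architecture (5.4.4) catches it. */
    safety_monitor_suspend();
    /* ... lengthy calibration sequence ... */
}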
 

Tidge

Trusted Information Resource
5.4.4 Verify detailed design

The MANUFACTURER shall verify and document that the software detailed design:
a) implements the software ARCHITECTURE; and
b) is free from contradiction with the software ARCHITECTURE. [Class C]
Can anyone also give a relevant/typical real example of how a unit's detailed design contradicts the architecture?

During reviews of ME device software I have found these sorts of obvious contradictions between the architecture and the detailed designs:
  1. The architecture called out units that did not exist in the detailed design
  2. The detailed design included units that were unaccounted for in the architecture
I have experienced at least one case with a much more subtle set of contradictions:
  • The design relied on hardware interrupts to function; this was invisible except at the unit level during code review.
This was a defect in the architecture, because appropriate tests were needed to challenge that implementation. We had to do those tests during integration testing, but without the interrupts being called out in the architecture those tests would never have been done. Anyone want to guess where in the software lifecycle that defect in the architecture was fully recognized?
  • One function had an improperly constructed case that allowed execution to "fall" into another case, by design (a minimal sketch of the pattern follows below).
This was not what the architecture showed was happening, so trying to correct a defect in one part of the code broke another feature of the code. This was more a case of poor development practices and poor documentation leading to a lot of wasted development effort, including testing.
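For the fall-through case, a minimal hypothetical sketch (names invented, not from the actual project):

typedef enum { STATE_PRIME, STATE_DISPENSE, STATE_IDLE } state_t;

static void prime_lines(void) { /* placeholder for the real unit */ }
static void dispense(void)    { /* placeholder for the real unit */ }

static void run_state(state_t s)
{
    switch (s)
    {
        case STATE_PRIME:
            prime_lines();
            /* no break: execution "falls" into STATE_DISPENSE by design,
             * even though the architecture documents the two states as
             * independent */
        case STATE_DISPENSE:
            dispense();
            break;
        case STATE_IDLE:
        default:
            break;
    }
}

Because the architecture shows priming and dispensing as independent, a later fix to the dispense handling silently changes priming behaviour too, which is where the wasted effort comes from.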
 

drm71

Involved In Discussions
Thanks for the replies

Got a bit lost here. Detailed design is how you structure "code blobs" (to avoid terms like "unit" and "module") below the architecture. The intent is to facilitate maintenance. So in that respect, yes, sufficient information for someone new on how to understand the blob.

Sorry, what I meant is: what "physically" is detailed design? My example was e.g. defining some lower-level requirements that your unit would need to satisfy, or perhaps just a descriptive document? (I imagine a common response from developers is that well-commented code, in repos with readmes, would satisfy 'detailed design', or?)


I guess I'm flagging that I'm still at the stage where mapping "actual stuff" to some of the abstract standard expressions does not come naturally to me.
 

yodon

Leader
Super Moderator
I imagine a common response from developers is that well-commented code, in repos with readmes, would satisfy 'detailed design'
You nailed that one! I have seen companies use something like Doxygen to extract comments from the code and construct what they passed off as a detailed design document. And I suppose that could work to some extent, but if you're doing that, developers will just go to the code anyway and the extraction process has no benefit other than putting something in front of a reviewer. While that may get you through an audit, it's really of little benefit to your organization.

Talk to the software engineers on your team and they can probably give guidance on what's good to document for detailed design. That will likely get more buy-in since you're not trying to just push something down on them.

I think we generally provide an overview of the blob and get into details about lower level interfaces (if not already established in the architecture or elsewhere), discuss interrupts, memory allocation, messaging, etc.
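For a flavour of what that might cover for one unit (everything below is invented; whether it lives in a document or a doc-comment header next to the code is a process choice):

/* Unit:        adc_sampler
 * Purpose:     Buffers raw ADC readings for the assay-analysis unit.
 * Interfaces:  adc_sampler_start(), adc_sampler_read_block()
 * Interrupts:  ADC end-of-conversion ISR pushes samples into a ring
 *              buffer; the ISR must not block and must finish quickly.
 * Memory:      One statically allocated 256-sample ring buffer; no heap
 *              allocation after initialisation.
 * Messaging:   Posts SAMPLE_BLOCK_READY to the analysis task's queue.
 */
#include <stdint.h>

void     adc_sampler_start(void);
uint32_t adc_sampler_read_block(int16_t *dest, uint32_t max_samples);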
 

Tidge

Trusted Information Resource
Does this mean, for example, making a new subset of requirements underneath the base requirements to break down what's going on in the units? Or is it more like a description of the implementation that would let a new/external developer understand how to set the unit up in code, or understand the implementation?

Got a bit lost here. Detailed design is how you structure "code blobs" (to avoid terms like "unit" and "module") below the architecture. The intent is to facilitate maintenance. So in that respect, yes, sufficient information for someone new on how to understand the blob.
Sorry, what I meant is: what "physically" is detailed design? My example was e.g. defining some lower-level requirements that your unit would need to satisfy, or perhaps just a descriptive document? (I imagine a common response from developers is that well-commented code, in repos with readmes, would satisfy 'detailed design', or?)

What I've done (and haven't found too confusing) is to treat the source code (or other elements like pre-existing libraries) as elements of the detailed design. I don't have another layer of requirements for them, but I allocate existing requirements to them. There are a variety of ways to document this sort of allocation.
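One hypothetical way to record that allocation (requirement IDs and code invented for illustration) is to tag the source element with the requirements allocated to it and let a script roll the tags up into the trace record:

#include <math.h>

/* Allocated requirements: SRS-042 (report result to 2 decimal places),
 *                         SRS-107 (reject raw values outside 0..1000). */
int round_assay_result(double raw, double *out)
{
    if (raw < 0.0 || raw > 1000.0)
    {
        return -1;                       /* SRS-107: out-of-range input */
    }
    *out = round(raw * 100.0) / 100.0;   /* SRS-042: two decimal places */
    return 0;
}

A requirements tool or spreadsheet keyed on the same IDs works just as well; the point is that the allocation is recorded somewhere traceable.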

I have one (potentially large) caveat: I did have an uncomfortable experience with an FDA auditor who (essentially) expected to see ALL testing motivated by a SPECIFIC requirement. In practical terms, this would mean (what follows is only one of the implications) that each and every unit test would need its own particular requirement. This is not practical, nor is it required by regulation or consensus standards... but you can't really say this to an FDA auditor... so allow me to describe the circumstances.

What this auditor was hung up on was at a higher level: the perceived issue was that (software) system-level requirements were written at a somewhat general functional level and were demonstrated by specific tests that were necessary because of the specific design implementation. We had a rather complete set of documentation showing how that one requirement was allocated to a specific part of the architecture, and how the architecture was traceable to the implementation details (in the units). Everything up to the source code is a "design input"; the source code is the implementation of those inputs. We then marched back up through the verification side of the process, showing the unit and integration tests, and ended with the (software) system tests, showing how those specific system-level tests demonstrated that the one requirement was satisfied.

Eventually, the auditor agreed, but this person was of the opinion that the requirement should have been rewritten so that the test method more obviously matched the requirement. I can't completely disagree with the point, except that development doesn't start with the test methods. It took a LOT of restraint to not try to teach an auditor the 62304 process of software development when the issue was essentially "I don't need to know any of that, I can tell an untested requirement when I see one." We ultimately resolved this impasse by hammering home that we understood the difference between inputs and outputs and that we had the verification that the outputs satisfied the requirements. We did have to agree that it was possible to rewrite requirements, although we did not. (*1)

This was quite frustrating because all of the critical thinking and allocating of requirements was documented. That project was documented to an extent where I was comfortable that we could have bundled up the code and the documentation and sent it to a third party (for maintenance, feature improvement, refactoring, whatever)... but this sort of assurance doesn't mean anything when a layperson wants to pick "any two" deliverables (here, a system requirement and a system level test) from a development project and expect to see an exact 1:1 match between those deliverables.

(*1) It was also incredibly difficult to resist pointing out to the auditor that the request to rewrite a requirement (to 1:1 match the test method) could have been reframed to "rewrite the test method"... but I suspect the auditor recognized (at least subconsciously) that they had as little experience with test methods as they did with coming up with detailed designs. The auditor was very patient, and ultimately did not issue any findings, but this was a very tense situation... as the development team had to simultaneously convince management that testing had been appropriate and complete!
 

drm71

Involved In Discussions
Thanks for sharing, that's a very interesting experience and good to know that these kinds of issues can occur even when you (clearly) are in full control of what you're doing!
 