Why do you not use detection?
One of the reasons organizations move away from using Detection (specifically in Design FMEA) is that they cannot reach consensus on what it means to assign a Detection rating to certain failure modes prior to the implementation of an identified risk control.
Consider the classic RPN FMEA calculation, where RPN (Risk Priority Number) = D (detection rating, or "detectability") x O (occurrence rating) x S (severity rating). I'm familiar with organizations that have no problem coming up with pre- and post-control assessments of S and O, but are flummoxed by D. One element of the problem people have with D for a failure mode is that engineers tend to be very binary about whether the failure mode is valid or not. If the design team doesn't consider it to be a potential failure mode, they simply ignore it (and it doesn't get into the RM file); if they think it's a possibility, they will add it, but then they are only comfortable making a post-control assessment of the detectability of the failure mode. Without a pre-control guesstimate, they are at a loss as to what it means to have improved D.
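A minimal sketch of that calculation, assuming a common 1-10 scale for each rating (10 being worst); the specific numbers are illustrative, not taken from any particular standard:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Classic Risk Priority Number: RPN = D * O * S.

    All three ratings are assumed here to be on a 1-10 scale,
    where 10 is worst (most severe / most frequent / least
    detectable). Scales vary between organizations.
    """
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings are assumed to be 1-10")
    return detection * occurrence * severity

# Illustrative pre- vs. post-control assessment of one failure mode.
# S and O are easy to agree on; the pre-control D of 8 is exactly
# the number many teams struggle to justify.
pre_control = rpn(severity=7, occurrence=4, detection=8)   # 224
post_control = rpn(severity=7, occurrence=4, detection=3)  # 84
print(pre_control, post_control)
```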
Personally, I prefer to keep D as a rating in all FMEAs. Detection is distinct from Occurrence, and I see value in having it as a rating: seeking improvement in D can drive design and manufacturing choices. (*1)
Within Design FMEA, my personal advice to less mature design teams for assigning D ratings prior to the identification of risk controls is rather simple and is based on design methodology rather than design choices. For example:

- An ad-hoc (or reactionary) approach to design that relies only on testing (to establish the effectiveness of risk controls) merits a poor D rating prior to implementing risk controls.
- If the specific design choice is pulled from a library of 'time-honored' (i.e. well understood) solutions specifically relevant to the identified failure mode, it merits an (at least) slightly better D rating.
- For certain Medical Electrical devices, particular IEC 60601-2-xx standards explicitly require detectability-driven design choices because of known failure modes. Once such a design requirement has become the established state of the art for a particular failure mode, that design choice could merit an extremely good D rating.

This approach isn't perfect, but it allows for some consistency from design teams that have a wide spectrum of risk analysis experience; a sketch of the idea follows.
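As a sketch only: the tiers and numbers below are assumptions on a 1-10 scale (10 = essentially undetectable), not values from any FMEA standard, and a real team would calibrate them against its own rating tables.

```python
# Hypothetical mapping from design methodology to a default
# pre-control Detection rating (1-10 scale, 10 = worst detectability).
# The category names and numbers are illustrative assumptions.
PRE_CONTROL_D_BY_METHODOLOGY = {
    "ad_hoc_test_only": 9,           # reactionary design, relies on testing alone
    "time_honored_library": 6,       # well-understood solution relevant to the failure
    "state_of_the_art_standard": 2,  # e.g. a detectability requirement in a
                                     # particular 60601-2-xx standard
}

def default_pre_control_d(methodology: str) -> int:
    """Return a starting pre-control D rating based on design methodology.

    Teams can override the default once they have enough experience
    to argue for a specific rating per failure mode.
    """
    try:
        return PRE_CONTROL_D_BY_METHODOLOGY[methodology]
    except KeyError:
        raise ValueError(f"unknown methodology: {methodology!r}")

print(default_pre_control_d("time_honored_library"))  # 6
```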
(*1) A possible example of a design choice to improve a D rating: if a medical device has the potential to 'wear out' in a way that may not be obvious to a patient or user, something could be implemented to indicate that it is 'wearing'. Presumably the S and O ratings are not changed, but the priority of this failure mode (within the greater context of the risk profile for the finished device) is reduced. This solution is different from 'just use materials that don't wear out', which would be an attempt to address the O rating instead.
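With illustrative numbers (assumed, not from any standard): say the wear-out failure mode starts at S = 6, O = 4, D = 8, so RPN = 192. Adding a wear indicator leaves S and O alone but might justify D = 3, giving RPN = 72; the failure mode drops down the priority list without any change to how often it occurs or how severe it is.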