I think the use of a library function as a risk control is more in line with "classic" risk management than using a process. In hardware it is the same as using an off-the-shelf part whose specifications fit the risk control needs.
A "process" refers to the support structure that sits outside the raw decision to use a library function or write a line of code. In the real world, I think the risk will always be a function of both parts: a literal "object" which is implemented (which can be lines of code, library functions or purchased controls), and the quality of the process that supports the decision, implementation and testing.
The situation is also confusing because sometimes there are very "small" discrete risk controls which can easily be referenced from the risk management file, while at other times we are talking about the failure of larger systems, where there is no discrete risk control and really only the process to improve quality and hence reduce the risk.
To give an (embarrassing) example from my own experience: in one of my testers, risk analysis determined that when the user sets the output off, there should be two independent methods of shutting off the power, to protect against a hardware fault. In software this was done by setting the DAC to zero and, separately, turning off the timer that drives the pulses needed for an output. Later I found that when the user changed the mode, a line of code set up the timer and turned it back on. So the risk control was not effective, due to this software bug. For a while the bug was hidden, because as long as there were no faults in the system everything worked normally. It only became apparent when one device developed a fault in the power supply and the output was on when it should not have been, which then triggered the analysis.
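The failure mode above can be sketched in a few lines. This is a hypothetical simulation, not the actual tester firmware: the class, method and attribute names are all invented for illustration. It shows how a mode-change routine that unconditionally restarts the timer silently defeats the second shutoff path, while the device still appears to behave normally because the first path (DAC at zero) masks the problem until a hardware fault occurs.

```python
# Hypothetical model of the tester's output control (names are invented).
class Output:
    def __init__(self):
        self.dac = 0           # DAC value driving the output level
        self.timer_on = False  # timer generating the output pulses
        self.mode = None

    def output_off(self):
        # Risk control: two independent means of removing the output.
        self.dac = 0           # path 1: zero the DAC
        self.timer_on = False  # path 2: stop the pulse timer

    def set_mode_buggy(self, mode):
        # The bug: mode setup unconditionally re-enables the timer,
        # defeating path 2 even when the output is meant to be off.
        self.mode = mode
        self.timer_on = True

    def set_mode_fixed(self, mode, output_enabled):
        # Fix: only restart the timer if the output should actually be on.
        self.mode = mode
        self.timer_on = output_enabled


dev = Output()
dev.output_off()
dev.set_mode_buggy("pulse")
# dev.timer_on is now True: one of the two shutoff paths is defeated.
# With a healthy DAC the output still looks off, so the defect stays
# hidden until a single hardware fault (e.g. the DAC path failing high).
```

A unit test asserting that both paths remain off after any mode change, with the output disabled, would have caught this; the point of 62304-style verification is that such tests are planned systematically rather than left to chance.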
Now, the question is whether a 62304 system would have detected and fixed the problem earlier, which I think is true. As such it would have a role in risk reduction, and is therefore a genuine risk control measure.
But again there are two parts: the risk control object (the double switch in software) and the process (systematic, controlled verification of correct, reliable implementation). Strictly speaking the risk control measure is both of these parts. But normally, for simple risk controls like this, we just refer to the risk control object and assume that 62304 will be there in the background.
That's an example of a small, discrete risk control that can be easily referred to from a risk management file.
At the other extreme, consider complex software that analyses a 12 lead ECG and determines a diagnosis such as an arrhythmia or an elevated ST segment. There is a risk that a software bug gives a misdiagnosis, but typically there is no simple, discrete risk control to refer to. Instead, the process itself contains a large number of small discrete activities which are in effect risk control measures, but which would be too numerous to reference from the risk management file. So just referring to the process makes sense in that situation.
Now, at the risk of making this already-too-long post even longer: the tester example is important precisely because it involved a hidden function, something that only gets used in an abnormal situation. That makes it easy to overlook whether the function is correctly implemented, as happened to me. I have found many similar cases in functional safety testing of dialysis machines, infusion pumps, surgical lasers, NIBPs and so on: software bugs in protection systems that are only triggered in fault conditions, and hence don't get much attention in the normal design process.
In that sense, having a risk management system point to small discrete risk controls, and then being forced to check they are implemented properly, has a lot of value. On the other hand, having a risk management system vaguely point to another process provides virtually no value at all, so it is possible that future risk management will be restructured to eliminate such useless linkage. The key point is that risk management is most effective when it addresses rarer, unusual or abnormal situations, rather than the main function of the device, which is better handled by the general design processes of 13485 and 62304.