This is a very old requirement (from the 1987 edition), and on the face of it not unreasonable.
The intention is that the user should be able to start from a level that is basically undetectable and build up to a level that is comfortable.
In principle, the requirement is met by having an output control that starts at zero. In practice, though, there may be small currents (e.g. impedance sensing to verify good electrode contact, or simply leakage across the control hardware), so some non-zero limit is needed, and 2% is reasonable in this context. I expect most designers would target a nominal output of zero, with a small residual current remaining depending on the design.
Just because the standard says 2% and yours is 2.5% does not mean your device is unsafe. Limits are set based on a number of factors: safety, practicality, simplicity, and testability.
If for whatever reason this approach does not fit your device, most modern regulatory systems (EU, FDA, Japan, Canada, and Australia, to name a few) allow the manufacturer to document an alternative solution, i.e. a justification based on the fundamentals of the particular device and situation.
The trade-off is the complexity of establishing a new limit. In this case it could require clinical data, or at least supporting literature; anecdotal evidence would not be sufficient.
Current sensation depends on many factors, such as electrode area, open-circuit voltage, treatment location (current path), frequency, and waveform (crest factor), and it can also vary greatly between individual patients.
The claim that 5 mA pulses are undetectable implies that 5 mA is the peak value; if the pulses are relatively short, the RMS value is significantly lower. Otherwise, a claim that a continuous 5 mA is undetectable seems doubtful.
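To illustrate the peak-versus-RMS point with a quick sketch (the pulse width and frequency below are hypothetical examples, not values from the question): for a rectangular pulse train, the RMS current is the peak current scaled by the square root of the duty cycle, so a 5 mA peak with a short pulse width can correspond to a sub-milliamp RMS current.

```python
import math

def rms_of_pulse_train(i_peak_ma, pulse_width_s, frequency_hz):
    """RMS current of a rectangular (monophasic) pulse train.

    For rectangular pulses, I_rms = I_peak * sqrt(duty cycle),
    where duty cycle = pulse width * repetition frequency.
    """
    duty_cycle = pulse_width_s * frequency_hz
    return i_peak_ma * math.sqrt(duty_cycle)

# Example (assumed values): 5 mA peak, 100 us pulses at 100 Hz
# -> duty cycle of 1%, so the RMS current is only 0.5 mA
print(rms_of_pulse_train(5.0, 100e-6, 100))  # -> 0.5
```

This is why short-pulse stimulators can run peak currents that would be clearly perceptible, or even uncomfortable, if delivered continuously.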