Thanks for the question!
These are numbers I have always taken for granted, but you spurred me on to do a little research on the matter.
Regarding the 10:1 ratio, the sources I found refer to it as a "rule of thumb" used in control charting. I cannot find any document that cites a source for it, even though, like many others, I have been taught that it is the "preferred" test accuracy ratio, and that if you cannot achieve 10:1 then 4:1 is acceptable.
So where does the 4:1 ratio come from? I read in NASA publications that it goes back to the 1950s, when the Navy found it was having difficulty maintaining a 10:1 ratio and wondered whether that was really necessary. Jerry Eagle of the Naval Ordnance Laboratory studied the matter and concluded from a statistical analysis that a 1% consumer risk on calibration results would be acceptable. That works out to roughly a 3:1 test accuracy ratio; he then built in a "cushion" and recommended a more conservative 4:1 ratio, which became accepted as the standard.
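Just to illustrate the kind of analysis involved, here is a little Monte Carlo sketch of my own (not Eagle's actual method) that estimates the consumer (false-accept) risk for a few accuracy ratios. It assumes normally distributed errors for both the unit under test and the standard, and treats the ratio in 1-sigma terms, so the printed numbers depend entirely on those assumptions and only show the trend, not the exact 1% / 3:1 result.

import random

def consumer_risk(tar, tol=1.0, uut_sigma=0.4, n=200_000, seed=1):
    # Fraction of accepted units that are actually out of tolerance.
    # tar: test accuracy ratio (UUT tolerance / standard's uncertainty), taken as 1-sigma here
    # tol: symmetric UUT tolerance limit (+/- tol)
    # uut_sigma: assumed spread of the true errors in the UUT population
    random.seed(seed)
    std_sigma = tol / tar
    accepted = bad_accepted = 0
    for _ in range(n):
        true_err = random.gauss(0.0, uut_sigma)             # unit's true error
        measured = true_err + random.gauss(0.0, std_sigma)  # what the calibration reads
        if abs(measured) <= tol:                             # unit passes calibration
            accepted += 1
            if abs(true_err) > tol:                          # ...but is really out of tolerance
                bad_accepted += 1
    return bad_accepted / accepted

for tar in (1, 2, 3, 4, 10):
    print(f"{tar:>2}:1  approx. consumer risk = {consumer_risk(tar):.2%}")

The higher the ratio, the less the standard's own error can turn an out-of-tolerance unit into an apparent pass, which is the whole point of the requirement.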
As to standards requiring calibration accuracy ratios, I have found these:
ISO 10012-1:1992 in Section 4.3 Guidance states:
"The error attributable to the calibration should be as small as possible. In most areas of measurement it should be no more than one third and preferably one tenth of the permissible error of the confirmed equipment when in use". This is the only reference to a 10:1 ratio that I can find, it is "preferred" and not required. Also, the current version ISO 10012:2003 drops references to these requirements.
MIL-STD-45662A, Section 5.2, requires that the standard's collective uncertainty not exceed 25% of the acceptable tolerance (a minimum 4:1 ratio).
ANSI Z540-1, 10.2 b), requires that the "collective uncertainty of the measurement standard shall not exceed 25% of the acceptable tolerance" (4:1).
ANSI Z540.3 duplicates this requirement in Section 5.3 b).
So it appears that the only current accuracy ratio requirement is the 25% one (a 4:1 TUR).
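For what it's worth, the 25% wording and the 4:1 figure are just two ways of saying the same thing; here is a quick arithmetic check (the tolerance value is made up):

uut_tolerance = 1.0                          # any units; hypothetical value
max_std_uncertainty = 0.25 * uut_tolerance   # the 25% limit quoted in the standards above
print(uut_tolerance / max_std_uncertainty)   # 4.0, i.e. a 4:1 TUR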
Hope that this helps.