The QS9000 requirement is that calibration be done at an appropriate frequency (4.11(c)). At the end of QS9000 para 4.11 is Note 18, which says that ISO 10012 may be used for guidance.
In ISO 10012-1:1992(E), para 4.11, "Intervals of Confirmation," there is further detail. It states that confirmation needs to be done at intervals which maintain an acceptable level of confidence that the instrument will remain in tolerance for the duration of the interval.
The problem with using the manufacturer's intervals is that they are not always correct, and not all manufacturers document a recommended interval. If you have a small calibration program, and you can provide a manufacturer's documented interval for all instruments, you might stay out of trouble.
In my years of experience in calibration I have found that the manufacturer's recommended interval doesn't always work once equipment gets some age on it. A Hewlett-Packard signal generator, brand new out of the box and used in a clean, controlled environment, is very likely to stay within tolerance (as an example). But even with good brand-name equipment, as that same unit ages, its ability to remain in tolerance over that same calibration interval will change. Many other factors, such as a less-than-ideal operating environment or frequent handling (a signal generator that is picked up, put on a cart, and transported to another location repeatedly, for example), also reduce the probability of remaining within tolerance.
There is also an upside to adjustment of calibration intervals. A Fluke 77 multimeter, for example, may have an interval of 12 months. If you had 100 of those meters and statistically evaluated their confirmation interval, you might be surprised: I have seen intervals on that particular model extended to 3 to 5 years. I even seem to recall one company that did interval analysis on that model and ended up with no re-calibration required, because they never had a single out-of-tolerance.
Interval evaluation in a small program may well be more work than it is worth in some circumstances. But in a large calibration program, many tens or perhaps hundreds of thousands of dollars per year could be saved, along with the identification of problem units. I had some DC millivolt calibrators in my program with a 12-month recommended interval. I had two units, and neither one, even after a factory overhaul, could stay in tolerance for even three months. I eventually quarantined them and told the user to buy something else as a replacement. The process people ended up with test equipment which would do the job right, and risk to the customer's product was also minimized.
I have managed calibration programs with fixed intervals, but I am also sensitive to the fact that statistical interval adjustment is sometimes a must.
Hope I didn't stir up the pot too much on this. But I am duty-bound to tell the truth as best I can.
As for methods, there are many. I grouped similar instruments together annually (for example, the Fluke 70 series, including models 73, 75, and 77, with the Fluke 77-II series as a separate family) and calculated what percentage of those instruments remained in tolerance through their calibration interval, for any calibrations done that calendar year. I matched that percentage figure against a chart I developed. A given percentage figure correlated to an increase, no change, or a decrease in interval. Depending on the percentage figure, the change was incremental (e.g., a very low percentage figure corresponded to a drastic decrease in interval). A rough sketch of this logic follows below.
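To make that concrete, here is a minimal sketch (in Python, purely as an illustration) of the kind of evaluation I described. The percentage thresholds and adjustment factors below are hypothetical placeholders, not the actual figures from my chart; you would have to develop your own table based on your reliability targets.

    # Illustrative thresholds -- NOT the actual figures from my chart.
    # Each row: (minimum in-tolerance percentage, interval multiplier).
    ADJUSTMENT_TABLE = [
        (95.0, 1.25),  # >= 95% found in tolerance: lengthen interval 25%
        (90.0, 1.00),  # 90-95%: leave the interval alone
        (75.0, 0.75),  # 75-90%: shorten the interval 25%
        (0.0,  0.50),  # below 75%: drastic decrease, halve the interval
    ]

    def evaluate_family(results, current_interval_months):
        """Adjust one instrument family's interval from a calendar year of
        calibration results (True = found in tolerance, False = out)."""
        if not results:
            return current_interval_months  # no data, no change
        pct_in_tol = 100.0 * sum(results) / len(results)
        for threshold, factor in ADJUSTMENT_TABLE:
            if pct_in_tol >= threshold:
                return round(current_interval_months * factor)
        return current_interval_months

    # Example: a family of Fluke 77s on a 12-month interval;
    # 18 of 20 calibrations this year were found in tolerance (90%),
    # so the interval stays at 12 months.
    fluke_77_results = [True] * 18 + [False] * 2
    print(evaluate_family(fluke_77_results, 12))

The point is only the shape of the logic: percentage found in tolerance goes in, an incremental interval change comes out.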