The general rule is that you "may" extend the interval when in-tolerance history indicates the instruments will probably remain in tolerance for the new, longer interval, and you "must" decrease the interval when out-of-tolerance history so dictates.
Depending on what you must comply with, it may or may not be reasonable to double the interval. I ran the NCSLI RP-1 Method A3 Interval Tester, and at a 95% in-tolerance target, the longest new recommended interval was 14.4 months (I varied the % in-tolerance all the way to 1%, and the confidence factor all the way down to 1% as well, and still only came up with 14.4 months as the maximum recommended interval). This is based on the National Conference of Standards Laboratories Recommended Practices.
However, on the more practical side, if you really want to adjust the interval to 24 months, I would recommend reviewing the certificates, and perhaps calling the lab that cal'd them for you, to determine whether the units were ever adjusted or optimized during any of those calibrations. If they have never been adjusted or optimized, that would indicate the units may have remained in tolerance for the three years, and a 24-month interval may be fully appropriate. Again, depending on what you must comply with (ISO, FDA, FAA, etc.), it may or may not be important for you to document that. The calibration interval is, after all is said and done, the statistically probable amount of time (or usage) within which an instrument may be expected (to a defined degree of confidence) to remain within specified tolerances.
As the first reply indicated, it will not magically go out of tolerance on the due date. What does happen is that from the instant the calibration label is applied (kind of like magic - sorry, it's Monday morning), there is a statistical increase in the probability that it will not remain in spec, and that probability keeps increasing until the point at which the instrument will likely be out of tolerance.
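To make that statistical picture concrete, here is a minimal sketch of the common exponential reliability-decay model. This is NOT RP-1 Method A3 itself (which also factors in confidence levels and calibration history), and the numbers in the example are assumed; it only illustrates how an observed in-tolerance rate at one interval translates into a candidate interval for a chosen reliability target.

```python
import math

def new_interval(current_interval_months, observed_reliability,
                 target_reliability=0.95):
    """Solve R(t) = exp(-lambda * t) for the interval t at which
    reliability falls to the target.

    observed_reliability: fraction of calibrations found in tolerance
    at the end of the current interval (must be < 1.0 to fit a rate).
    """
    # Fit the decay rate from the observed end-of-interval reliability.
    lam = -math.log(observed_reliability) / current_interval_months
    # Interval at which modeled reliability drops to the target.
    return -math.log(target_reliability) / lam

# Example (assumed data): 99% observed in-tolerance at 12 months,
# with a 95% target, allows roughly a 61-month interval under this
# simplified model -- far more generous than Method A3 would be.
print(round(new_interval(12, 0.99), 1))
```

The gap between this toy result and the 14.4-month cap from the Method A3 tester is exactly why the recommended practices exist: a small sample of in-tolerance results does not justify the confidence this naive fit implies.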
The more history you acquire, the higher the confidence in how long it will remain in tolerance. Method A3 uses the average interval and the maximum interval in statistically determining new intervals. IMPORTANT NOTE: When adjusting intervals, use the actual amount of time between calibrations (old cal date to new cal date), not the assigned interval (12 months). If you went significantly over 12 months during any of the previous calibrations on your meters, that could increase the allowable new interval (per that recommended practice).
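The actual-elapsed-time point is easy to get wrong in practice, so here is a short sketch of computing the average and maximum realized intervals from calibration dates. The dates are hypothetical, purely for illustration:

```python
from datetime import date

# Hypothetical calibration dates for one meter (assumed data).
cal_dates = [date(2020, 1, 15), date(2021, 2, 20), date(2022, 4, 1)]

# Actual elapsed intervals in months (approximated at 30.44 days
# per month), NOT the assigned 12-month interval.
intervals = [
    (later - earlier).days / 30.44
    for earlier, later in zip(cal_dates, cal_dates[1:])
]

average_interval = sum(intervals) / len(intervals)
maximum_interval = max(intervals)
print(round(average_interval, 1), round(maximum_interval, 1))
```

Note how both realized intervals in this example exceed 13 months even though the assigned interval was 12; feeding the assigned 12 months into the analysis would understate the demonstrated in-tolerance span.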
Also, as was previously posted, the customer assigns the interval (per ISO/IEC 17025). The manufacturer's interval is not binding; it is their recommendation. What that is based on depends on the manufacturer. You might even talk with someone at the manufacturer and ask them what it is based on, and what the risks might be with a two-year interval.
It could be that at each calibration the lab performs periodic maintenance on optics that would degrade (surface contamination buildup, for example) if left for two years.
It may well be fully acceptable to go two years (and in my opinion, there is a reasonable likelihood that is the case). But I feel duty bound to share the risk potential.
There are two opposing forces in any circumstance where the interval is adjusted:
1. COST: if the interval is too short, too much is spent on unneeded calibrations.
2. RISK/COST: if the interval is too long, there is ALWAYS a statistical increase in the risk of an undesired out-of-tolerance condition. I added COST to the second statement because it is an often-overlooked impact of too long an interval: the undesired out-of-tolerance condition may result in inaccurate decisions based on an out-of-tolerance reading.
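These two opposing forces can be sketched as a toy expected-cost model. All numbers here are assumed for illustration (this is not an RP-1 method): calibration spend falls as the interval grows, while the expected cost of operating out of tolerance rises, because the instrument spends a larger fraction of each interval in an undetected out-of-tolerance state.

```python
import math

def expected_annual_cost(interval_months, cal_cost, oot_cost_per_year,
                         reliability_at_12mo=0.95):
    """Toy annual-cost model for one instrument.

    cal_cost: cost of a single calibration (assumed figure).
    oot_cost_per_year: cost rate of operating out of tolerance
    (assumed figure). Time-to-OOT is modeled as exponential.
    """
    lam = -math.log(reliability_at_12mo) / 12.0
    x = lam * interval_months
    # Expected fraction of each interval spent (undetected) out of
    # tolerance under the exponential model.
    frac_oot = 1.0 - (1.0 - math.exp(-x)) / x
    annual_cal_cost = cal_cost * 12.0 / interval_months
    return annual_cal_cost + oot_cost_per_year * frac_oot

# Sweep candidate intervals: too short wastes calibration money,
# too long accumulates out-of-tolerance exposure.
for months in (3, 6, 12, 24):
    print(months, round(expected_annual_cost(months, 300, 50000), 2))
```

With these assumed costs the sweep shows a minimum in the middle of the range: shortening the interval below it buys little extra protection, and lengthening it past it trades calibration savings for growing out-of-tolerance exposure.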
Both of the above must always be factored into setting initial intervals and adjusting them. How the instrument is used and stored impacts how well it retains tolerance. How it is used also determines its potential impact.