Caution: soap box ahead!
Let me start by apologizing for coming in almost a month after the last post - Energy's problem may have been resolved by now - but all I can do is plead the super-busy excuse. (8a to 6p at the office, then 8p to midnite at home, blah blah blah)
But on to the issues at hand ...
Temperature measurements: when you are using thermocouples, always remember that there are two parts (at least!) to the measurement system - the meter and the thermocouple. Both parts have to be calibrated if you want to know how hot it is, even to "only" +/-5 degrees. There are many manufacturers with fantastic digital thermometer specs - readouts with resolution of 0.01 degree, and accuracy of +/-(0.05% of reading + 0.3 deg.C) and so on. BUT, when you connect that super-whizz-bang meter to an uncalibrated plain old type K (for example) thermocouple that you just whipped up, your system uncertainty instantly goes to the
thermocouple spec of +/-4 degrees Fahrenheit (+/-2.2 deg.C). (Provided I am remembering the ASTM tables correctly ... they are at the daytime office.) In most cases, the accuracy of the digital meter is trivial when compared to the thermocouple. On the other hand, thermocouples are cheap, and pretty reliable when used correctly. For "better" accuracy over a limited temperature range, thermistors might be another choice to consider. For "best" accuracy of course (and kilobuck co$t$) you can use a standard PRT system ... but it's overkill for Energy's needs.
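To see how lopsided the meter-vs-thermocouple contributions are, here is a quick sketch. The numbers are the illustrative specs from above (a +/-(0.05% of reading + 0.3 deg.C) meter and a standard-limit type K at about +/-2.2 deg.C), and the root-sum-square combination assumes the two contributions are independent:

```python
# Hypothetical specs for illustration: meter accuracy of
# +/-(0.05% of reading + 0.3 deg C), standard-limit type K
# thermocouple tolerance of +/-2.2 deg C (about +/-4 deg F).

def meter_uncertainty_c(reading_c):
    """Meter spec: 0.05% of reading + 0.3 deg C."""
    return 0.0005 * abs(reading_c) + 0.3

def system_uncertainty_c(reading_c, tc_tolerance_c=2.2):
    """Root-sum-square of meter and thermocouple contributions,
    assuming the two error sources are independent."""
    u_meter = meter_uncertainty_c(reading_c)
    return (u_meter**2 + tc_tolerance_c**2) ** 0.5

# At 200 deg C the meter contributes only 0.4 deg C; the
# thermocouple dominates the combined figure.
print(round(meter_uncertainty_c(200.0), 3))   # 0.4
print(round(system_uncertainty_c(200.0), 3))  # 2.236
```

The point the numbers make: spending money on a better meter buys you almost nothing until the thermocouple itself is calibrated.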
(By the way, calibrating those "super high accuracy" digital thermometers is a real pain. I don't know of a shipping DC microvolt-level thermocouple calibrator that can be automated and has the required uncertainty. [Does anyone else?] That means going back to the old-fashioned way using high-end DC calibrators, an ice point, and reference junctions - manual & slow.)
Time Interval measurements: (as opposed to time of day). I have some alternate opinions about some of the things mentioned. First, Energy is using a controller (as I understand it) that is set for a time interval. The actual time of day is irrelevant - the only consideration is the duration of a period of elapsed time. That is, what is the uncertainty associated with measuring an interval of 120 minutes?
- An AC electric wall clock that uses a shaded-pole synchronous motor (most do) is electrically locked to the AC power line frequency. In North America, that frequency is 60 Hz +/-0.02 Hz. That makes a pretty good secondary time interval standard, actually. Over 120 minutes the maximum possible error is 2.4 seconds, and that is only in the unlikely event that the power line frequency is all the way to one limit for the entire time. For Energy's purposes, it seems that a couple of seconds here and there is trivial.
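That 2.4-second figure follows directly from the line-frequency tolerance; here is the arithmetic spelled out:

```python
# Worst-case time-interval error of a synchronous wall clock,
# assuming the line frequency sits at one tolerance limit for
# the entire interval (numbers from the text: 60 Hz +/-0.02 Hz).

nominal_hz = 60.0
tolerance_hz = 0.02
interval_s = 120 * 60  # the 120-minute interval in question

# A fractional frequency error translates directly into the
# same fractional time-interval error.
max_error_s = interval_s * tolerance_hz / nominal_hz
print(max_error_s)  # 2.4
```

In practice the utilities steer the average line frequency back toward nominal, so the real error over two hours is normally far smaller than this worst case.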
- There is an easy way to get NIST-traceable time interval, and time of day traceable to the US Naval Observatory (where NIST gets it from!), for $50 or less. Go to your neighborhood consumer electronics store and get one of those "atomic" clocks. Get one with a signal quality indicator. Provided you can put it near an exterior wall, preferably oriented towards Fort Collins, Colorado, it will pick up and lock to the digital time code broadcast by WWVB (the low-frequency sister station of WWV), and you are traceable as long as its signal indicator is OK. Check its data sheet for its basic unlocked "digital clock mode" drift, and how many times per day it updates to WWVB. Simple math will give you the maximum possible error. On the one I have it is much less than the 1-second resolution, so it's irrelevant. AND it is the only clock in the house I do not have to adjust for Daylight Saving Time! It is easier than the computer thing, and you don't have to make a long-distance phone call to Boulder, Colorado.
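The "simple math" on the clock's worst-case error looks like this. The drift spec and sync count here are made-up datasheet values, just to show the calculation:

```python
# Worst-case drift of a radio-controlled "atomic" clock between
# WWVB syncs. The 10 ppm drift and once-a-day sync rate are
# hypothetical datasheet numbers for illustration only.

drift_ppm = 10.0        # free-running quartz drift, say +/-10 ppm
syncs_per_day = 1       # clock re-syncs to WWVB once per day
seconds_between_syncs = 86400 / syncs_per_day

max_drift_s = seconds_between_syncs * drift_ppm / 1e6
print(max_drift_s)  # 0.864
```

With those assumed numbers the worst-case error stays under the clock's 1-second display resolution, which matches my experience with the one on my wall.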
- I seem to recall that the average human response time for actuating a stopwatch is something less than 0.1 second. Over a long measured interval that quickly becomes irrelevant: 0.1 second is 0.1% at 100 seconds, and over the 120-minute period Energy is talking about it works out to about 14 ppm (the start and stop delays largely cancel, leaving roughly one 0.1-second error over 7200 seconds). Response time is important only when things are happening real close together, such as at the finish of a horse race -- that's why they use fully electronic timers now. Once again, for Energy's purposes that response-time factor may be irrelevant -- assuming his process does not end in the same way as a horse race! The main factor of concern is the accuracy of the timer itself over the period you are measuring. Norm is correct in quoting the ISO 9001 standard, but I think it is possible to demonstrate that any timer - even a personal digital watch with a chronograph (stopwatch) function - can meet this requirement, provided you document the process of checking it against a suitable standard. For instance, start the timer while noting the indicated time of day on an electric wall clock. Some period later, stop it while noting the time of day again. Determine the difference in seconds, and see if it meets your needs.
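The timer check described above reduces to one subtraction and one ratio. A minimal sketch, with made-up example readings (the function name and values are mine, not from any standard):

```python
# Sketch of the timer check: start the timer at a noted
# wall-clock time, stop it later, and compare the two. All
# readings here are hypothetical example values in seconds.

def timer_error(wall_start_s, wall_stop_s, timer_reading_s):
    """Return (error in seconds, error in ppm) of the timer
    relative to the reference clock over the checked interval."""
    reference_s = wall_stop_s - wall_start_s
    error_s = timer_reading_s - reference_s
    return error_s, 1e6 * error_s / reference_s

# Example: a 24-hour check in which the stopwatch read 2 s long.
err_s, err_ppm = timer_error(0.0, 86400.0, 86402.0)
print(err_s, round(err_ppm, 1))  # 2.0 23.1
```

If the ppm figure, scaled to your actual measurement interval, is small compared to your tolerance, the timer meets your needs - and the record of the check is your documentation.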
Notice that I have been attempting to differentiate between time interval and time of day. They are related only by the definition of the second.
- Time interval is the reciprocal of frequency, and is defined by physics - a particular vibration of some highly agitated Cesium atoms (9,192,631,770 of them per second, to be exact). NIST disseminates time interval and frequency through the WWV radio stations. It is also correlated with the GPS system, because every one of those satellites carries atomic frequency standards (cesium and/or rubidium) on board.
- Time of day is determined by astronomy - measuring the rotation of the planet on its axis and its orbit around the Sun. That is done by the US Naval Observatory (and equivalent observatories in other countries). The counting interval for time of day is the second (see above), but the number of seconds in a year varies. (Ever hear of "leap seconds"?) Time of day is disseminated by the USNO, and one way they do that is by the time codes and voice announcements on WWV. It is also available from the GPS system, and by telephone and over the Internet from the observatory.