garcas
Hi,
I am currently developing some models for evaluating measurement uncertainty in the calibration of acoustical instruments (sound level meters, SLMs), in conformance with the Guide to the Expression of Uncertainty in Measurement (GUM).
Although we are talking about acoustical instruments, the calibration is actually electrical. It consists of several tests, some more complicated than others. In the simplest ones, we apply a stimulus (e.g., a sinusoidal signal of a given amplitude and frequency) to the SLM with a function generator and then check the instrument's response. From an uncertainty point of view, this kind of test does not present any difficulty: I have identified all the sources of uncertainty (the signal generator's accuracy, resolution and calibration information; the SLM's resolution; etc.) and combined them to find the expanded uncertainty.
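For the simple tests, my combination is essentially the standard GUM root-sum-of-squares. A minimal sketch of what I do (the numerical values and distribution assumptions below are only illustrative placeholders, not my real budget):

```python
import math

# Illustrative-only standard uncertainties for a "simple" test, in dB.
# Rectangular (uniform) contributions are converted to standard
# uncertainties by dividing the half-width by sqrt(3); the generator
# calibration uncertainty is taken from its certificate (k = 2).
u_gen_accuracy   = 0.05  / math.sqrt(3)  # generator amplitude accuracy, +/-0.05 dB
u_gen_resolution = 0.005 / math.sqrt(3)  # generator setting resolution, +/-0.005 dB
u_gen_cal        = 0.03  / 2             # certificate: U = 0.03 dB with k = 2
u_slm_resolution = 0.05  / math.sqrt(3)  # SLM display resolution, +/-0.05 dB

# Combined standard uncertainty (uncorrelated inputs, sensitivity coefficients = 1)
u_c = math.sqrt(u_gen_accuracy**2 + u_gen_resolution**2
                + u_gen_cal**2 + u_slm_resolution**2)

U = 2 * u_c  # expanded uncertainty, k = 2 (approx. 95 % coverage)
print(f"u_c = {u_c:.4f} dB, U(k=2) = {U:.4f} dB")
```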
The problem arises with the "other" type of tests. In many cases, the tests are what we could call "differential" tests: we don't check the instrument's response to a single stimulus, but rather the difference between its indications for two consecutive (and possibly different) stimuli.
One clear example of this kind of test is the "differential linearity level test". In this test we apply a stimulus S1 and register the SLM response R1. Then we apply a second stimulus S2 whose amplitude is 1 dB higher and register the SLM response R2. We then have to check that the SLM indication has increased by 1 dB (R2 - R1 = 1 dB).
From a logical point of view, I would expect the uncertainty in this kind of test to be lower than in the "simple" tests, because the systematic errors should cancel. But from a GUM point of view, that doesn't seem to hold: I would have to combine the same sources of uncertainty twice (the signal generator's accuracy, resolution and calibration information; the SLM's resolution; etc.), once for the first measurement and once for the second. And since the combination is done in quadrature (root sum of squares), there is no cancellation. What am I missing? Should I assume that the two readings are completely correlated (in which case a subtraction term would appear)?
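To make the question concrete, here is a hedged sketch of what I mean by the "subtraction term". For the difference D = R2 - R1, GUM-style propagation with a correlation coefficient r between the two readings gives u²(D) = u²(R1) + u²(R2) - 2·r·u(R1)·u(R2), so a contribution common to both readings (the generator's systematic error) would cancel, while contributions independent between readings (e.g. SLM resolution) would not. The numbers are placeholders:

```python
import math

# Placeholder standard uncertainties (dB) for one reading, split into a
# contribution common to both readings (generator systematic error) and
# one that is independent for each reading (e.g. SLM display resolution).
u_common      = 0.05   # same generator error affects R1 and R2 -> fully correlated
u_independent = 0.03   # independent between the two readings -> uncorrelated

u_R = math.sqrt(u_common**2 + u_independent**2)  # standard uncertainty of one reading

def u_difference(u_r1, u_r2, r):
    """GUM propagation for D = R2 - R1 with correlation coefficient r."""
    return math.sqrt(u_r1**2 + u_r2**2 - 2.0 * r * u_r1 * u_r2)

# Treating the readings as uncorrelated (plain quadrature):
print("uncorrelated:       ", u_difference(u_R, u_R, r=0.0))

# Treating only the common (generator) part as correlated:
r_eff = u_common**2 / u_R**2   # effective correlation between R1 and R2
print("partially correlated:", u_difference(u_R, u_R, r=r_eff))
# With r_eff, the common contribution drops out and only the independent
# part remains: u(D) = sqrt(2) * u_independent.
```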
The thing is that in the documentation of a Bruel and Kjaer calibration system similar to mine, I read that the signal generator didn't need to be calibrated because all measurements were "differential" (this is not 100% true, but almost). Does this mean, as I suspect, that systematic errors should cancel, and thus errors from the generator would not contribute to the total measurement uncertainty?
Thanks for any help. I am quite desperate with this stuff.