# The GUM (Guide to the Expression of Uncertainty in Measurement) versus logic (?) - Acoustical


#### garcas

Hi,

I am currently developing some models for evaluating the uncertainty of measurement in the calibration of acoustical instruments (sound level meters, SLMs) conforming to the Guide to the Expression of Uncertainty in Measurement (GUM).

Although we are talking about acoustical instruments, it is actually an electrical calibration. The calibration consists of several tests, some more complicated than others. In the simplest ones, we apply a stimulus (e.g. a sinusoidal signal of a certain amplitude and frequency) to the SLM with a function generator and then check the instrument's (SLM's) response. From an uncertainty point of view, this kind of test does not present any difficulty to me: I have identified all sources of uncertainty (signal generator's accuracy, resolution and calibration information; SLM's resolution; etc.) and combined them to find the expanded uncertainty.

The problem arises with the “other” type of tests. In many cases, the tests are what we could call “differential” tests, meaning that we don't check the instrument's response to a single stimulus, but the difference in the instrument's indication for two consecutive (and possibly different) stimuli.

One clear example of this kind of test is the “differential linearity level test”. In this test we apply a stimulus S1 and register the SLM response R1. Then we apply a second stimulus S2 whose amplitude is 1 dB higher, and we register the SLM response R2. Then we have to check that the indication of the SLM has increased by 1 dB (R2-R1=1 dB).

From a logical point of view, I would expect the uncertainty in this kind of test to be lower than in the “simple tests”, because the systematic errors should cancel. But from a GUM point of view, this doesn't seem to be true: I would have to combine the same sources of uncertainty twice (signal generator's accuracy, resolution and calibration information; SLM's resolution; etc.), once for the first measurement and once for the second. And this combination is done in quadrature (root sum of squares), so there will be no cancellation. What am I missing? Should I suppose that the two readings are completely correlated (in which case a subtraction term appears)?
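As a sanity check on that subtraction term, here is a minimal sketch (all uncertainty values are illustrative, not from any real specification) of how a correlation coefficient r between the two readings changes the combined standard uncertainty of the difference R2 - R1:

```python
import math

# Hypothetical standard uncertainties for one reading (generator accuracy,
# generator resolution, SLM resolution), all in dB; values are made up.
u_gen_acc = 0.10
u_gen_res = 0.02
u_slm_res = 0.05

# Combined standard uncertainty of a single reading (root sum of squares).
u_single = math.sqrt(u_gen_acc**2 + u_gen_res**2 + u_slm_res**2)

def u_difference(u1, u2, r):
    """u of (R2 - R1) with correlation r: u^2 = u1^2 + u2^2 - 2*r*u1*u2."""
    var = u1**2 + u2**2 - 2 * r * u1 * u2
    return math.sqrt(max(var, 0.0))  # clamp rounding residue at r = 1

print(u_difference(u_single, u_single, 0.0))  # uncorrelated: sqrt(2) larger
print(u_difference(u_single, u_single, 1.0))  # fully correlated: cancels
```

With r = 0 the single-reading uncertainty grows by a factor of sqrt(2); with r = 1 it cancels completely, which is what intuition about a differential test expects.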

The thing is that in the documentation of a Bruel and Kjaer calibration system similar to mine, I read that the signal generator didn't have to be calibrated because all measurements were “differential” (this is not 100% true, but almost). Does this mean, as I suspected, that systematic errors should cancel and thus errors from the generator would not contribute to the total measurement uncertainty?

Thanks for any help. I am quite desperate with this stuff.


#### Ryan Wilde

Garcas,

Technically, the GUM is correct, but there are a few things you can do to lower your uncertainty.

1. A 1 dB step on a modern function generator does not generally alter the attenuator setting (a major source of level uncertainty) unless it requires a range switch; usually it only involves the leveling circuit.

2. The B&K procedure sounds as if it was monitoring the signal generator output with something along the lines of an AC voltmeter, hence the source accuracy is moot. That is actually the way I would go. If you can get your hands on something with 6-1/2 digits or more and a good frequency range, you're golden. Since you will once again be in the same voltmeter range, the baseline uncertainty cancels, and only the "ppm of reading" part would be correlated. Several voltmeters are actually specified separately for ratio, which is two measurements on a single range.
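To illustrate why the monitored ratio is attractive, here is a sketch (with a made-up gain error) showing that a common multiplicative error on both voltmeter readings drops out of the measured dB step:

```python
import math

# Illustrative sketch: a common gain error on both voltmeter readings
# cancels in the ratio, so it does not affect the measured dB step.
gain_error = 1.0005  # hypothetical +0.05 % gain error, same range both times
v1_true = 0.050                      # first level, V
v2_true = 0.050 * 10 ** (10 / 20)    # a true 10 dB step

v1_read = v1_true * gain_error
v2_read = v2_true * gain_error

# The gain error divides out of v2_read / v1_read.
step_db = 20 * math.log10(v2_read / v1_read)
print(step_db)  # still 10 dB despite the mis-reading of each level
```

Only errors that differ between the two readings (noise, nonlinearity, a range change) survive in the ratio, which is why a ratio-specified voltmeter on a single range is so effective here.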

Hope this helps a bit,

Ryan


#### garcas

Hi Ryan,

First of all, thanks for your help. It's the first time in weeks that I've gotten any.

I agree with you that a 1 dB step usually does not alter the attenuator settings, so I guess in those cases linearity is the most important source of uncertainty in the generator. But the problem is that manufacturers usually give a single accuracy figure in their specifications, so when computing the uncertainty due to the generator I cannot distinguish between the “attenuator uncertainty” and the “uncertainty due to linearity”.

I think it would be easier to work with an example. Let’s say my calibration consists of the following steps:
0. Connect the signal generator’s output to the sound level meter (SLM) electrical input.
1. Apply a sinusoidal signal of 94 dB to the SLM. The microphone sensitivity is 50 mV/Pa, so the generator should be configured to produce a sinusoidal signal of 50 mV.
2. Annotate the SLM’s indication (let’s say the indication is 94.1).
3. Apply a sinusoidal signal 10 dB higher, which means configuring the generator for a 158.5 mV signal.
4. Annotate the SLM’s indication (let’s say the indication is 104.5).
5. Check that the SLM’s response has increased by 10 dB, with a tolerance of +/- 0.4 dB.
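The stimulus arithmetic in the steps above can be sketched as follows (using the standard 20 µPa reference pressure; the values match the example, with 50 mV being the rounded form of 50.1 mV):

```python
# Sketch of the stimulus arithmetic: convert a sound pressure level to the
# equivalent generator voltage via the microphone sensitivity.
sensitivity = 0.050   # microphone sensitivity, V/Pa (from the example)
p_ref = 20e-6         # reference pressure for SPL, Pa

def stimulus_voltage(level_db):
    """Generator voltage equivalent to a given sound pressure level."""
    pressure = p_ref * 10 ** (level_db / 20)
    return sensitivity * pressure

v1 = stimulus_voltage(94)    # ~50.1 mV (the "50 mV" in step 1, rounded)
v2 = stimulus_voltage(104)   # ~158.5 mV, i.e. exactly 10 dB higher
print(round(v1 * 1000, 1), round(v2 * 1000, 1))
```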

Thus, my variable is the difference in indication (104.5-94.1=10.4). So I would have to check that 10.4 +/- U lies within +/-0.4 dB of the expected difference (10 dB). For that I have to calculate the uncertainty of measurement U. Here is my doubt, and this is what I was trying to ask in my last post: do I have to take the accuracy specifications of the generator into account when calculating the uncertainty of this measurement?

My logic says no, or more accurately, “not completely”. What I mean is: the offset error of the signal generator would cancel between the two readings, but linearity and gain errors would not. The problem is that I only have a single accuracy figure for the signal generator: its overall accuracy specification. So what do I do? Do I use it or not? I think that if I apply the GUM I have to add the generator accuracy in quadrature, so there would be no cancellation. I only see one GUM case where there would be some sort of cancellation: if we say that both measurements are correlated with a positive correlation coefficient.


#### Ryan Wilde

Yes, you most definitely have to take your generator's level accuracy into account. Actually, if you change the generator by the 10 dB you cited in your example, the only amount of change I can guarantee you didn't get is exactly 10 dB. You will have to determine what the actual change is, or take the level uncertainty at face value, which is generally going to be an unreasonably large figure for your application.

You are correct that a positive correlation coefficient would be appropriate. The problem is that the coefficient is unknown, because the function generator isn't spec'd for it. The only way to know would be to monitor it with an AC voltmeter, in which case you are probably better off using the uncertainty of the AC voltmeter (minus the noise-floor spec, which DOES cancel out directly). If you monitor, you don't need the uncertainty of the generator, because you are monitoring it.
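To connect a voltmeter's "ppm of reading" specification to the dB step, here is a hedged sketch (the 50 ppm figure is invented): for step = 20·log10(V2/V1), each reading's relative uncertainty is scaled by the sensitivity coefficient 20/ln 10 ≈ 8.69 dB, and any correlated part subtracts out.

```python
import math

# Hypothetical sketch: uncertainty of a dB step measured with a monitoring
# voltmeter whose "ppm of reading" spec gives a relative uncertainty u_rel.
u_rel = 50e-6             # illustrative 50 ppm-of-reading, not a real spec

k = 20 / math.log(10)     # sensitivity of 20*log10(V2/V1) to relative error

def u_step_db(u_rel1, u_rel2, r=0.0):
    """Standard uncertainty of the dB step, correlation r between readings."""
    var = u_rel1**2 + u_rel2**2 - 2 * r * u_rel1 * u_rel2
    return k * math.sqrt(max(var, 0.0))  # clamp rounding residue at r = 1

print(u_step_db(u_rel, u_rel))       # independent readings
print(u_step_db(u_rel, u_rel, 1.0))  # the correlated part cancels
```

Even the uncorrelated case lands well below a millidecibel here, which is why transferring the measurement to a good voltmeter is usually the cheaper path than characterizing the generator.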

In short, unless you are monitoring the generator (an HP 3458A, 34401A, or 3478A, or a Fluke/Wavetek/Datron 1281, etc., all have fairly low uncertainties with good frequency range), you have to either use the level uncertainty twice, perform tests on your generator to characterize its gain and linearity errors, or make an educated "guess", which I wouldn't suggest.

Ryan


#### garcas

Hi again,

That was really good, thank you. I think you are helping me a lot to find my way.

The only way I think I could evaluate the correlation coefficient would be by repeating this measurement several times and estimating it from the experimental results, because the coefficient would depend on both the generator and the sound level meter (SLM) under test, wouldn't it?

This would lower my uncertainty, but I would have to repeat each measurement several times, which means more calibration time (and I cannot afford that).

So what about estimating the correlation coefficient from previous calibration data? Does it make sense? I have a history of all the calibrations I have performed until now, so maybe I could do an “off-line” estimation of the correlation coefficient for each generator-SLM pair and then use this to calculate the uncertainty.

This way I wouldn't have to repeat each measurement several times, unless I had to calibrate an SLM that I haven't calibrated yet. What do you think about this approach?
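A sketch of the off-line estimation idea (the paired historical readings below are fabricated purely for illustration):

```python
import math

# Estimate the correlation coefficient between the two readings from
# historical calibration pairs (R1, R2). These values are made up.
r1 = [94.12, 94.08, 94.11, 94.09, 94.13, 94.10]          # hypothetical R1s
r2 = [104.51, 104.46, 104.50, 104.48, 104.52, 104.49]    # matching R2s

def corrcoef(xs, ys):
    """Sample Pearson correlation coefficient of paired data."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = corrcoef(r1, r2)
print(round(r, 3))   # close to +1 here: the two readings move together
```

Note the caveat from the discussion above: a coefficient estimated this way only captures reproducibility, so it says nothing about a common bias in the generator.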

Thanks again Ryan… (and anyone else on this?)
Guillermo


#### Ryan Wilde

Guillermo,

Using past history will give you reproducibility (lack of variation over time), but it will not show bias (is 10 dB actually 10 dB?).

For example, if my generator is actually supplying an 11 dB change and my SLM is reading this as 10 dB, I might unwittingly call it in tolerance. Next year, if my generator supplies an 11.02 dB change and my SLM reads 10.02 dB, I have fantastic reproducibility, but I am reproducing a false number.

Without knowing exactly what the generator is supplying, my data means very little, since the generator is not specified for that type of measurement. Hence, I monitor it with a piece of equipment that IS specified for that type of measurement, and transfer the accuracy to my supply, and therefore to my SLM.

Does that make sense?

Ryan


#### garcas

Hi Ryan,

I agree with you, but I think I am already keeping this in mind, as I calibrate my generator once a year and that information is also included in my uncertainty budget.

The problem is the correlation coefficient between two successive measurements. I wonder whether I should estimate it from previous data or from repeated measurements.

Another problem I have is how to include the uncertainties of corrections for systematic effects, according to GUM clause 4.1.4, when I have a nonlinear measurement model. I have already defined a model for evaluating the uncertainty of measurement in a simple test where I measure a certain quantity X. Now I have to develop an uncertainty model for a second test where we measure a quantity Y = X1/X2, where X1 and X2 are the results of two measurements of the quantity X. According to the GUM (4.1.4 and example H.2), there are two approximations for this case (the determination of Y = X1/X2). I think I should use the second approximation in example H.2, but I do not know how to include the uncertainties of the corrections for systematic effects in this approximation (the GUM says they should be included, but it does not say how).
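For what it's worth, here is a sketch of first-order propagation for Y = X1/X2 where a shared correction enters both readings and is handled through the covariance (all numbers are illustrative; this follows the law of propagation of uncertainty with correlated inputs in spirit, not example H.2 verbatim):

```python
import math

# Hypothetical sketch: Y = X1/X2 where each input carries an independent
# reading uncertainty u_r plus a shared correction b with uncertainty u_b
# (X_i = R_i + b). The shared correction correlates the two inputs.
x1, x2 = 0.1585, 0.0500   # illustrative measured values, V
u_r = 0.0002              # independent (random) uncertainty of each reading
u_b = 0.0005              # uncertainty of the shared correction

# Total uncertainty of each input; the shared correction contributes
# u_b^2 to the covariance between X1 and X2.
u1 = math.sqrt(u_r**2 + u_b**2)
u2 = math.sqrt(u_r**2 + u_b**2)
cov12 = u_b**2

# First-order sensitivity coefficients of Y = X1/X2.
y = x1 / x2
c1 = 1 / x2
c2 = -x1 / x2**2

# Law of propagation of uncertainty with one correlated pair.
u_y = math.sqrt((c1 * u1)**2 + (c2 * u2)**2 + 2 * c1 * c2 * cov12)
print(y, u_y)
```

Because c1 and c2 have opposite signs, the covariance term is negative and partly cancels the correction's contribution, which is the mechanism by which a shared systematic effect shrinks (though, for a ratio, does not eliminate) the combined uncertainty.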

Guillermo