Coming back to 'common sense': I do not for the life of me remember where I read it, but what I do recollect is that the origin lies in the 'rounding off' practice. That is, if the specified value is, say, 10 +/- 0.1, then a measured value anywhere between 10.05 and 10.14 would round to 10.1 at one decimal place, so it is implicitly treated as being at the upper limit. A similar argument applies to the lower spec limit of 9.9.
So in order to confirm conformance you need an instrument whose resolution is 1/10th of the specified tolerance, i.e. one reading to the second decimal place in the above example.
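The rounding argument can be sketched in a few lines of Python (the readings here are purely illustrative, and decimal arithmetic is used to avoid binary floating-point rounding quirks):

```python
# A minimal sketch of the argument above: spec is 10 +/- 0.1, so the stated
# upper limit is 10.1 when values are rounded to one decimal place.
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical true values near the upper spec limit.
readings = [Decimal("10.05"), Decimal("10.10"), Decimal("10.14")]

# An instrument resolving only one decimal place reports 10.1 for all three,
# so a borderline 10.14 cannot be separated from a clearly in-spec 10.05.
one_dp = [r.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP) for r in readings]
print(one_dp)  # all three display as 10.1

# Resolving the second decimal place (1/10th of the 0.1 tolerance)
# distinguishes them.
two_dp = [r.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) for r in readings]
print(two_dp)
```

This is just the numerical side of the argument; which rounding convention applies (half-up, half-even, etc.) depends on the standard or spec you are working to.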
I suppose this is common sense!


