J
Unique(?) situation.
Have to calculate an uncertainty budget for diameter. The gage is a custom-made comparator (by Mahr-Federal). The standards are gage blocks sent to NIST for calibration; the subjects are also gage blocks. Both the subjects and the Standards (Masters) come in varying sizes - let's say 10 different sizes - and in differing materials (ceramic, tungsten carbide, and steel).
We measure gage blocks for customers. They are mostly steel, but some ceramic and carbide blocks come in.
The comparator is set up with a Standard/Master. So to measure a 1/4 inch gage block, a 1/4 inch Master is put into the gage and it is zeroed out. Then the subject 1/4 inch block is measured. NIST says the uncertainty is .000002 in. When we measure the NIST Master we get .000007 in on average.
Room is class 10,000 with a constant 68 degrees F +/- 0.5. Masters are always at this temp. Subjects are allowed to settle to temp before measurement.
To calculate uncertainty, we took the Standards (Masters) and had both (2) technicians measure all 10 of them 3 times each and record the readings. The standard deviation was calculated across all of the readings - I assume this is our Type A first step. Again, our averages were about .000005 worse than what NIST got. How do we deal with the difference between our measurements and what NIST gets - or do we ignore what NIST gets and just factor in our own readings?
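For what it's worth, here is a minimal sketch of how that Type A step and the NIST cert usually fit together: the standard deviation of your repeated readings is the Type A repeatability term, and NIST's reported uncertainty on the master enters as a separate Type B term, combined by root-sum-of-squares. The reading values below are made up for illustration - substitute one master's actual 2-technician x 3-repeat data.

```python
import statistics

# Hypothetical readings (inches of deviation from nominal) for ONE master:
# 2 technicians x 3 repeats each. Values are illustrative only.
tech_a = [0.000006, 0.000008, 0.000007]
tech_b = [0.000007, 0.000005, 0.000009]

readings = tech_a + tech_b

# Type A standard uncertainty for this master: the sample standard
# deviation of the repeated readings.
u_type_a = statistics.stdev(readings)

# NIST's reported uncertainty on the master is a Type B contribution,
# combined with our repeatability by root-sum-of-squares.
u_nist = 0.000002
u_combined = (u_type_a**2 + u_nist**2) ** 0.5

print(f"Type A (repeatability): {u_type_a:.7f} in")
print(f"Combined with master:   {u_combined:.7f} in")
```

Put differently: you don't ignore NIST's number and you don't expect to match it - you carry it into the budget alongside your own repeatability.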
Do I have to factor thermal coefficients for all three materials used ?
Can I measure a 1/4 inch tungsten carbide subject with a 1/4 inch ceramic master?
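On those two questions, a rough way to see what the mixed-material pairing costs is to compute the worst-case length error from the CTE mismatch across your 0.5 degree F room band. The expansion coefficients below are typical handbook figures, not measured values for your blocks - this is only a sketch of the arithmetic.

```python
# Typical coefficients of thermal expansion (per degree C), approximate
# handbook values - substitute the figures for your actual blocks.
ALPHA_CERAMIC_PER_C = 9.2e-6   # zirconia ceramic, approx.
ALPHA_CARBIDE_PER_C = 5.5e-6   # tungsten carbide, approx.

nominal_in = 0.250             # 1/4 inch block
# Room spec: 68 F +/- 0.5 F -> worst-case 0.5 F offset from 68 F
delta_t_c = 0.5 / 1.8          # convert the 0.5 F band to Celsius

# If master and subject sit at the same (offset) temperature, the
# comparison error is driven by the DIFFERENCE in their coefficients:
error_in = nominal_in * abs(ALPHA_CARBIDE_PER_C - ALPHA_CERAMIC_PER_C) * delta_t_c

print(f"Worst-case CTE-mismatch error: {error_in:.9f} in")
```

If that number is small next to your other terms you can argue the ceramic-master-on-carbide-subject setup is workable, but the mismatch term still belongs in the budget for each material pairing.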
How can I factor gage uncertainty for a custom gage that we calibrate in house?
Should I be measuring something other than the NIST Masters because the NIST Masters are what we set up the comparator to?
Does anyone have an idea of what factors traditionally are considered in addition to standard deviation (type A) factors?
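The components below are the ones that commonly show up in gage-block comparison budgets besides the Type A standard deviation; the magnitudes here are placeholders, not your lab's values. Each enters as a standard uncertainty and the budget combines them by root-sum-of-squares, then expands with k=2.

```python
import math

# Illustrative budget (standard uncertainties, inches). Names follow
# common gage-block comparison budgets; values are placeholders.
budget = {
    "master calibration (NIST cert, k=2 -> /2)": 0.000002 / 2,
    "comparator repeatability (Type A)":         0.0000014,
    "comparator resolution (rect. -> /sqrt(3))": 0.000001 / math.sqrt(3),
    "thermal (CTE mismatch + dT)":               0.0000003,
    "elastic deformation / probing force":       0.0000002,
}

u_combined = math.sqrt(sum(u**2 for u in budget.values()))
U_expanded = 2 * u_combined   # expanded uncertainty, k=2 (~95 %)

for name, u in budget.items():
    print(f"{name:45s} {u:.7f}")
print(f"{'combined (RSS)':45s} {u_combined:.7f}")
print(f"{'expanded, k=2':45s} {U_expanded:.7f}")
```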
I'm in over my head and looking for guidance - thanks!