Comparing two different Ultrasonic Tester Calibration Techniques

tahirawan11

Involved In Discussions
Hi,

I would like to compare two 'calibration techniques'. We have an ultrasonic tester which is used to detect defects in the product. Before we start testing we need to calibrate the instrument. The instrument is calibrated by testing a 'reference plate': we know the 'true defects' (true value) of the reference plate, and we adjust 'two parameters' in the ultrasonic machine until we see the 'true defects'. Once the defects are visible in the tester, we say the instrument is calibrated and ready to use.

The two 'calibration techniques' differ from each other in the 'sound frequencies' used: technique A uses 2 dB and technique B uses 0.5 dB. Here are the results (three columns, sorry for the bad formatting). The technique with less variance will be the better one. Can anyone suggest a statistical analysis, such as ANOVA, that would tell which technique is better?


Tech P_A P_B

1 0 7
1 1 7
1 1 8
1 1 7
1 1 7
1 1 7
1 1 7
1 1 7
1 1 7
1 1 7
2 2 9
2 2 8
2 4 8
2 2 8
2 2 8
2 2 8
2 2 8
2 2 8
2 2 8
2 2 8


where Tech = technique number, P_A = parameter A and P_B = parameter B
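For reference, a minimal sketch (assuming Python with NumPy and SciPy are available) of one way the spread of the two techniques could be compared for each parameter, using the data above; Levene's test is a common test for equality of variances and is somewhat more robust to non-normal data than the classic F-test:

```python
# Sketch: compare the spread of each parameter between technique 1 and
# technique 2, using the readings posted above.
import numpy as np
from scipy import stats

# Parameter readings grouped by technique (copied from the table above)
p_a_tech1 = np.array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1])
p_a_tech2 = np.array([2, 2, 4, 2, 2, 2, 2, 2, 2, 2])
p_b_tech1 = np.array([7, 7, 8, 7, 7, 7, 7, 7, 7, 7])
p_b_tech2 = np.array([9, 8, 8, 8, 8, 8, 8, 8, 8, 8])

for name, g1, g2 in [("P_A", p_a_tech1, p_a_tech2),
                     ("P_B", p_b_tech1, p_b_tech2)]:
    stat, p_value = stats.levene(g1, g2)
    print(f"{name}: var(tech 1) = {g1.var(ddof=1):.3f}, "
          f"var(tech 2) = {g2.var(ddof=1):.3f}, "
          f"Levene p = {p_value:.3f}")
```

A small p-value would suggest the two techniques really do differ in variance for that parameter, and the sample variances printed alongside show which one is smaller.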

Thanks in advance
 

Hershal

Metrologist-Auditor
Trusted Information Resource
Re: Compare calibration techniques

Your devices are typically used for NDT, I believe. Welders use that type of tester, with the IIW blocks, to "calibrate" the units exactly as you describe.

The key to that, of course, is the block itself, which needs to be re-checked periodically. Many calibration labs can actually do that, minus specifically certifying the engineered defects.

The other approach will still use the IIW blocks, if I understand the post, but rather than marrying the display to an engineered defect, a change of signal power is used.

In pure theory, the dB approach is more accurate; however, since each of the blocks is slightly different (even though they are all machined from the same drawings and specs), ultimately you will still have to marry the scope part of the system to its block.

The best approach, if you want total confidence in the calibration, is to send the block to an accredited calibration lab for dimensional measurements (you will have to provide the IIW specs for the block), and to send the scope portion to an accredited calibration laboratory as well. Many labs have both dimensional and scopes (often listed under Time/Frequency) in their scope of accreditation, and so can take care of the calibration. As for the ultrasonic transducers, they generally either work or they don't, so calibrating them results in the actual dB response over some frequency range.

Remember to ask the lab for actual results - not a pass or fail - so you know exactly what your system in particular is doing.

Hope this helps.
 

gard2372

Quite Involved in Discussions
Speaking as a former NDT UT Level II, I would also like to think outside the box: SPC on your cal test procedure between the 2 different transducers when you set up your gates makes no difference to actual product testing if your test standard IIW block (or other) isn't the same material type as the material under test.

The material under product testing may have material properties such as fatigue, heat-treat characteristics, embrittlement, casting vs. forging, etc., that all play an important role in setting up the velocity (speed of sound in the material) in your flaw detector.

Bottom line: ensure that you have test standards made of the same material as the material you're actually testing, then run your SPC. :2cents:
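As a minimal sketch of what that could look like, here is an individuals (X) control chart calculation on the P_A readings for technique 1 from the table above; plain Python, and the 2.66 factor is the standard constant for moving ranges of size 2:

```python
# Sketch: individuals (X) control chart limits for one calibration parameter.
readings = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # P_A readings, technique 1

mean = sum(readings) / len(readings)

# Average moving range between consecutive readings
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Individuals-chart limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

print(f"mean = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for i, x in enumerate(readings, start=1):
    flag = " <-- outside limits" if (x > ucl or x < lcl) else ""
    print(f"reading {i}: {x}{flag}")
```

The same calculation could be repeated for each technique and parameter once the test standard material matches the product material.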
 