Because of normal variations in the properties of the materials used to construct radiation temperature sensors, new instruments must be individually calibrated to achieve even moderate accuracy. Initial calibration is typically performed by the sensor manufacturer, but periodic recalibration, whether in-house, by a third-party laboratory, or by the original manufacturer, is necessary if anything more than qualitative measurements are expected.
The ongoing accuracy of a non-contact temperature sensor depends on how the calibration is performed, how frequently the instrument is recalibrated, and the drift rate of the overall system. Ensuring the absolute accuracy of non-contact temperature measurement devices is more difficult than for most direct-contact devices, such as thermocouples and resistance temperature detectors (RTDs). Holding absolute error to 1% is difficult; even in the most sophisticated setups, better than 0.1% accuracy is seldom achieved. This arises, in part, from the difficulty of accurately determining the emissivity of real bodies. Repeatability and reproducibility are, however, more readily achievable than absolute accuracy, so there is no need to pay for absolute accuracy when consistency will do.
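To see why emissivity uncertainty limits absolute accuracy, consider a total-radiation thermometer governed by the Stefan-Boltzmann law: the detector signal is proportional to the emissivity times the fourth power of temperature, so a fractional error in the assumed emissivity produces roughly a quarter of that fractional error in the inferred temperature. The sketch below illustrates this with hypothetical values (the function name and all numbers are illustrative, not from any particular instrument):

```python
# Sketch: how emissivity uncertainty limits the absolute accuracy of a
# total-radiation thermometer (Stefan-Boltzmann law, hypothetical values).
# For a graybody, the detector signal S is proportional to eps * T**4, so the
# inferred temperature is T = (S / (eps * sigma)) ** 0.25; an error in the
# assumed emissivity propagates approximately as dT/T = (1/4) * d(eps)/eps.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def inferred_temperature(signal, emissivity):
    """Temperature (K) inferred from a total-radiation signal (W m^-2)."""
    return (signal / (emissivity * SIGMA)) ** 0.25

true_temp = 1000.0   # K, hypothetical target temperature
true_eps = 0.80      # actual emissivity of the surface (assumed)
signal = true_eps * SIGMA * true_temp ** 4

# Suppose the instrument's emissivity setting is 5% too low:
assumed_eps = 0.76
t_err = inferred_temperature(signal, assumed_eps) - true_temp
print(f"temperature error: {t_err:.1f} K")  # roughly +1.3% of 1000 K
```

A 5% emissivity error thus produces on the order of a 1% temperature error, which is why 1% absolute accuracy is hard to guarantee on real surfaces.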
Why Calibrate?
In any case, the radiation source must completely fill the instrument's field of view when the calibration is checked; if the field of view is not filled, the thermometer will read low. In some instruments, calibration against a blackbody reference standard is internal: a chopper alternately exposes the detector to the blackbody source and to the surface of interest. Effectively, this provides continuous recalibration and helps eliminate errors due to drift.
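The drift-cancelling effect of the chopper arrangement can be sketched as follows: because the detector sees the blackbody reference and the target through the same signal chain, slow gain drift multiplies both readings equally and cancels when the target signal is referenced to the known blackbody signal. All names and values here are illustrative assumptions, not a real instrument's interface:

```python
# Sketch of chopper-based continuous recalibration (hypothetical signals).
# The chopper alternately exposes the detector to an internal blackbody
# reference and to the target surface; normalizing the target reading by
# the reference reading cancels slow gain drift common to both.

def drift_corrected_ratio(target_counts, reference_counts):
    """Target signal normalized by the known blackbody reference signal."""
    return target_counts / reference_counts

gain = 1.10                  # detector gain after 10% drift (assumed)
target_radiance = 250.0      # arbitrary units, from the surface of interest
reference_radiance = 400.0   # arbitrary units, from the blackbody source

# Both readings pass through the same drifted signal chain:
ratio = drift_corrected_ratio(gain * target_radiance, gain * reference_radiance)
print(ratio)  # 0.625, unchanged by the common gain drift
```

Because the reference source is at a known temperature, this ratio can be converted back to a target temperature without the drifted gain ever being known explicitly.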