Safe Sterilization

Autoclaves must be validated at least once per year according to international regulations. Among the most important requirements are the measurement and analysis of time, temperature and pressure. The results of the analysis are used to check whether the autoclave complies with international standards such as ISO 17665. Furthermore, it is critical to determine whether the steam used in the autoclave is saturated.

If the steam is not saturated, there is a risk that the sterilization process is carried out at a high temperature, but with steam that carries very little heat energy. In that situation, temperatures at or above 134°C may be measured, yet the actual transfer of heat energy from the steam to the load in the autoclave is very low. In processes where efficient heat transfer is required, it is critical to use saturated steam. Only saturated steam transfers enough heat energy to the load to ensure efficient sterilization.

Non-Sterile Load at 134°C

If heat is added to saturated steam, there is a risk that it will superheat and dry out. Superheated steam's ability to transfer heat energy to the load is very poor: it must first cool down to the saturation temperature before it condenses and thereby releases a large amount of heat energy. The amount of heat energy transferred by the superheated steam during this cool-down is very limited compared to the energy released when it condenses.

This means that during the cool-down only a small amount of heat energy is transferred, and there is no positive contribution to the sterility of the load, even though the measured temperature is 134°C or higher. As a consequence, there is a risk that the autoclave fails to sterilize the load! This is why, during validation, it is critical to perform a steam analysis and document the presence of saturated steam.

Steam Analysis

The steam can be analyzed by comparing calculations with actual measurements of temperature and pressure. According to the laws of thermodynamics, temperature and pressure are mutually dependent when the steam in the autoclave is saturated.

The temperature values measured during the sterilization phase can be converted to theoretical pressure values using the same relation that underlies the saturated steam table. This theoretical pressure can then be compared to the actual measured pressure, which enables an analysis of the steam. If the theoretical pressure equals the actual measured pressure throughout the sterilization phase, it can be concluded that the measured parameters are mutually dependent, which is only the case when the steam is saturated.
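The comparison described above can be sketched in a few lines of code. This is a minimal illustration, not a validation tool: it uses the Antoine equation for water as a stand-in for a full saturated steam table, and the function names and the 0.1 bar tolerance are assumptions chosen for the example.

```python
# Sketch of a steam-quality check: convert a measured temperature to a
# theoretical saturation pressure and compare it with the measured pressure.
# Antoine constants for water, valid roughly 100-374 degC.

def saturation_pressure_bar(temp_c: float) -> float:
    """Theoretical saturation pressure (bar absolute) for a given
    temperature, via the Antoine equation for water."""
    A, B, C = 8.14019, 1810.94, 244.485         # log10(P/mmHg), T in degC
    p_mmhg = 10 ** (A - B / (temp_c + C))
    return p_mmhg / 750.062                     # mmHg -> bar

def steam_is_saturated(temp_c: float, measured_p_bar: float,
                       tolerance_bar: float = 0.1) -> bool:
    """Steam is considered saturated when the measured pressure matches
    the pressure predicted from the measured temperature."""
    return abs(saturation_pressure_bar(temp_c) - measured_p_bar) <= tolerance_bar

# At 134 degC, saturated steam sits at roughly 3.0 bar absolute.
print(round(saturation_pressure_bar(134.0), 2))   # ~3.02
print(steam_is_saturated(134.0, 3.02))            # True: pressure matches temperature
print(steam_is_saturated(134.0, 2.20))            # False: 134 degC at only 2.2 bar
                                                  # indicates superheated steam
```

In a real validation, the conversion would use an accepted steam-table formulation and acceptance criteria taken from the applicable standard, but the principle is the same: measured temperature and pressure must agree with the saturation curve.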


When steam reaches the surface of the load, it condenses and releases heat energy.

Heat Energy

The temperature of boiling water and of the steam above it in a pot is the same, but the heat energy per unit mass is significantly higher in the steam.
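Rounded steam-table figures make this difference concrete. The numbers below are standard approximate values at 100°C and atmospheric pressure, used here purely for illustration.

```python
# Rough energy comparison at 100 degC / 1 atm (kJ per kg, rounded
# steam-table values):
h_water_100c = 419     # enthalpy of saturated water at 100 degC
h_latent     = 2257    # latent heat released when steam condenses
h_steam_100c = h_water_100c + h_latent   # enthalpy of saturated steam

# Steam at 100 degC carries roughly six times the energy of water at the
# same temperature, and most of it is released only on condensation.
print(h_steam_100c)                        # 2676
print(round(h_latent / h_steam_100c, 2))   # 0.84: share released on condensation
```

This is why condensation on the load, not the measured temperature alone, does the sterilizing work.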

Saturated Steam

When steam in a chamber is pressurized, it can become saturated. Steam is saturated at the point where condensation first begins.

Superheated Steam

When water is heated from 0°C to the boiling point, it follows the line for saturated water until it can hold no more heat energy (A to B). When more heat is added, the water changes phase into a water/steam mixture and the heat energy increases (B to C). If the heating continues, the steam dries out and further heating raises the steam temperature. This is called superheated steam (C to D). As an example, heat could be added to the steam when it touches a hot surface.

Reliable Calibration Procedure

It was argued above that performing a steam analysis and documenting the presence of saturated steam is vital. Basing the process assessment on the measured temperatures alone may lead to a passed validation result even though the load is not sterile. Since the reliability of the steam analysis depends on a calculation based on actual measurements of temperature and pressure, the calibration procedure for the measuring equipment is critical.

The ISO definition of calibration is: “Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.” When calibrating a temperature sensor its readings will be compared to the true readings of a temperature standard traceable to a national metrology laboratory. The main reasons for having an instrument calibrated are:

  • To ensure readings from the instrument are consistent with other measurements.
  • To determine the accuracy of the instrument readings.
  • To establish the reliability of the instrument i.e. that it can be trusted.
  • To ensure that a specific process is within specifications – every time / every day.
  • To avoid overheating, which may result in change or loss of product.
  • To avoid insufficient heating, which may result in unsterile products.

Please note that calibration should not be confused with adjustment of a measuring system, nor with verification of calibration. The ISO definition of adjustment is: “Set of operations carried out on a measuring system so that it provides prescribed indications corresponding to given values of a quantity to be measured.” Note that calibration is a prerequisite for adjustment. When a temperature sensor is adjusted, its measuring properties are modified.

After a successful adjustment, the temperature sensor’s readings will be closer to the true readings of the temperature standard. The purpose of calibration and adjustment is to improve the accuracy and precision of the measuring equipment. The accuracy of the sensor reflects the systematic error of the measurement, which can be reduced through adjustment. The precision reflects the random error of the measurement. The random error cannot be eliminated, but it can be estimated using statistical methods.
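The distinction between systematic and random error can be sketched numerically. The readings below are hypothetical, invented for the example; only the statistical treatment is the point.

```python
import statistics

# Hypothetical repeated readings from a temperature sensor held at a
# stable calibration point; the traceable standard reads 134.00 degC.
reference_c = 134.00
readings_c = [134.21, 134.18, 134.25, 134.20, 134.22]

# Systematic error (accuracy): the mean offset from the standard.
# This is the component an adjustment can remove.
offset = statistics.mean(readings_c) - reference_c

# Random error (precision): the spread of the readings, estimated
# statistically. Adjustment cannot remove this component.
spread = statistics.stdev(readings_c)

print(round(offset, 3))   # 0.212 degC systematic offset
print(round(spread, 3))   # 0.026 degC random spread
```

Adjusting the sensor by the measured offset would center its readings on the standard, while the 0.026 °C spread would remain as the residual measurement uncertainty.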

Improving the Reliability of the Calibration

The reliability of the calibration procedure may be improved by performing calibrations according to predefined templates. Such a template could be created by the person responsible for metrology using calibration software. The software would allow that person to define the calibration points, the required temperature/pressure stability, and the maximum allowed deviation between the measuring equipment and the temperature and pressure standards.

Different templates can be generated, making it simple for other users to perform a calibration without having to specify calibration points or stability/deviation criteria. The reliability of the calibration procedure can also be improved by ensuring that the readings from the measuring equipment, as well as from the temperature and pressure standards, are logged and stored in the calibration software automatically. Human errors may occur if the temperature, pressure and time indications have to be registered manually; automatic data logging can reduce this risk significantly.
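A calibration template of this kind can be pictured as a small data structure. This is a minimal sketch under stated assumptions: the class name, fields and pass/fail rule are illustrative, not the API of any real calibration software.

```python
from dataclasses import dataclass

@dataclass
class CalibrationTemplate:
    """Illustrative template: calibration points plus an acceptance limit."""
    points_c: tuple          # calibration points, degC
    max_deviation_c: float   # max allowed deviation from the standard, degC

    def evaluate(self, standard_c: dict, device_c: dict) -> bool:
        """Pass only if the device agrees with the standard at every
        calibration point, within the allowed deviation."""
        return all(
            abs(device_c[p] - standard_c[p]) <= self.max_deviation_c
            for p in self.points_c
        )

# Typical sterilization temperatures as calibration points (hypothetical data).
template = CalibrationTemplate(points_c=(121.0, 134.0), max_deviation_c=0.25)
standard = {121.0: 121.02, 134.0: 134.01}
device   = {121.0: 121.15, 134.0: 134.20}
print(template.evaluate(standard, device))   # True: both points within 0.25 degC
```

Because the points and criteria live in the template, an operator running the calibration cannot accidentally use the wrong limits, which is exactly the reliability gain described above.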

Finally, the reliability can be improved by using systems that document the relation between the measuring equipment and the temperature and pressure standards in a calibration report automatically.

Quality and Design

The quality and design of the measuring equipment also affect the calibration results and should be considered. Resistance temperature detectors (RTDs) exhibit low drift and excellent linearity, both of which contribute to a better calibration result. Common measuring elements used in RTDs are Pt-100 and Pt-1000. When thermocouples are used, cold junction compensation is crucial.

At least one cold junction compensation per measuring system is required, but the calibration results can be improved significantly by using measuring equipment with cold junction compensation for each individual thermocouple channel. It is common to calibrate and adjust thermocouple systems before and after use, and it is typical that the offset values and adjustments are stored on the computer where the calibration software is installed. In that case, the adjustment follows the computer rather than the thermocouple, making the measuring properties of the thermocouples dependent on the use of a specific computer.

This decreases the flexibility of the thermocouple-based measuring equipment and adds another risk factor (the computer). The situation can be avoided by using measuring equipment that allows the adjustment to be stored in each individual thermocouple. The equipment can then be used as a standalone system or with any computer while maintaining its state of calibration.

Calibration Interval

To ensure safety and product integrity, calibrations should be performed throughout the lifecycle of the measuring equipment. The purpose is to guard against changes in performance over time (drift). To establish the calibration interval, a risk assessment is necessary, and these factors should be taken into consideration:

  • What is the risk that the measuring equipment is faulty?
  • To what degree do potential inaccuracies impact product quality/sterility?
  • What do previous calibrations of the measuring equipment show?
  • What does the measuring equipment’s manufacturer recommend?
  • What is the required level of reliability and accuracy?
  • When in use, how is the measuring equipment being handled?
  • What are the risks of mechanical damage and shocks?
  • What are the risks and consequences of establishing a calibration interval that is too long?

In conclusion, failing to document the presence of saturated steam by performing a steam analysis may lead to a passed validation result even though the load in an autoclave is not sterile. The steam analysis is based on actual measurements of temperature and pressure, which makes the calibration procedure, and the quality and design of the measuring equipment, crucial.