Setting priorities starts with a risk assessment. The method used for risk assessment should be objective, simple to apply, and may differ between inspecting authorities. The information gathered in the previous step serves as input. The output of the risk assessment is a set of assigned priorities that can be defined as objectives.
The main goal of a risk assessment is to prioritize the workload of an inspecting authority. Within the framework of the IED, the assessment yields an inspection frequency of site visits for each inspection object. Prioritization is necessary because inspecting authorities have limited resources (inspectors and finance), which should be distributed among the inspection objects in an accountable way. In a risk-based approach, most inspection effort should be spent on the objects with the highest risks (highest risk first).
Limited resources on the one hand, and the multitude and variety of statutory tasks for which authorities are responsible on the other, make it necessary to set clear priorities. Priorities are set using the outcome of the risk assessment, which could be a list or an overview of all the identified/selected installations and activities and their respective risks. On the basis of their assessed risks, these installations and activities can be classified, for example, as ‘high risk’, ‘medium risk’ and ‘low risk’.
In addition, the inspection approach for each level can differ: the higher the risk level, the more attention the object will receive from the inspecting authority.
The inspection approach consequently also determines the claim on the available resources, and is therefore equally relevant for the inspection plan and the inspection schedule.
Risk assessment
There are many definitions of the concept of “risk”. For assessing the risks of industrial activities we use the following definition: the risk of an activity in inspection planning is the (potential) impact of the activity on the environment or on human health during periods of non-compliance with statutory regulations or permit conditions.
To begin, it is necessary to make some basic assumptions and to define concepts:
Risk is a function of the severity of the consequence (the effect) and the probability that this consequence will occur. In this guidebook, risk is therefore defined as: Risk = f (effect, probability)
Effect depends on the source (how powerful is it?) and on the receptor (how vulnerable is it?): what is the impact of the source on the receptor? In this guidebook, effect is represented by Impact Criteria (we realize that in this concept, Impact Criteria can also include an element of probability).
Probability is considered to be a function of the level of management, the level of compliance with laws, regulations and permits, the attitude of the operator, the age of the installation, etc. In this guidebook, probability is represented by Operator Performance Criteria.
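The basic relation Risk = f (effect, probability) can be sketched as a small function. The 1-5 scoring scales, the function name and the choice of multiplication for f are illustrative assumptions, not a prescribed scoring scheme:

```python
def risk(effect: int, probability: int) -> int:
    """Combine an effect score and a probability score (both assumed
    to be on a 1-5 scale) into a single risk score; here the
    function f is simply multiplication."""
    if not (1 <= effect <= 5 and 1 <= probability <= 5):
        raise ValueError("scores are expected on a 1-5 scale")
    return effect * probability

# A powerful source near a vulnerable receptor, run by a poorly
# performing operator, yields a high risk score:
print(risk(effect=5, probability=4))  # → 20
```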
In this section the Impact Criteria, the Operator Performance Criteria and the methods to determine the risk are explained further. Because not all criteria are equally important, we also address the topic of weighting here.
Impact Criteria (IC)
To assess the effect, the object is rated against impact criteria. The impact criteria can differ between inspecting authorities and tasks. When assessing the risk of IPPC (IED) installations, appropriate impact criteria include: quantity/quality of air pollution; quantity/quality of water pollution; (potential) pollution of soil and groundwater; waste production or waste management; amount of dangerous substances released or present; and local nuisance (noise, odour). See factsheet 3.02 for a full list of impact criteria.
Factsheet 3.02
IC1 = the amount of the substance that is emitted
IC2 = the distance to and vulnerability of the surroundings or receptor
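As an illustration of how two such impact criteria could be combined into one effect score, the following sketch takes the higher of the two pre-scored values. The function name, the 1-5 scale and the use of the maximum are assumptions; an authority could equally define the combination differently:

```python
def effect_score(ic1_amount: int, ic2_receptor: int) -> int:
    """Combine IC1 (amount of the substance emitted) and IC2
    (distance to and vulnerability of the receptor), each
    pre-scored on an assumed 1-5 scale, by taking the higher
    of the two."""
    return max(ic1_amount, ic2_receptor)

# A small emission (IC1 = 2) close to a vulnerable receptor (IC2 = 5)
# still yields a high effect score:
print(effect_score(2, 5))  # → 5
```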
Operator Performance Criteria (OPC)
Probability is considered to be influenced by the quality of management, the level of compliance with laws, regulations and permits, the attitude of the operator, the age of the installation, etc. To take this into account, the object can be scored against operator performance criteria, for example: attitude; compliance record; the implementation of an environmental management system, e.g. EMAS; and the age of the installation.
Operator performance criteria can influence the risk in a positive way (e.g. good compliance) or in a negative way (e.g. the age of the installation). See factsheet 3.03 for a full list of operator performance criteria.
Factsheet 3.03
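A minimal sketch of how a net operator performance score could shift an impact-based risk score in either direction. The function name, the 1-5 scale and the simple additive adjustment are assumptions for illustration:

```python
def adjust_for_operator(impact_risk: int, opc_score: int) -> int:
    """Shift an impact-based risk score (assumed 1-5 scale) by a net
    operator performance score: a negative opc_score (e.g. a good
    compliance record or a certified environmental management system)
    lowers the risk, a positive one (e.g. an old installation or a
    poor attitude) raises it. The result is clamped to the 1-5 scale."""
    return max(1, min(5, impact_risk + opc_score))

print(adjust_for_operator(4, -1))  # good performance → 3
print(adjust_for_operator(4, +2))  # poor performance → 5 (clamped)
```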
Determination of the risk category
Different methods for a risk-based approach are in use across Europe. These methods can be classified into four groups: Linear Mean Value, Mean Value of Risk, Maximum Value and the Rule-based method.
The first three groups work as follows:
- Linear Mean Value: Risk = (C1W1 + C2W2 + … + CnWn)/n
- Mean Value of Risk: Risk = (C1W1 + C2W2 + … + CnWn)/n * P
- Maximum Value: Inspection frequency = Max(IT1, IT2, …, ITn)
Where:
C = impact criterion
W = weighting factor
P = probability of occurrence
Max = maximum of
IT = inspection task with fixed frequency
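The three formulas above can be sketched directly. The function names and the example scores are illustrative assumptions; only the formulas themselves come from the text:

```python
def linear_mean_value(criteria, weights):
    """Risk = (C1*W1 + C2*W2 + ... + Cn*Wn) / n"""
    return sum(c * w for c, w in zip(criteria, weights)) / len(criteria)

def mean_value_of_risk(criteria, weights, probability):
    """Risk = (C1*W1 + C2*W2 + ... + Cn*Wn) / n * P"""
    return linear_mean_value(criteria, weights) * probability

def maximum_value(task_frequencies):
    """Inspection frequency = Max(IT1, IT2, ..., ITn)"""
    return max(task_frequencies)

criteria = [4, 2, 3]        # impact criterion scores C1..C3 (example values)
weights = [1.0, 0.5, 1.0]   # weighting factors W1..W3 (example values)
print(linear_mean_value(criteria, weights))        # (4 + 1 + 3) / 3 ≈ 2.67
print(mean_value_of_risk(criteria, weights, 0.5))  # ≈ 1.33
print(maximum_value([1, 2, 4]))                    # → 4
```

Note how in both mean value methods a single very high criterion score can be levelled out by low scores on the other criteria, which is exactly the weakness the Rule-based method described below addresses.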
All systems work either with a database or a spreadsheet, within a network or on a stand-alone system. Although most methods and tools are copies of systems used in other organizations or Member States, they have all been tailor-made to fit the exact needs of the inspecting authority. There are no good or bad systems; each comes with its own advantages and disadvantages.
Rule-based method (IRAM)
The fourth group is the Rule-based method, IRAM (Integrated Risk Assessment Method). This method was developed by the IMPEL easyTools project team by combining the advantages of the three methods above while limiting their disadvantages.
IRAM also differentiates between impact criteria, probability criteria and risk categories. The scores of the impact criteria are directly linked to the risk categories and therefore to the inspection frequencies, similar to the Maximum Value method. In the Maximum Value method a specific inspection task, such as a Seveso inspection, induces the highest inspection frequency; in IRAM, by contrast, the inspection coordinator decides before the start of the assessment how many highest scores are needed to induce the highest inspection frequency. Within IRAM this is called “the Rule”. The more impact criteria are used for the assessment, the higher the number of highest scores needed to induce the highest inspection frequency. This is a clear difference from the mean value methods: the highest scores cannot be levelled out by low scores on other criteria.
IRAM Principles:
- The inspection frequency is determined by the value of the highest score;
- The inspection frequency is reduced by one step, if the set minimum number of highest scores (called “the Rule”) is not met;
- The inspection frequency can be changed by only one step up or down based on operator performance;
- The higher the sum of scores, the longer the inspection time.
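The principles above can be sketched as follows. The function name, the 1-5 scale and the mapping of risk category to score are assumptions for illustration; the highest-score logic, “the Rule” and the one-step operator adjustment come from the principles listed:

```python
def iram_category(impact_scores, rule, operator_step=0):
    """Rule-based (IRAM) sketch: the risk category (and hence the
    inspection frequency) follows the highest impact score, is dropped
    one step when fewer than `rule` criteria reach that score, and is
    then shifted at most one step by operator performance.

    impact_scores: impact criterion scores, assumed on a 1-5 scale
    rule:          minimum number of highest scores ("the Rule")
    operator_step: -1, 0 or +1 derived from operator performance
    """
    highest = max(impact_scores)
    category = highest
    if impact_scores.count(highest) < rule:
        category -= 1                               # the Rule is not met
    category += max(-1, min(1, operator_step))      # at most one step up/down
    return max(1, category)

# Three criteria, only one at the top score of 4, Rule = 2:
print(iram_category([4, 2, 1], rule=2))                    # 4 stepped down → 3
# Good operator performance lowers it one more step:
print(iram_category([4, 2, 1], rule=2, operator_step=-1))  # → 2
# With two criteria at the top score, the Rule is met:
print(iram_category([4, 4, 1], rule=2))                    # → 4
```

Note how a single high score cannot be averaged away by low scores on other criteria, unlike in the mean value methods.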
See factsheet 3.04 for more details on IRAM.
Factsheet 3.04
Output: Assigned priorities