+1 925-885-9353 morris@riskcom.global


This is the first Practical Risk Bulletin from RiskCom.
In this first bulletin, rather than investigating previous accidents and suggesting practical advice, we wanted to take a look at keeping your balls in the air. I’m sure we have all heard phrases such as “I’m juggling too many priorities at the moment”.


How many times have managers been asked to take on additional actions on top of an already busy job? Examples could include:
“Jack, would you mind getting all the audits closed out before the end of the quarter.”
“Rhonda, could you make sure the revised emergency response plan framework is implemented throughout the fleet by year end.”
“Steve, I know you have a lot on, but could you just …”
And so it goes on…
I like to think of this in terms of juggling – you might have four or five balls in the air and someone throws you another. Sometimes, everything ends up on the floor.
This got us thinking – is there a better, more effective way?



If you could keep only four balls in the air which ones would/should they be?

With literally hundreds of initiatives and actions encompassed by the term safety (or risk) management, we have long been advocates of simplification, practicality and FOCUS.
If you look through the 15 to 20 main sections of a typical safety management system description, they all seem important. Personally, I have not been able to cross any out, and these documents are getting bigger by the year, not smaller.
But are they equal, or are some more equal than others?

Keeping these four balls in the air would be a good start (hypothesis)

Imagine you are forced into the situation where you can juggle four balls only – prioritize only four initiatives. They’d have to be good. What four would you choose?
This is our starter for discussion:
1. Identify your major hazards and make sure critical barriers are in place and working correctly.
2. Employ competent, disciplined people and keep them that way. Maintain a culture of following procedures.
3. Develop and maintain statements of operational boundaries (SOOBs) or manuals of permitted operations (MOPOs).
4. Prepare for the worst.

Why did we come up with these four?

We came up with these four because, firstly, it is impossible to do a good job of managing hazards unless they are identified and understood. Part of managing hazards is having appropriate risk controls in place – critical controls.
Secondly, having reviewed many investigations following major accidents, it is common for people to have been at the center of the accident – either deliberately or inadvertently not doing the right things. People often don’t “mean” to break rules or take short cuts, but sometimes the environment can lead them down that path.
In a similar vein, procedures are also a common issue cited as a cause of accidents, i.e. poor procedures or a lack thereof. We have already covered people not following them. SOOBs and MOPOs can greatly help with establishing effective procedural controls for an operation.
Finally, why prepare for the worst? Well, in our experience it is almost always prohibitively expensive to “design out” the worst case scenarios – there will always be some residual risk, albeit usually low.

Ball 1 – Identify major hazards, define & verify safety critical barriers

Use your hazards and potential accident scenarios to identify your critical barriers (commonly referred to as safety critical elements) – those that prevent hazard realization and those that are used for response and recovery. Bow tie analysis is a great way to do this – see image below.

[Image: RiskCom bow tie diagram]


The items labeled “controls” or “safety critical elements” (SCEs) in blue are critical. It is important to note that it is normal to have two or more barriers so that there is at least one level of redundancy – only one is shown in the diagram to keep it simple.
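As a back-of-the-envelope illustration of why that redundancy matters (the failure probabilities below are made up for the sketch, and the barriers are assumed independent), barriers on the same path multiply their probabilities of failure on demand, so adding a second barrier reduces the chance of reaching the top event dramatically:

```python
# Illustrative sketch only: assumes independent barriers with made-up
# probabilities of failure on demand (PFD). The top event on a bow tie
# path is reached only if every barrier on that path fails.
def top_event_probability(barrier_pfds: list[float]) -> float:
    p = 1.0
    for pfd in barrier_pfds:
        p *= pfd  # independent failures multiply
    return p

single = top_event_probability([0.1])         # one barrier
redundant = top_event_probability([0.1, 0.1]) # add a second: ~0.01
print(single, redundant)
```

The numbers are hypothetical, but the shape of the argument is why the redundancy mentioned above is treated as normal practice.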
Once these are identified, it is important to develop a set of criteria that defines how critical elements should perform when needed – referred to as performance standards.
Having performance standards alone, though, is not enough – it is necessary to have a system to check them regularly to make sure that they will continue to work properly. In the offshore world this is often referred to as the written scheme or verification scheme. Historically, performance standards have usually been defined by four criteria:
1. System definition and role
2. Function
3. Reliability
4. Survivability
Using these criteria, each safety critical element should be defined in such a way that its performance can be verified. Regular verification increases assurance that it will work in practice when required. This is not new (although some organizations would have you believe otherwise) – I was involved in fire pump testing offshore 30 years ago.
Ideally, establishing safety critical elements and defining performance standards should start during design and continue through operations. For installations already in operation and without performance standards, this can mean trawling through design documentation, regulations, guidance and standards to pull them together. The UK HSE has a useful semi-permanent circular (SPC) on verification of the suitability of Safety Critical Elements (SCE) on existing installations: UK HSE Verification SPC.
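To make the four criteria concrete, here is a minimal sketch of how a performance standard for one SCE could be recorded and checked against verification results. Everything here is hypothetical (the element name, criteria wording and test results are invented for illustration), not a prescribed format:

```python
from dataclasses import dataclass

# Hypothetical sketch: one performance-standard record per safety critical
# element, using the four historical criteria described above.
@dataclass
class PerformanceStandard:
    system_definition: str  # 1. System definition and role
    function: str           # 2. Function (what it must do on demand)
    reliability: str        # 3. Reliability (e.g. availability target)
    survivability: str      # 4. Survivability (conditions it must endure)

@dataclass
class VerificationResult:
    criterion: str
    passed: bool

def verify(element_name: str, results: list[VerificationResult]) -> bool:
    """Return True only if every criterion passed its latest check."""
    failed = [r.criterion for r in results if not r.passed]
    if failed:
        print(f"{element_name}: FAILED on {', '.join(failed)}")
        return False
    print(f"{element_name}: all criteria verified")
    return True

# Example: an invented offshore firewater pump record and test outcome.
fire_pump_ps = PerformanceStandard(
    system_definition="Firewater pump – supplies deluge and hydrant systems",
    function="Deliver rated flow within 30 s of demand",
    reliability="Start on demand; monthly test run",
    survivability="Remain operable during and after a gas release event",
)
results = [
    VerificationResult("function", True),
    VerificationResult("reliability", True),
    VerificationResult("survivability", False),
]
ok = verify("Firewater pump", results)  # one failure -> not verified
```

The point of the sketch is simply that once the four criteria are written down per element, the verification scheme becomes a repeatable check rather than a judgment call.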

Ball 2 – Employ disciplined and competent people


People come in a multitude of shapes, sizes and personalities. Some people are naturally inclined to follow rules, whereas others are naturally inclined to break them. As described above, it is often the rule breakers who cause accidents because they are more likely to skip procedures.
Consequently, it seems to make sense to employ people who are naturally more disciplined and, therefore, more likely to follow procedures.
There is a multitude of personality profiling tests that can be used to help make sure you get the right people. Over the years we have used various methods successfully to get the right mix of people in our risk practices.
Only last month the Wall Street Journal published an interesting article about the increased use of personality testing as part of the recruitment process. WSJ article.
Culture is also important to consider when looking at avoiding procedural violations. Culture is often described as “the way we do things around here”. I’m sure we have all heard about accidents that occurred in an environment where it was quite normal for people to do their own thing and not follow procedures. Although difficult to solve, it is down to management to make sure this type of rule breaking does not escalate.
There have to be consequences for people who do not follow procedures.
So what about competency? NOPSEMA, the Australian government regulator for oil and gas, has very good guidance to help organizations develop and maintain training and competency management systems. NOPSEMA Competency Assurance.

Ball 3 – Procedures and statement of operational boundaries (SOOB or MOPO)

Poor procedures or no procedures are a common cause of accidents. Almost every accident investigation recommends an improvement to procedures.
Poor procedures can take many forms from being voluminous and overly complicated to being too simplistic and missing important information.
The limitation of procedures is compounded by the complexity of trying to define combined or simultaneous operations (ComOps or SimOps) together with situations that potentially increase risk – for example, missing or impaired safety barriers combined with storms.
Even the most experienced person in charge can have difficulty making decisions in such circumstances. It is unreasonable to expect anyone to carry hundreds of permutations of what is acceptable in their heads.
This is where the manual of permitted operations (MOPO) or statement of operational boundaries (SOOB) can make operations simpler and safer.
The essence of the SOOB is to define which operations can and can’t be done together, and what is permitted when critical systems are impaired or other external limitations apply. An example of a SOOB in matrix form is shown below.

[Image: example SOOB matrix]

In this example, hot work cannot be done at the same time as well test operations – the intersection of those two activities is shown with a red square, meaning these combined operations are not allowed. Similarly, helicopter operations cannot be conducted if the deluge system is impaired – an operation with an impaired safety system.
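The matrix idea translates naturally into a simple lookup. The sketch below (activity names and rules are hypothetical, loosely following the example above) stores each “red square” as an unordered pair of activities that must not run simultaneously:

```python
# Minimal sketch of a SOOB/MOPO lookup: each "red square" in the matrix is
# stored as an unordered pair of operations that must not run together.
# All activity names and rules here are invented for illustration.
DISALLOWED = {
    frozenset({"hot work", "well test"}),            # red square in the example
    frozenset({"helicopter ops", "deluge impaired"}),
    frozenset({"crane lift", "diving ops"}),         # hypothetical extra rule
}

def permitted(activity_a: str, activity_b: str) -> bool:
    """Return True if the two operations may be carried out together."""
    # frozenset makes the check symmetric: order of arguments is irrelevant.
    return frozenset({activity_a, activity_b}) not in DISALLOWED

print(permitted("hot work", "well test"))   # False – combined ops not allowed
print(permitted("well test", "hot work"))   # False – order does not matter
print(permitted("hot work", "crane lift"))  # True – no red square here
```

A real SOOB would carry far more context (impaired-system states, weather limits, escalation rules), but the value of the matrix is exactly this: the person in charge does a lookup instead of holding hundreds of permutations in their head.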
The SOOB or MOPO should be prepared in a workshop environment similar to a hazard identification (Hazid) or hazard and operability study (Hazop). Team selection is very important to make sure that there are sufficient people present who are currently acting, or have acted, as the person in charge.

Ball 4 – Prepare for the worst

The right hand side of the bow tie diagram should identify the worst potential consequences. It is rarely practicable to design out the risk of the worst-case scenarios even though the risk is normally very low. One has to, therefore, prepare to respond in an effective way to reduce the worst potential consequences.

[Image: RiskCom bow tie diagram]


What this means in practice, however, is less clear-cut. “How far should I go? I don’t have an unlimited budget.”
Rather than use a modern-day example and run the risk of upsetting people, I will first go back to the Titanic accident.
The Titanic hit an iceberg on 14 April 1912, causing more than 1,500 deaths. The vessel had been labeled unsinkable and, as such, did not carry enough lifeboats for all the passengers and crew on board.
The sinking and subsequent death toll raised so many questions about safety standards that the United Kingdom Government proposed holding a conference to develop international regulations. And so the International Convention for the Safety of Life at Sea (commonly known as SOLAS) was born in 1914. See “a brief history of SOLAS”: The Conference, which was attended by representatives of 13 countries, introduced new international requirements dealing with safety of navigation for all merchant ships; the provision of watertight and fire-resistant bulkheads; life-saving appliances; and fire prevention and fire fighting appliances on passenger ships.
Safety standards and what is expected of duty holders (owners, operators, lease holders, etc.) change following a disaster. The law, regulations and guidance are often rewritten or augmented to help prevent such an accident from recurring. Similarly, industry bodies such as the American Petroleum Institute (API) or International Association of Drilling Contractors (IADC) update their recommended practices to improve safety. These contribute to defining the new norm.

A more recent example is the Piper Alpha disaster. This was a North Sea platform that was engulfed in fire in 1988, causing over 160 deaths. This led to sweeping changes in the UK regulatory regime, including a change of governing body from the Department of Energy to the UK Health and Safety Executive. These changes reverberated around the world.
One of the new norms was that all pipelines to and from platforms were required to have riser emergency shutdown valves installed. Another was that all installations operating in the North Sea required a safety case.
The rub is that it often takes a disaster to cause significant change to the way industries manage risk and safety, including being prepared for the worst.
We can probably all think of situations where much safer ways were practicable but were not followed because of the mistaken fear of setting new and expensive industry precedents.
Perhaps we need more bravery to do what is right.

About the author

Morris Burch is a chartered Mechanical Engineer with nearly 30 years’ experience in the upstream oil and gas industry, predominantly as a process safety and risk management consultant.
He started his career at Shell and then joined Genesis as a risk and safety engineer and has spent the remainder of his time in consulting.
Beginning in 1997, he co-founded and grew several successful risk-related businesses in Australia, the USA and the UK – International Risk Consultants (IRC) and IRC Risk and Safety.
His breadth and depth of knowledge have come from working with most sectors of the energy industry, including majors, independents, engineering firms and drilling companies (MODU operators).
More recently he acted as a testifying expert (commonly referred to as an expert witness) in the discipline of process safety management for BP in two litigations associated with the Deepwater Horizon blowout.
He is currently an independent process safety management & risk consultant.