Jodi Goodall
EP
67

Safety Lessons From High-Reliability Organizations

In this episode, Mary Conquest speaks with Jodi Goodall. Jodi introduces safety practitioners to the High-Reliability Organization (HRO) framework and uses practical examples to demonstrate how the 5 characteristics of this approach can enhance workplace safety. Jodi will help you look at the system rather than the worker, and actually welcome safety failure.

In This Episode

In this episode, Mary Conquest speaks with Jodi Goodall, a High-Reliability Organization (HRO) expert who’s been in the safety profession for 20 years. Her operational experience spans mining, defense, explosives, heavy maintenance and logistics, and she’s currently head of organizational reliability at Brady Heywood, a consultancy in Brisbane.

Jodi’s approach to Safety is based on systems thinking and the practices of HROs - and she begins this enlightening interview by explaining which strategies and organizations this acronym covers.

She walks EHS practitioners through the 5 characteristics of HROs, using practical examples of how they can enhance workplace safety:

  1. Preoccupation with failure
  2. Reluctance to simplify
  3. Sensitivity to operations
  4. Commitment to resilience
  5. Deference to expertise

Jodi explains that HRO thinking is dotted throughout all the current safety theories, and calls for HSE professionals to focus on practices, not academia, and to recognize that we’re all heading in the same direction.

She believes there’s too much investment in traditional Safety approaches and encourages the profession to welcome failure, trust the workforce, and be less judgemental and more helpful.

Transcript

- [Mary] Hi there. Welcome to "Safety Labs by Slice." One way to develop a skill is to learn from an expert. The organizational equivalent for safety work would be to look at companies that are doing it right. And that's exactly what our guest today talks about within the framework of high-reliability organizations, or HROs. Jodi Goodall has been in the safety profession for 20 years.

Her operational experience spans mining, defense, explosives, heavy maintenance, and logistics. Jodi is head of organizational reliability at Brady Heywood, a consultancy in Brisbane where she collaborates with boards and senior management in high-hazard industries. Her goal is to help leaders shape their organizations to work in ways that prevent fatalities and major accidents.

Jodi's approach is based on systems thinking and the practices of high-reliability organizations, and she joins us from Brisbane. Welcome.

- [Jodi] Hi, Mary. How are you going?

- Good. Good, good, good. So let's start by talking about what you mean when you say high-reliability organization. Now, it may be because it's not my industry, but when I first heard the term, all these questions came to mind. Is every company in a high-risk sector like mining or aerospace considered high reliability? Are there HROs in lower-risk industries? Is there an official designation of some kind, or is it just sort of a shorthand for doing high-risk activities?

So, put all my questions to rest.

- All right. Well, I suppose every organization that's in a high-hazard industry would hope to and strive to be highly reliable. And, you know, the original theory was built off understanding...it was back from the '80s, and it was really about understanding what are the characteristics that are common in the, you know, that top of the cream of those sectors that get it right all the time.

And those sectors are nuclear, there are a lot of power generation, petroleum and gas, some parts of mining that have I suppose big process safety risks like strata failure, gas explosion, things like that.

Yeah. So there are certainly...you know, as like every industry, there's a spectrum of people who are doing it really, really well, and ones that aren't doing it so well. And they're the ones that seem to be in the news all the time, the ones that aren't doing it so well. But those ones that get it right, that are highly predictable and reliable, they have a lot of characteristics that are the same.

And that's what was originally studied back in the '80s. And, you know, they kind of came up with five characteristics. And those have been, you know, I would say the safety theories and practices that we're hearing about now. Things like HOP, you know, resilience engineering, all of those things fit beautifully into the high-reliability framework as well, but they're not characteristics that belong specifically to high-hazard industry.

You know, there are a lot of industries that are just really complex, Mary, as well. So, you know, you might think of those like healthcare is a really good example, emergency services. And, you know, what we've seen over the years is actually a lot of the high-reliability organization, I'll call it HRO, if that's okay. The HRO theory and practice and a lot of the papers that have been written in the last, you know, 10 or so years have actually come out of healthcare.

So those guys are really, you know, taking those best theory and the practice and building on it and testing it in their highly complex environment. So, in short, I would say it's a set of practices or characteristics that came out of high-hazard industry, but certainly being used across highly complex industries as well.

- So, it started as more descriptive, just descriptive. Like, someone studied these organizations and found commonalities between them.

- Yeah, that's right. Yeah. They came out with kind of five characteristics. They're really around that ability...the mindset of the organization to have a preoccupation with failure. And, you know, they're always chronically uneasy about yesterday's success and thinking that, you know, they know their organization changes all the time and it's very dynamic.

So, they're always thinking that failure is kind of on the cusp, and how will they identify that and manage it in advance? They certainly are very...you know, they're reluctant to simplify the information that's coming to them. So they don't dumb stuff down. They don't think about things as the, you know, what's immediately in front of them. They're always thinking about it from a systems perspective.

And they're certainly sensitive to their operations. So, you know, typical organizations will have this hierarchy where, you know, the leader believes that they have all of the information and understand what's going on. But in a HRO, very much thinking about what's going on at the frontline and transparency of information right through the system.

So, they're very sensitive to what's going on at the frontline. And that's kind of the three characteristics that make them be able to anticipate failure really well. And then there's a couple more, which are around resilience and having a real commitment to being able to contain issues and recover quickly. And then certainly deferring to the experts is something that you see, which is pretty unusual in a lot of other types of organizations.

- Yeah. Well, we're going to go through each one of those, and I'll have you explain it so we can really understand it. How did you come to look at HROs, and who should pay attention to HROs and how they handle operations? And why or how did it pique your interest?

- It piqued my interest when I was working back in defense, actually. I think, you know, the U.S. Navy and the U.S. nuclear industry are known as HROs. And certainly, there is a lot of talk in Australian defense about the concept of HRO. And, you know, back in the day, 20 years ago, I did study it at uni. And the theories make sense.

They really do. When you start to turn those things into practice, you know, it's very obvious that they're the things, you know, that in a complex environment, you can't just build a system and expect that it's going to work every day like that. You really need to keep learning about how it's changing. And so people that need to be really sensitive to this stuff, I would say, are the top-level leaders in a business.

These are theories that really help leaders understand why they can't, you know, just put a set of procedures in front of a bunch of workers and expect them to follow it because, you know, every day is different. They're constantly adapting all of those things that you would know from your other speakers. But so, you know, I think these are leadership theories that really help people dissect and understand complex business.

- Specifically with safety, fatalities are the worst possible outcome for any organization. So, you would think that businesses would pour a lot of resources into understanding the risk in potentially fatal areas or operations. Do you find that to be the case? And if not, why would that be?

Or if so, why would it be?

- Yeah, look, I work with a lot of organizations. Most of them are in kind of mining, oil and gas, but also regulators. And I think everybody has the right intention. You know, I really don't... Yeah. And there is a lot of resources and money and effort going into thinking about how do we get better at safety, but, you know, we're just creatures of habit and it's very easy to go back to the behavioral piece and think that it's the workers...you know, if we can fix the worker, we'll fix everything.

And I think that's the great thing about the HRO theories, because they really take it away from that. And, you know, HOP is beautiful in the fact that its human and organizational performance theories and principles really lead you to ask the worker, you know, how they adapt, what's not going right for them, that kind of stuff.

But, you know, as a leader, there's so much more that you need to build into your system to make it easy for people to...you know, for things to go right. And that starts back with how you define your key performance indicators, really understanding, you know, goal conflict.

There are so many pieces of the puzzle that a leader needs to be thinking about, but unfortunately, very traditionally, it's very easy to spend money on compliance and behavioral observation programs and things like that that, you know, we know don't work. We know they just give the leader comfort. It's just like you know, that whole concept of disciplining a worker if they make a mistake, that just gives the leader comfort, you know, but it doesn't actually fix the system.

And so, you know, I think we spend a lot...we're still spending too much time on traditional safety methods that we know don't work.

- I think it's interesting that, you know, on this show, we do end up talking about a lot of different sort of newer approaches, safety differently, new view, safety II. And all of that...and there are discussions about, you know, how they're the same, how they're different, which is better, which is worse. But it's interesting to me that it seems like this framework kind of already encompassed a lot of the stuff that we are now talking about is different.

- Yeah, you are right. I kind of do laugh at all the different theories because they're all bits and pieces of HRO theory if you ask me. And it's really easy to observe that when you start to just observe good practice in organizations. And you can see, you know, how beautifully the HOP principles fit into, as I said, number three, sensitivity to operations, because that's really what it's all about is understanding the worker.

And number four, which is commitment to resilience in HRO theory, that is resilience engineering. And, you know, safety II is dotted all the way through there. So, you know, I'm certainly not, you know, hardline on any of those. I think they all overlap, and whilst everyone's kind of maybe peddling their own thing, just kind of as an industry and as a profession, it'd be so lovely for us just to start to think about what are the practices associated with all of these theories and really just recognize that we're all actually heading in the same direction, and yeah.

- Yeah, I hear that a lot from guests is, you know what? It doesn't matter what you call it. Let's just move forward. So, let's get into these five characteristics as you've presented them before, I'd like to go through each one and just sort of ask you what does it mean, what is its significance, and maybe how does it look in practice?

So, the first one is a preoccupation with failure. What does that mean?

- Yeah. So. Preoccupation with failure is I suppose the primary thing that, you know, if we're working with clients, we always start with as well because it's the mindset that, you know, HROs have. And, you know, if we stand back and think about what a normal organization does, a normal organization, you know, they build systems and processes, incident...you can go into any organization, you'll find an incident reporting system.

You'll find risk management tools. You'll find a maintenance system that manages, you know, the equipment. You'll find KPIs. All of those kinds of things. It doesn't matter what organization you go into, pretty much they'll all have it. And so that's a normal organization. And then they'll have a big failure. Say they have a fatality event, Mary. What will happen is they'll bring somebody in, and what they'll find is

[inaudible] things they already knew: that the audits they were doing were already highlighting these issues. That they had a backlog of their maintenance. That they had incidents in the system that they've been capturing that are almost identical, that are having the same control failures as the fatality.

- And, you know, this is really common and it's repeated in all of the big failures over and over in the world. And HROs see all of those things that were found in the normal organization, they see those as warning signs of failure. And, you know, they understand that their system is really fallible and that they're continually going to find those things.

And so they see those as...they have a healthy skepticism about those warning signs, and they understand that they're the things that are going to prevent fatalities. So, they see them as good news and they're continuously looking for them. So, that's really what a preoccupation with failure is all about. It's really about constantly being vigilant and having a healthy skepticism about how your risks are being managed and recognizing that when you get an incident report that says you've had a failure, or when you get a bad audit or somebody says, this equipment doesn't work in this particular work scope.

You know, they see that as, aha, thanks for telling me. Amazing. System didn't work. That's great. We can fix that. You know, it's not seen as bad news or trouble or any of that kind of stuff. So, you know, and it's really difficult to get that.

So, if you're a leader in a business, and this is something that you see in mining quite a bit, is, you know, leaders will allocate a certain amount of equipment. They'll put in, you know, work teams that they think are competent. They'll overlay it with procedures.

And then when it doesn't go right, it's seen as bad. But actually, this is a really good opportunity to learn and explore. And, you know, this concept of chronic unease, which is preoccupation with failure really, there's chronic unease about having a healthy skepticism about how your risks are being managed. It's very much just very different to how a normal organization would act.

There are a number of practices that you see in organizations that encourage chronic unease, or preoccupation with failure, you can use those terms interchangeably. Sorry.

- Okay. I was going to ask at the end of this how chronic unease fits in. So, is it one and the same, or is it just a lot of overlap?

- Yeah. So there's obviously the preoccupation with failure, which is really about building...it's the number one part of the theory, but the mindset that sits with that is chronic unease. So, it's a psychological process. It belongs to individuals, but a good HRO will build the practices into the organization that make everybody have chronic unease.

And, you know, those kind of things are like, you would've heard this, encouraging teams to have really good psychological safety within the teams. So being able to express alternate opinions, report incidents, stop work if you're feeling unsafe or you feel like something's not right, you know, those kind of things. So, there's psychological safety.

You build practices around having systems that detect warning signs of failure. So, what you see in a normal organization is they compartmentalize all of their data sources. So, you know, you have your incidents separate to your critical control monitoring, which is separate to your maintenance, you know, equipment backlog.

But realistically, all of those things need to come together to be able to see the patterns of failure. And that's really, you know, something that is if you can build that into your organization and you can see those patterns, make them vivid, really easy, you start to have chronic unease naturally because you can see things failing, you know? So, yeah, psychological safety, detecting warning signs of failure.

There's a couple more. Oh, having a real questioning attitude. So, you know, you'll see HROs, when they're trying to encourage this preoccupation with failure, they're really questioning green days. So, you know, if they've had a run of success and things are going really well, that's kind of a time to have heightened awareness because they know that things could be being normalized, stuff like that.

- Yeah. And the last thing you see which encourages chronic unease and this preoccupation with failure is, fundamentally, in the organization right through, from the very top, the board, all the way through to the experts and the frontline experts, the workers, you really see risk competence. So, everybody does not just understand the risk, but they know how the controls fail, how these, you know, big events are caused.

There's a lot of talk and discussion all the time around failure and around how these things happen. And, you know, naturally when you're talking about failure all the time and keeping those things alive, it's really quite easy to have a chronic unease and be able to start to identify when things aren't going right.

Yeah. So it sounds like a pretty hectic place to be, doesn't it?

- Well, I was thinking a couple of things. One, that chronic unease sounds almost like anxiety, and yet the way you describe it, it's not anxiety. I think skepticism is probably a better word. And even like happiness at the opportunity to see these red flags and to learn from them. Like, it's a new relationship with red flags, I would say.

- Absolutely. Yeah.

- But I'm curious about how to make that shift, like that's a huge mindset shift. And changing one person's mindset is difficult. Changing an organizational mindset is really, really difficult. Do you have any observations about that?

- Yeah. When we start working with companies, they may have had a couple of serious accidents, but they really...you know, as I said before, if you can get all of the information that you've already got in your organization, all your data sources, and you can start to overlay those patterns, it becomes easy for the leaders to see that things are failing.

So, sometimes it's really just about making the data sources meaningful that you have and showing the patterns. And I can't remember her last name. First name's Linda. It'll come to me. She is a researcher from the Netherlands and she did a piece of work with the...must be the regulator over there.

And she looked at thousands and thousands of serious accidents and near misses. And what she found was...do you remember the old Heinrich triangle that everybody would talk about, where it's got fatalities, injuries, minor injuries, and that there's supposed to be a correlation there? You know, I think that's kind of been disproven in some ways.

But what she found is that you can take all of the little control failures in your business if you understand one primary hazard. So, when I talk about primary hazards, let's talk about electricity, for example, or gas would be another one. And you can look at...you can indicate and predict your higher level failures if you very clearly understand what controls you have in place to manage those failures and then you start to look for when those controls are failing, actually, they will give you beautiful patterns about when you're going to have a serious accident.

And so, yeah, so we help organizations really...it's passion of mine is starting to help organizations to really pull their data apart and start to look at it in terms of primary hazards and then deeply understand your risks, how you control them, what they look like when they're out of control, and what your sources are to collect that information.

And then you can start to see those patterns, and, you know, that would be one way of starting to get leaders because it's factual. And, you know, if you're working with companies that are highly engineering-based, things like that, you know, all the fluff goes out the window and they just want to see the data. So, you know, you start to show them that and talk about how that all fits together and they can see, you know, small failures lead to big failures.

So that's one way of making that link. The other way is, I suppose initially, as I said about building practices. So, you know, you can actually teach people how to have a better questioning attitude. And, you know, we get boards role playing, giving and receiving tough questions to each other and to do that in front of their executives and then their executives start to do that in front of, you know, other people.

And when you're rewarding those practices like they're the things you should be rewarding, not less incidents. But all of that drives the culture in the business of having a preoccupation with failure or chronic unease.

- I was just thinking there too, that if you understand...you can picture an organization where someone says, what is this measure? I don't know. It measures this thing. Why? I don't know. Well, that's the lack of, you know, the connecting the dots between, okay, what are our controls like, why are we measuring this, and what does it actually mean?

That's sort of the patterns that you're talking about in a sense.

- Absolutely. So, a couple of years ago, Grosvenor Underground Coal Mine in Queensland had a gas explosion that injured five really severely. And, you know, those guys will never be the same again. Some of them lost ears and, you know, had extreme lung issues and burns to a lot of their body and just very lucky that they didn't die.

Very lucky, came very close. But, you know, there was a lot of information in the public circle around that because there was a board of inquiry around it. So, it's a really good and recent case study on exactly what you're talking about. So, when you overlay the years before that event, they were having gas exceedances, so they were actually exceeding the limits to what would come into an explosive atmosphere.

And it had happened so many times and they were explaining it away in terms of they would find the individual control failure and they wouldn't link it back to a larger kind of inherent issue, and so they were normalizing the failures.

And, you know, if you don't have a clear understanding of each of your metrics and why you're measuring things and why they're at the limit they're at, then because you don't often get a bad outcome, it's very, very easy to see those as we had success rather than we had failure. And I think that's another example of the difference with a HRO: a HRO would see that as, oh, my goodness, I have a pattern of failure here.

I need to think about this and escalate this and do something broader rather than just band-aid the individual issues.

- Okay. Well, let's move on to the other characteristics here. So, the next one is a reluctance to simplify.

- Yeah, actually we've probably just tackled that one a little bit more with the Grosvenor explosion. So, you know, I suppose a reluctance to simplify the interpretations of the information that you're getting is really about this whole compartmentalizing thing. And it's also about simple explanations for failure. So, it's really easy for an organization.

We know that when we have a failure, it's very easy to explain it away by saying, you know, the last person to be at the scene was the person that made the mistake or made the error. And, you know, if we think about things in terms of just human error, what happens is we don't get the richness of actually the full systems failure.

And so the error will repeat itself over and over again because we are putting people back in that same situation to repeat that, you know, same issue. So, you know, HROs are really, really good at thinking about failure and about how things go right as well in terms of a systems perspective and understanding, even though the proximity of, you know, the senior leaders and the decisions they make might not be so clear to that failure, they need to look for those links and they're continuously doing that.

And if they don't have...the other thing you see with HROs, which is really interesting, is they might have a really good outcome. So, I worked with a mining company a few years back that they were mining...they were getting a higher level of their mineral that they were mining than they expected, and they saw that as a failure. They were like, okay, so even though we are smashing it here, and we are way above our targets, we don't understand this success, and not understanding is just as important as a failure.

You know what I mean? They see that as one and the same. And, you know, they're the kind of behaviors that you see HROs is this I must understand everything because that makes me more reliable, more predictable.

- Yeah. I mean, it would be so tempting to just say like, yay, where without understanding, like, I can absolutely see how that would happen. And also there's a difference between like...sorry, a difference between success and lack of failure, right?

- Yes.

- Like, what I mean is I guess what you're talking about, right? Like, this measurement is always higher than what we've been told is the acceptable tolerance, but there's a lack of failure. Nothing bad has ever happened, so we'll interpret it as success. That's the simplification there, right?

- Yes. Yeah, absolutely. Yeah, you nailed it.

- Okay, good. I'm glad. So the next one, and this is where I think HOP comes in, as you've mentioned before, is a sensitivity to operations.

- Yeah. So, it definitely fits in here. I mean, HOP is a set of principles. People make mistakes, blame fixes nothing. You know, the leader response matters. All of those things fit beautifully into being very connected to your frontline and what's really going on at the coalface or the work front. So, HOP is all about understanding how work is done and making it easier for successful work.

But, you know, the other part to that is the leader has a responsibility to transfer information and to be across all of those things to help them make better decisions as well. So, you know, whilst HOP is very worker-focused, you know, I think HRO is driving the leader to understand that they have a role in this process as well.

And it's not just about asking the worker what they need, it's about being the...providing good decisions and equipment and all of those things up front, like being proactive around that as well. And to really deeply understand that, you know, one of the things with HROs is worker experience and expertise is very, very important because, obviously, you know, you're working in high-hazard or complex environments.

And, you know, you think about healthcare, for example, the frontline workers are the most experienced. You know, they are the ones that totally understand what's going on. But the leaders very clearly recognize that they are a support mechanism, but they have a responsibility to make a great system for people to be able to respond and react as well. So, you know, I think it's HOP plus it turns it on its head and says, hey, leader, don't just wait, drive an excellent outcome and make sure you're letting, you know, senior workers as the people that need the information, current, up to date, you know, the best information and access to experts as well.

- Also, I'm sure you've heard, everyone has heard, of work as imagined as opposed to work as done. So, it sounds to me like HROs are just not even...I mean, they're not so concerned about work as imagined. They're really, really focused on work as done all the time.

- HROs?

- Yeah. Does that scan?

- Yes and no. I think as the work gets more complex and there are fewer ways to get success, you know, I would say that, you know, the black line, which is perfect system, perfect work, perfect day, and then the blue line, which is how people need to adapt to the system, they are very interested in understanding that. And for the critical steps of the job, so if you're working in, you know, creating chemicals, for example, there are critical steps when you're blending chemicals or refining things that absolutely can't be deviated from.

And so that's where the blue line can't be different to the black line. And they're just very interested in...the sensitivity is about understanding where is it you need to deviate because we need to understand exactly when you need to deviate and why, not you have that freedom to do it. There's also probably another level in terms of the black line/blue line, which is these are my critical steps along that line.

And they really, you know, I suppose leaders and workers are very clear on what those steps are, and there's transparency around when those things might need to deviate so that they can not deviate, I suppose. Yeah. So, sometimes it's easy, we allow work as done, but sometimes it's about really coming back to work as imagined.

- But, yeah, in the critical steps and critical controls, I suppose. Okay. Let's move on to number four, which is a commitment to resilience. And you've said this is resilience engineering, and I have heard this term, but honestly, we haven't discussed it. So I'm not...

- Oh, really?

- I can imagine what it might mean, but how does it look in practice?

- Yeah. So, resilience engineering's pretty new, I think. I mean, oh, it's not new in other fields...you know, physics and engineering and stuff like that, but it's certainly new in the safety space. And I think you might want to get onto David Woods and those guys. They're kind of the people that talk about this a lot. And I think it fits beautifully in here because it's really about enhancing the positive kind of capabilities of people to be able to do their job safely and really understanding that adaptation.

So, this is a lot around helping understand how people keep their controls reliable in the field and how they manage to most of the time do work not aligned to the process, but actually still have success. So, it's really about understanding that richness of daily work, but it's also about, you know, this expectation that things will go wrong and not denying that and planning for that.

What redundancy can we put in place to make things go right? What's our plan B? And really having that at a mindset of leaders as well. So, you know, leaders don't just kind of go, okay, we're going to do this work. We're going to do it at the same time as this work here.

Planner, go and make that happen. Everybody, you know, turns up on the day and you get your stuff. There's very much forward-thinking about what could go wrong. If that happens, what can we do differently to make sure that people are still safe and we fail safely? You know, how will we physically do this and still be successful when it doesn't go to plan because we assume it's not going to go to plan?

So, there's very much that forward thinking and, you know, usually a lot of people involved in that. So that's the first part that's really the anticipation of the failure and being able to respond or have a backup plan. And then the other part is really around having this kind of capability to contain failure quickly so it doesn't escalate into, you know, big consequence.

I think that's probably the right way to put it. So, you can fail and you can have no consequence, you know, near miss kind of idea. The really good HROs practice that a lot. And it's not just an annual, you know, fire drill, they're really getting into scenario-based training and starting to imagine how things can go wrong and putting these great scenarios together.

And actually, the mining company I'm working with at the moment is very good at this. You know, when they do their practices of underground evacuations and things like that, they plan all of these like, you know, little side issues to happen. There'll be people picketing out the front.

There'll be, you know, all sorts of things happening. But that's actually what happens when failure happens. You know, it's never straightforward. It's never like everyone can evacuate out through the main exit and they'll wait on the hill. You know, it just doesn't happen like that. Yeah, so just getting a bit of imagination, putting the complexity that really is, and then practicing that over and over is really, really important.

- I think emergency management and perhaps the military both know a little bit about that, like practicing disaster scenarios. And, you know, like, okay, we know what to do if there's an earthquake, great. What if it's winter as well? Oh, we didn't think of that.

- Yeah. Yeah. That's exactly right.

- The side issues that you're talking about.

- And most organizations, like, you know, when you go and do a management system audit which I never...that's not my passion at all. That's actually something I try and avoid like a passion. But, you know, I remember back in the day doing management system audits and they would say, "Have you got an emergency management plan?" And that would be what people cared about. But, you know, everyone's got a...what's the saying?

Oh, no, I always get this wrong. I think it's Muhammad Ali, "Everyone's got a plan until they're punched in the face." There's something like that. Really your plan goes out the window and it's all about your experience and your ability to work with the people around you and, you know, have a basic framework to be able to work together outside of the normal hierarchy, you know?

So, have you heard of the AIIMS system?

- No.

- AIIMS is like Australian Incident Management System, and it's actually this really cool little framework which just sets out roles that a company can use, but also like a national emergency response can use and, you know, all of that. And so they're all actually thinking in the same framework. And those kind of structures actually help you to be very, you know, resilient and to recover quickly because, you know, you don't have to go through all that thinking up front.

Everybody knows who the incident controller is, everybody knows, you know, who's managing the resources, you know, who's supposed to go get the trucks, you know, go and sort more water, all of those kind of things. And having those basic thinking frameworks that multiple companies and multiple organizations can work together is something that is really, really important in basic resilience.

- I think that's like incident command system here. But yeah, it occurs to me that it's like giving people the tools to improvise, knowing that they will have to.

- Absolutely.

- And then the last one is a deference to experience. So when I read that, I always think, whose experience? Anyone, everyone, have we been paying attention to the wrong experience maybe?

- Yeah. It's experience and expertise. So, you know, and those things are sometimes different. So, obviously, the expert is the kind of technical, you know, go-to person, the technical guru, and the experience is usually somebody who has been doing that for a long time and has seen all of the things that go wrong.

So, my uncle works in construction. He's a construction manager and he's always going on about the architect. So, the architect's probably the expert and the construction manager, you know, he's the experienced guy. And we were having this conversation, and he's like, he designs these things and, you know, they sound good on paper, and then I try and implement them and they don't work.

But then he knows that they're so much better when they work together and they get a much better outcome, you know, when they come together. And I think that's the same, you know, in any kind of experience-expertise type arrangement. So, you know, the fifth part of that is...I think what I'm trying to explain here is whose experience, whose expertise? It's a bit of both.

It's a bit of the technical and a bit of that long-term knowledge of all the ways that things can fail, and they tend to work better together. But what you find in most organizations these days, especially as cost-cutting has happened, Mary, is we tend to move our experts to these central functions. So, you know, our key engineers are not based on sites anymore or, you know, the people planning the work even who have expertise are not based next to the people who are doing the work.

And the challenge with that is as soon as you get that divide, you really start to...oh, I suppose you just don't do the work as well and you don't get the feedback loops back to the experts so that they plan the work better and better.

- You can't do the work as planned and you can't change the plan according to what's coming up in the work, right?

- Yeah, that's right. And when things do go really wrong, especially in, you know, situations where if you're at a control panel and you are dealing with a situation you've never dealt with before and it's not in the manual, you are relying on whether you've dealt with that before because you don't know how that control panel is designed and you don't know how that system is designed, and you need your expert right next to you to kind of have that banter and really solve that problem together.

And that's what HROs are really good at, is they recognize that their experts need to be very accessible to their frontline people, and, you know, they build close relationships with those people as well.

- So I'm going to zoom back out a little bit now and ask, what are the biggest challenges that you see facing occupational health and safety in the next 5, 10, 20 years? This is about as big a question as I can ask.

- Oh, look, I don't think things are getting any simpler. Certainly things are getting more complex. I believe AI is going to be a really big challenge. It'll bring so many good things as well. You know, I've got to say, I'm checking out ChatGPT every day, finding out more and more and learning more.

I think if we're not careful as a safety profession, we are learning a lot, but what I notice in companies is we're still not really transferring that learning to our leaders. And so, you know, resilience engineering, HOP, all of those things, they're fantastic techniques and tools and everything associated with that.

But, you know, as safety profession, we kind of have to decide what is best for our company and then teach our company the why behind it, whereas I think we're really good at just going, let's implement learning teams, let's do this, you know, and we don't really put the background to it and help our leaders understand why we're moving in that direction.

So I think probably one of the biggest challenges is we're learning [inaudible 00:42:09] as a profession, we need to bring everybody else on the journey. I think that's probably one of the big things and certainly technology.

- Yeah, the more I talk, I think it's persuasion...persuasion and teaching and understanding, because you have to get the resources to change the way things are done. So there's two ways to ask this next question. What single change in the safety profession do you think could have the most far-reaching positive consequences?

And the more fun way to ask that is, if you had a magic wand and were granted one wish to improve, OHS, what would it be?

- It would be to implant in every person's head that every failure is a systems failure, and almost never is it just the frontline...you know, the human error thing. It's something we talk about lots, but I still don't think it's sinking in. It's sinking into most of the safety profession, but not, you know, into daily leadership.

Still some really old-school thinking out there.

- We need some more cultural absorption going on.

- Yeah. At the board level. It's the board executive level that really aren't absorbing this. And, you know, it starts with...it's called the Australian Institute of Company Directors. And, you know, those guys almost are not listening. You know, they're the people that set the curriculum, almost, for boards. You know, theirs are the courses that people do to become company directors, and they're still in the old-school profession.

And, you know, anyway, it's very frustrating that, you know, people don't see these as the next way of thinking.

- I think there's sometimes a reluctance to change as well, unfortunately. But I think most humans have a difficult relationship with change.

- Yeah.

- What is giving you the most hope for the safety profession, or where do you see hope?

- As well as being dangerous, I think technology, because a lot of, you know, the problems with...if I think about the frontline, and just yesterday I was underground, I don't know how many kilometers underground at a coal longwall. And the ability for people to still get access to the things that they need to access in terms of the information, you know, is still a challenge and we're still very paperwork-based when it comes to the frontline.

You know, we're all techy everywhere else, and it's not just coal and it's not just underground, it's actually everywhere. It's remote sites. It's, you know, lots of things are... I think the biggest challenge is definitely still, you know...the way we can move forward is really making it easy for people to get the information they need in the format that they need it.

- Yeah. There's never been a better time really. So, I'm going to go to the questions that I ask every guest. And the first one is about training the next generation of safety professionals. So, in terms of interpersonal skills, what do you think would be the most valuable to integrate into safety curricula?

Like, a skill to develop that's really going to help safety professionals once they get out in the work world.

- The ability to influence. So I think it's, you know, just those basic kind of influencing skills. I think a lot of the time we're still very good at flinging things over the fence and thinking...you know, and judging rather than seeing ourselves as coaches and mentors and, you know, taking people on the journey. Yeah, I still see that quite a bit in the safety profession, and I think it's a shame because that's our role: to coach and mentor and get beside people and help them be better.

- If you could go back in time to the beginning of your safety career, what's one piece of advice that you might give to yourself?

- Look, it would be the exact thing that I just said there: to recognize that I'm there to help and that I don't actually know what goes on in the frontline until I go and ask. The other thing, you know, I've moved industries a bit, which I've done on purpose, very, very, very purposefully.

And I always said to myself before I started the job, I was going to go and spend a couple of weeks outside of safety just with the workforce. And, unfortunately, you're never really given that opportunity, or very rarely are you given that opportunity, to, you know, go spend two or three weeks out on the job just shadowing teams and learning about work and how things really operate.

Oh goodness, for functional people, and it'll be HR, it'll be safety, it'll be, you know, any of the engineering functions, any of that kind of stuff, if companies would recognize that the people who we're all trying to serve are the frontline, to make things easier and better for them, then it would be almost something you would build in: a month just on the job before you go into your role.

So that would be something that I would have done differently, I would've almost negotiated that.

- It would be a good onboarding practice for most roles really. So, how can our listeners...if they're interested in some of the topics that you've talked about and realizing that the idea of HROs is not new, but if they want to learn more, where would you steer them towards, books or websites or anything like that?

- I'd definitely steer them towards any of Andrew Hopkins' stuff, Prof. Andrew Hopkins out of the Australian National Uni in Australia. But there's also an awesome book called "Extreme Operational Excellence," it's the U.S. Navy, and it's by Matthew DiGeronimo and Bob Koonce.

And it's pretty new, actually. It's only a few years old, but it's such a good book. You know, what I love about it, Mary, is it's all practices. It just takes you through all the practices they use in the organization to really actually get high excellence, but it's all HRO theory, it's just beautiful.

- If our listeners wanted to reach out to you, where could they find you on the web?

- Yeah, I'm on LinkedIn. And also I work for Brady Heywood, so you can just jump on our website, bradyheywood.com.au, and check us out, or we speak quite a bit around the country and overseas. So, yeah, you can come and listen to one of our stories around big failures and then we usually dissect those afterwards and talk about the systems issues and the learnings out of those.

- Good. And we'll have those linked in the description. Well, that's a wrap for today's discussion. Thanks for listening, everyone, and don't forget to rate, review, and share the podcast. And we, me and the listeners, appreciate your time and insights, Jodi.

- Thank you very much. Appreciate the offer and the opportunity.

- And my thanks to the "Safety Labs by Slice" team, highly reliable since the beginning. Bye for now.