Special Episode
EP 79

Expert views on Measuring Safety

This is a special episode exploring the controversial subject of Workplace Safety measurement. Traditional metrics dominate this key element of Safety - but what impact do they have on keeping people safe? Are there more effective options for EHS professionals? We’ve compiled this collection of expert views to help you decide.

In This Episode

In this episode, we’re conducting a thorough investigation of Safety metrics to help HSE practitioners measure (and manage) the factors that most accurately reflect Safety performance.

We’ve compiled a collection of thoughts, opinions and ideas about safety data, analysis and reporting from 19 of our previous guests discussing various aspects of this diverse and divisive subject.

Many question the existing reliance on lagging indicators and proxy KPIs, while the value of Zero Harm initiatives is particularly challenged.

But what are the alternatives? Are they easily quantifiable, will they satisfy senior leaders, and where does psychological safety fit into this discussion?

This Safety Measurement special features multiple perspectives on these crucial questions from experienced HSE professionals, consultants, authors and academics. We can’t promise definitive answers, but it will help you come to your own conclusions about Safety metrics - and how they can best keep your co-workers safe.

Featured guests (in order of appearance):

Transcript

- [Mary] Hi there. Welcome to a Safety Labs by Slice special investigating the multiple dimensions of safety measurement. Safety professionals often need to provide metrics, but which metrics really reflect the effectiveness of safety programs? This is a divisive subject. Traditional KPIs and goals are under attack for creating unintended consequences that some guests claim decrease workplace safety. But if that's true, what are the best alternatives? We've compiled a collection of views about measuring Safety performance from our past guests: experienced HSE professionals, consultants, authors, and academics. We hope their thoughts, opinions and ideas allow you to focus on measuring what matters for safety.

- [Mary] Tim D’Ath, Head of Safety at Yarra Valley Water.

- [Tim] One of the most frustrating measures that I'm sure you hear from a lot of your guests is injury frequency rates and using those as measures of safety. And, you know, I like to use the analogy of health. You know, the absence of illness doesn't indicate the presence of health. And I think the same can be said in safety, that the absence of injury doesn't indicate the presence of safety. But the fact that 90% of organizations, if not more, use injury frequency rates as a measure of safety, makes it really, really difficult for those organizations who want to break that mold, drop that measure, and lean into more human-centric measures. It's a hard thing...big leap for boards and execs to make. So that's definitely one barrier, I think, because so many others use these sort of measures. It's really tricky for the minority to move away from them. I think another one...I'll use the example of zero harm. I'm sure that that comes up as well quite a lot. But with these zero-harm goals, my view of it is that if goals are unrealistic, but they can be achieved by cheating, then people will cheat. So the notion of zero harm's absurd. People are fallible. And that number of days without an injury as it increases, it becomes harder and harder for people to report. No one wants to break that streak. Meanwhile, you know, as the organization sort of tick off these 30 days, 60 days, 90 days without injury, with their barbecues and their lunches and other activities, you know, this false sense of safety paired with the lost opportunities to learn from the minor unreported incidents, you know, that increases the risk of injury. So there's this paradox there around zero harm and increasing risk of injury. There's been quite a few studies that have actually confirmed this as well.
So I think when using those two examples, outdated frequency rate lag indicators with these other popular goals around zero harm, it makes it really, really difficult for organizations wanting to break the mold.

- [Mary] Bob Edwards, Human & Organizational Performance Consultant at The HOP Coach.

- How do people in general and safety professionals, in particular, tend to respond to complexity?

- [Bob] Boy, that's a six sigma answer. It depends. Some people, I tell you this, I think the people closest to the work tend to do this. When we start talking about this stuff, they're like, yes, and we've known this all along, right? And thank you for acknowledging that this work is a mess, that we really are out here… A lot of operators are struggling out here. We're trying to get this stuff done. I think sometimes it's people higher in the organizations who, unfortunately, don't get to see enough of the context of work. Unfortunately, we do this to ourselves. We provide higher levels of metrics, KPIs, dashboards, scorecards, but no real context behind all that. So sometimes when people hear this, it can be uncomfortable for them. They're like, "Well, wait a minute. Our numbers are really good." Well, yeah. So are a lot of companies who have fatalities. So the numbers being really good don't mean anything to me. It's the context behind those metrics. And I think we got to get much, much better at being sort of open and transparent with the context. What does it really take to get that done? Best production day ever. Yeah.
We almost blew a unit up. Nobody knows that because nobody's talking about it. But the metric said best day ever. So, it could be hard to hear, but I can also tell you this, once people sort of grasp this, then they're not satisfied with simple answers anymore. Well, Mary, she just pushed the wrong button. Well, okay, but let's look at the control panel that she's interfacing with, right? Let's look at the production
pressure that she was under. Let's understand all the stuff that was going on that she was managing in that moment. And all of a sudden, it becomes pretty obvious that the reason we don't have a lot more events is because Mary normally doesn't push the wrong button. And so that can be hard for some people initially.

- [Mary] It makes sense that people higher up don't have the context. It's not their job. And it makes sense to some degree that they rely on data as a bit of a shortcut. Yeah, but shortcuts can lead to problems.

- [Bob] Todd says this, "Great leaders make great decisions if they have great information. A lot of times it's a data input problem." That's context. If I had the best production day ever, and I don't know that they nearly blew the unit up, all I see is it's the best. I'm probably buying pizza for everybody, right? Because I don't know what they actually had to do, because nobody's talking about that, not higher up in the organization, because they want to see those numbers and they run by the numbers, all that stuff, right? And I'm not saying get rid of metrics. I'm just saying metrics without context can be deadly. It's certainly dangerous. Maybe deadly is a little strong, but they actually can be. I have to know the context. So the more curious leaders get about context and the complexity of the work, which is where all that context sort of is, the less satisfied they'll be with just, "Hey, we made good numbers this month." They want to know, what did it actually look like to get those numbers?

- [Mary] Joelle Mitchell, Global Head of Psychological Health and Safety at FlourishDX.

- [Joelle] When incentives are built or introduced to drive particular behaviors, the question always comes, "Well, how do we measure whether or not those behaviors are happening?" And, you know, you're not going to have somebody out there dedicated to observing, you know, sort of making a tick every time they see somebody do that particular behavior. And so what we require is something that's a proxy measurement. And so again, you know, traditionally, we can look at things like the take five process on a site where people need to do their sort of, you know, spend five minutes, reassess the job site for hazards, see if anything's changed since they went on break or whatever it might be, that sort of thing. And so, okay, well, we want people to do these, so we're going to give everybody a daily KPI that they need to do two or three or however many a day. And so the goal there is we want people to be checking their work site for any changes since they went on break or, you know, since they finished their last shift. And this is the mechanism that we're giving them to do that so that we can monitor whether or not that's happening. And typically, what we end up with is that people will just pre-write them in the morning in the crib room before they go for the day, and sort of they'll write their three and they'll pop them in the box and that's it. They've met their KPI. So, what we're rewarding is just paperwork that's not actually achieving the outcome that we want to see. And there's lots of examples of this sort of thing.

- [Mary] Christian Hunt, Author of 'Humanizing Rules'.

- [Christian] We equate solving a behavioral problem with following a due process. So, we assume a connectivity between the outcome we're looking for and the process that we're imposing on people. And sometimes that is absolutely correct, right? If we can do things...constrain people's behavior in the physical world, then we can guarantee the particular outcome. But very often, the compliance processes that we've got don't always deliver or have challenges. And so if we want to focus on getting the right outcome, don't assume that because you've designed the perfect process, the job is done; maybe the world has changed around it. So, a process that works in the physical world might not work in the digital world. And so we need to think...you know, often we think, I've written a rule, job done, or I've done a training. Training's the best example. If I've done a training course, job done, they now know. If you've been running that training for five years, it may well be they're bored of hearing whatever it is you're telling them, and so it's no longer effective. And so we need to measure the outcome, not just the thing. So, people showing up to training is an input metric, not an output metric. Doesn't mean they've paid any attention whatsoever to what you've told them. And we often leave the...you know, we move to the thing where we say, oh, because they turned up to the training course, job done. Wrong. That doesn't mean they've necessarily listened, even in the best-designed training program. So, look at the outcome you're looking for; compliance is about that, not about going through the motions. And we often pick the wrong thing to measure because it's easier to measure, you know, attendance at training than it is, has this training been effective?

- [Mary] Clive Lloyd, Author of ‘Next Generation Safety Leadership’.

- I do want to ask you about some ways that safety leaders might inadvertently be undermining trust. Can you think of a few examples?

- [Clive] Yeah. And I like the fact, Mary, that you've got that word inadvertently there again because often the things they're doing are very well intended, but again, often with unintended consequences and unintended negative consequences. Let me rattle through a few because there's a lot. For example, a goal of zero harm. It would make sense philosophically. I mean, let's face it. We don't want a goal of a little bit of harm, right? Let's just have a little bit of harm. Let's just hurt three people. How about Nigel, Phillip, and Terry? That makes no sense at all. But the thing is psychologically, a goal of zero harm we know from the research is actually quite harmful. It's a binary goal. It's zero or it's not zero. Now, unfortunately, again,
the unintended consequence is some organizations have become a little bit overzealous. And then, for example, they become intolerant of incidents. Now, when that happens, we send messages out, and the message often received by the workforce is just keep quiet. And so we find reporting tends to go underground. Now, again, we can't fix a secret. Well-intended. And again, there's some fairly current research from 2018, I think it's called the Zero Paradox, that found that organizations with that zero goal tend to have more life-changing incidents and fatalities. Not less, more. And that is often a function of the fact that reporting goes down. We tend to be reluctant to, you know,
report incidents or report near misses. So, again, really well-intended, but just doesn't tend to work out. Over here in the mining industry, for example, organizations proudly display these things on their shirts, you know, zero harm, zero incidents. And I was at a site recently, they had beyond zero. I'm thinking, what? Beyond zero. Are they going to reincarnate people now? It all gets a little bit... Negative numbers. We're going to start reincarnating people and get...it's gotten a little bit silly. And so these are what I would refer to often in organizations as the safety platitudes. Often may be meant with sincerity, but they're viewed by the workforce as platitudes. The other big ones, of course, safety is our number one priority. Well, no, it's not or it is until it isn't. You know, priorities change and the workforce know this. Here's a recent case study. A company I was working with, they had a lost time injury, and quite a nasty one. But what they did, they brought this fellow back who'd been injured, they brought him back on light duties. Now, again, sometimes that works for both parties. It can be good to get people back to work and so forth. On this occasion, it was done purely for metrics. In other words, so it did not count as a lost time injury because, hey, zero harm, right? I don't want to spoil that. So, that's happened, everybody knows that's happened. And then the general manager just days after this was doing a state of the nation address. And first words out of his mouth to the workforce was, I just wish to remind everybody that your safety is our highest priority. And you could hear a pin drop. I know in their heads they're thinking, "Yeah, right." But think about the damage that is doing...well-intended, sounds nice, but what's increasing here is cynicism. What's decreasing is integrity and trust. So, in other words, we're actually just kneecapping the
culture from doing these well-intended things. I don't know how long we've got, I got all the rest of these, things like safety walks, Mary, which most companies do. Now, unfortunately, the way they do their safety walks has been sort of polluted by BBS, if you will. So, their idea of a safety walk is all managers, and they usually have KPIs around this, by the way, they'll don the hard hat, they'll put the high-vis vest on, and they're out there with a pad and the pen and they're looking for bad stuff, right? They're looking for violations. And so first up, it shouldn't be called a safety walk, it should be called an unsafety walk because let's face it, that's what they're looking for. Now, the KPI often is managers need to do eight safety walks a month, effectively, two a week. Now, very few of these leaders enjoy doing them for obvious reasons. They're not pleasant. So, what they tend to do is leave it to the end of the month, right? And then you get these swarms of leaders across sites all looking for bad stuff. And it's not like the workforce don't know they're
coming because they do this every month. And so their supervisors will be saying, "They'll be out soon. You know, go and have a tidy-up." So, leaders, number one, they're not seeing things as they actually are anyway. And where there's fear, you get bad data. It's just tourism. And so think about that. What is that doing for trust? They're out there looking for bad stuff. That's not helping trust, that's creating fear. They're not seeing things as they really are anyway because people have already tidied it up. That's what I would refer to effectively as safety clutter. But we keep doing it. And what we're doing is we're not helping... Now, all I say to the leaders is if you really believe you need a KPI around this, and by the way, you don't. If you really want to wreck safety quickly, just have lots of KPIs on it. How about this, rather than a KPI of these alleged safety walks, how about this as a KPI? I've said this to one leader. What percentage of your people's children's names do you know? And let's say that was 40%. What would that also imply that I've been doing? Well, I've been out there, I've been having conversations, God forbid about not even work or safety, just conversations, building relationships, building trust.

- [Mary] Jodi Goodall, Head of Organisational Reliability at Brady Heywood.

- You can picture an organization where someone says, what is this measure? I don't know. It measures this thing. Why? I don't know. Well, that's the lack of, you know, connecting the dots between, okay, what are our controls like, why are we measuring this, and what does it actually mean? That's sort of the patterns that you're talking about in a sense.

- [Jodi] Absolutely. So, a couple of years ago, Grosvenor Underground Coal Mine in Queensland had a gas explosion that injured five workers really severely. And, you know, those guys will never be the same again. Some of them lost ears and, you know, had extreme lung issues and burns to a lot of their body and just very lucky that they didn't die. Very lucky, came very close. But, you know, there was a lot of information in the public circle around that because there was a board of inquiry around it. So, it's a really good and recent case study on exactly what you're talking about. So, when you overlay the years before that event, they were having gas exceedances, so they were actually exceeding the limits where it would become an explosive atmosphere. And it had happened so many times and they were explaining it away in terms of they would find the individual control failure and they wouldn't link it back to a larger kind of inherent issue, and so they were normalizing the failures. And, you know, if you don't have a clear understanding of each of your metrics and why you're measuring things and why they're at the limit they're at, then because you don't often get a bad outcome, it's very, very easy to see those as we had success rather than we had failure.

- [Mary] Cameron Stevens, Digital Safety Transformation and SafetyTech Strategy Expert.

- [Cameron] So health and safety data sets, horrible, almost [...], almost 100% of every organization's health and safety data sets. And I say sets deliberately because you have so much data that could be considered data that can be used to improve design experience and safety of work. This is typically horrible. And let me explain a little bit. So, good quality data will be available when we need it. It'll be accessible, it'll be structured, it'll be in the format that we need, and it'll typically be...if it's going to have personal data, it will be managed from a privacy standpoint. It'll be secure. So, all of those tenets of good quality data, I generally don't observe when I look at a health and safety data set. So the good, rich, quality data will be on Jenny's computer in a secret spreadsheet. It won't be in the corporate database. So many reasons why that's the case, but that's an accessibility issue. You know, Jenny can have it on her computer, but Paul can't see it on his...he can't get access to that because it's only on Jenny's private drive. Or we've got a lot of checklist-type data, which is check boxes, but with no context, no pictures, no videos, no conversations, nothing to give us an understanding of why the no was a no and whether that means... We've got a lot of high-volume, low-quality data that offers almost zero value to a machine. So this is the other thing we need to...I think the question you asked is, you know, what value can health and safety obtain from artificial intelligence? Well, right now, not a lot because our health and safety data sucks. So, we have to be clear about what types of things we're hoping to achieve, and then make sure that our data is structured, stored, processed, pre-processed, available for a machine to learn from. We also have to appreciate that right now, we don't have what's called general artificial intelligence or general AI.
We have very discrete AI. So, the machines can learn or the computers will learn over time with reinforcement, typically with a human in the loop, but the original dataset is full of bias. It's our bias. We're the ones that generate the data. Sure, you can get sensor-derived data, so weather data, for example, but there's still bias in that data. There's a lot of bias in the data that we generate. And therefore, what the machines are serving up to us is full of bias. So right now, there's significant opportunity for what computers can do to assist us through decision making. But we've got a responsibility to provide unbiased data to the best of our ability, so much less bias in the system,
or at least know what the biases are, make sure that data is of good quality, and be in the loop. So the human is in the loop to reinforce, to support the machine to learn. Just like a sniffer dog going out and, you know, sniffing a whole bunch of different scents. We train the sniffer dog to learn which scent is the scent we're looking for and reinforce the dog to continue to look for that scent for us. That's all we're doing with data when we're machine learning with reinforcement learning. So if we can bring the principles right back to basics, this is data science 101. I was horrible at maths at school, so this is an uncomfortable place for me. Understanding maths and numbers is not my forte, but getting right back to basics really clearly understanding what are these machines doing for us. And then when you look at the data set that you have from health and safety data, you'll realize just how unlikely it is that we're going to get good-quality outcomes. So that's where we're at in health and safety. We also are really poor at determining what is the
value to the business because we are often challenged to say, what's the dollar return on investment? And that therefore, immediately aligns with the cost of an injury, which then goes to this whole therefore number of injuries that we save equals this amount of dollars in insurance. This is why we should use this technology. Really difficult way to demonstrate return on investment, which is why we see a lot of artificial intelligence, machine learning-type solutions used for advertising for things that are money generators because that's, you know, dollar return on investment pumps some money into the algorithm to serve up more content, more things that people would purchase. And that's a tried and tested approach. In health and safety, we need to be far more creative about how the business understands the value it's driving. So it comes down to data. That's the fundamental basics. So if data scares you like it scares me, you need to get out of that head space, be a little bit more curious.

- [Mary] Brent Sutton, Author of ‘The Practice of Learning Teams’.

- [Brent] In the latest work that we're doing at the moment, we are basically visually showing organizations where that drift is happening and we're showing them how weak those controls are because those controls are allowing workers to move outside that safety envelope, and those controls are allowing the presence of the energy or the hazard to also slip into those safety margins as well. And organizations love visuals. They love a good visual. Numbers have no meaning.

- [Mary] I was going to ask about the operational learning dashboard. That's what you're talking about, the visuals?

- [Brent] Yeah. Well, there's two components. The thinking around the operational dashboard was that
I don't think we can ask people not to measure stuff. It's just inherent. So let's give them something they can measure that can create a learning opportunity. So a lot of the data that we present doesn't actually create curiosity. It doesn't actually create learning. It's simply stating a fact. I did 35 worker engagements, we had 3 harms for this period. I really don't know what that means. But we had three harms this month, but we only had five harms last month. So, does that mean we've done better at not harming people, or has work changed? I mean, I don't know what it means.

- [Mary] Yeah. There's too many possible reasons. Yeah.

- [Brent] Absolutely. It's just a number. And I love it when we add some colors in as well because that just, you know, really emphasizes something. So what we're trying to do is we're trying to say how can we present data in a way that creates curiosity at the leadership level? And then how do we get leaders by being curious to actually drill down into that data to actually see the narrative and the stories? Because that helps them make sense of the complexity of the system, whereas the number can't. So for me, it might be that we had 35, 40 engagements for the period. Give them a number. But of those 35 engagements, these were the themes that came from those engagements. Here are the learnings that came from those themes. And here are the system improvements that resulted from those learnings. I don't know, is that evaluative?

- [Mary] I don't know.

- [Brent] Yeah.

- [Mary] I don't know if it's evaluative.

- [Brent] Because we're telling a story. We're basically saying, you know, every time we engage, we should learn. There should be a learning. But the bit that's missing sometimes is where that learning came from...what was the context of that learning. So hence, more and more, we're trying to explore it through the other lenses and sort of scaffold those leaders to start to move away from looking at symptoms and behaviors and outcomes, which is what our data currently does, to shift them back to conditions and work design.

- [Mary] I tend to think of the difference between data and insight in that the insight is the stories and the context and that kind of thing, whereas data is just sheer information without interpretation.

- [Brent] Yeah. So, you know, we did high-risk power work. We did 100 jobs. No one got harmed. I don't know, that doesn't tell me about what was the...you know, does that mean there was no variability across those 100 jobs? I doubt it. Okay. Was there zero making do across those 100 jobs? I doubt it. If I think old school, was there zero deviation against their rules? I doubt it. So that data, that statistic, doesn't give me any assurance or verification around that high-risk work. And that's where Todd talks about death hides in normal work because that data is not telling us anything around that. There is no context to it. Yeah. We've just tried to explore how to start that journey of looking at data differently. We don't have the answers. We are just saying if you start doing a context-driven element to it, there is an opportunity to learn.

- [Mary] Stephen Scott, Human & Organizational Performance Consultant at HOP Improvement.

- [Stephen] I was a brand new first-line supervisor in the cast house, so, what we started seeing on my end was every safety incident was a really bad negative event. Now, you probably think, "Well, of course, it was a negative event," but, I mean, we were issuing discipline, we were reprimanding people, we were counseling people. And that behavior, I was rewarded for doing that, you know, it was reinforced on me that that's how you hold people accountable, that's how, you know, we get to keep this performance going. Whenever there was an event that rose to the level of an OSHA recordable or a lost workday, it generated more scrutiny. So, what we found ourselves doing, you know, and talking to people with other companies, this is really, really common, we found ourselves trying to figure out any way we possibly could to avoid classifying something as a recordable or a lost workday. Case management was the term everybody used
because that number got so much scrutiny, it was part of our performance objectives. It was, you know, published all over the place, and anybody that wanted to know anything about that plant, that was one of the things that they saw. So, you know, while we felt like we were doing all the right things to try to manage safety, what we found was we were actively discouraging people from telling us about safety events. Until we had an event that was too big to hide, a lost workday case or a recordable where someone actually needed medical treatment. So, you know, over time, we really started to see that we were actively discouraging people from telling us stuff that might have allowed us to learn and improve and put better controls in place to prevent the next bad thing from happening. But that's kind of the trap we got caught up in with this drive for zero. It was really clear that zero was good and anything else was bad. And so, zero was the bar that we used to determine success or failure, good or bad, reward or punishment. And when that's the case, people just quit telling you things. It's almost human nature.

- [Mary] Dr Nektarios Karanikas, Associate Professor in HSE at Queensland University of Technology.

- I'd like to talk a little bit about evaluation. You mentioned that. So, let's say that I'm a safety professional entering a new organization. So, I've inherited a culture, safety management processes, and I need to assess how well it's working, where it might need improvement. So, I got a whole string of questions here for you, and I can go back to them, but where should I start? How do I define success or failure? And how do I even know if I'm asking the right questions?

- [Dr. Nektarios] Yeah, that's a very nice question. Very hard to answer it. Why? Okay. First of all, any evaluation of any system needs to have structure, meaning that we need to follow an approach from high level down to, you know, detailed levels of systems. Now, there is a disclaimer here. The more you dive into the system to look at the individual subsystems and elements, the more you miss the system perspective. So, we need to find the balance of how many things we evaluate, how many areas, and connect those areas. So, not asking how my audits work, but asking, how does the deliverable from an audit benefit investigations, benefit the risk assessment? How does the risk assessment process feed back into audits and investigations? If we want to evaluate, we need to be careful not to isolate aspects of the system and evaluate them separately. The second important thing, most of the time, we evaluate the quantity, the productivity of things. How many reports we received, how many investigations we closed, how many recommendations from investigations we closed, and so on. And we are missing two critical aspects. One is the quality of what we do and the other is the timeliness. So, I might implement, I might close off an action item in my risk registry, but maybe the quality of my action was not the best possible with the resources I have. Maybe I could have done better. So, how do we measure that? We measure that actually by indicators like the acceptability by the workforce, the visibility, and, you know, sustainability of our controls. It's not how many controls you have in place alone. And the timeliness. Do I really implement this control in time in my pursuit to protect health or do I discuss for two years trying to find, you know, the best solution available? Perfection, you know, is the enemy of good. And timeliness is very important.
So, those three areas must be assessed at the same time to give you a good overview of the system areas which must be once more connected. And the other important part of evaluation is how you define success. So, if the success of an investigation, and this goes to the design, how you design investigations, is to produce a report, then measure the report. If the success of an investigation, as you have designed it once more, is to welcome people, to invite people to tell their accounts and stories so that to help you understand the system and action, then you need to evaluate that. If the investigation is about blaming,
count how many people you blamed and you could be successful. You say, "Okay, we blamed 50 people. Yay." You need to have consistency between the real intentions behind each of the occupational health and safety processes and activities and their design: why you really have this activity and what you expect. If the health and safety management system expectation from senior management is to get the certification, then yeah, measure that. If the purpose is to improve health and safety by connecting under the same purpose and interacting with the appetite to share and learn, measure that. Sometimes, we design A and we measure B.

- [Mary] Dr. Peter Brace, Psychological Safety Consultant at Human Capital Realisation. Is it possible to measure psychological safety? I realize you can't sprinkle magic dust and make it happen. What are the things maybe you can or can't measure around psychological safety?

- [Dr. Brace] Yeah. You can measure psychological safety. Absolutely. There are a number of surveys that have been extensively tested and validated. You know, some are in the public domain. Professor Amy Edmondson from Harvard has published a seven-question survey, which is reliable in measuring the level of psychological safety. There are other proprietary measures that run up to 25 or more questions that look at different aspects of psychological safety. But what those surveys all have in common is they all
ask about how people feel. And, you know, those are quite easy questions to answer, right? Do I feel that my team's got my back? Do I feel that I can raise concerns to others in the team? All right. Pretty simple to say, I always feel that way, or I never feel that way, or something in between.

- [Mary] Stephan Wiedner, CEO at Noomii.com.

- [Stephan] I like being data-driven, hence my desire to even take this broad scope of interpersonal skills and break them down into more objective, measurable components. I would want to take a similar approach with this, where I think fundamentally, we want to build psychological safety. So, if people see the safety officer as, you know, the safety cop and someone that we can't really trust, well, I would start with assessing it. That's what I like about psychological safety, not the concept itself, although it is quite appealing to me, it's the ability to measure it. The fact that we can objectively measure it. And measuring it only gets you so far. What's really valuable after measuring it is putting the results in the middle of the room, and saying, let's have a conversation about this.

- [Mary] So, you can say, okay, look, I've been assessing this team, people, folks, and this is what I'm finding.

- [Stephan] That's right. And you can even go so far as to look at the seven questions that are used to assess psychological safety. And you can say, okay, these are the seven questions, and in order to produce a psychologically safe environment, we want everybody to be able to answer these questions in the affirmative. And right now we're not. So, what might we do different here?

- [Mary] Murray Ritchie, Author of ‘Seven Bad Habits of Safety Management’.

- [Mary] An organization can say, "We want you to be safe, but what we're going to incentivize is the speed of your work, not the safety." And so the organization is sort of setting the parameters of the trade-offs, choices, erosions, like all those things that the workers end up doing.

- [Murray] Yes. And incentivizing those successes of, okay, no injuries, go zero. Incentivize that. You don't get hurt, you get a bonus. And so, I can go into any organization, any operation right now, and I could get them to zero, like that. Just snap the fingers, I will get them to zero. They won't stay there, and they won't know they're not there until somebody's killed or seriously injured, right? It's what Aubrey Daniels calls safe by accident. That's not the hard part. And when you incentivize those things, you're not changing the behavior of the job, you're changing the behavior of the response. You know, when I worked offshore as a medic, and I'm not proud of this at all, the rig managers loved me because I could have somebody stitched up and have their stitches out before they had to get on the helicopter for crew change, you know, 10 days later. And there was no report. It's a medical treatment case, but there's no report. Why? Because if I put in that report, everybody on that rig is going to lose their bonus. So, better we just keep it here, we keep it localized. I've worked for offshore installation managers that were adamant: you do not say anything to anybody about what's going on out here. And who is it...Clive Lloyd, who says it best, "You can't fix a secret." And I love that. It's Clive's mantra, right? And he's so, so spot on: you can't fix a secret. So, incentivizing those things, that's a big part of what we're doing. And with BBS, it was, you know, we want to see so many positive cards. We want to see this many cards. You know, and I've sat down and looked at people's observation systems and said, "Okay, all of these are nonsense. They mean nothing." Right? But the ones that actually probably count weren't observations at all, they were interventions. And they're the serious ones.
You know, when they look at it, oh, we're doing better than worse, we've got more positive than negative. No, you don't. It's just easier to report the positive.

- [Mary] Moni Hogg, Author of “The New View Safety Handbook”.

- [Mary] How do you measure success when you are adopting a new approach like new view? Are there indicators for you that it either has or hasn't worked?

- [Moni] That's a great question. Look, one of the things that I think the safety industry, and perhaps at the governance level, I think the challenge that those organizations and those people in those roles have is that there's not a clear metric, not a clear way of measuring whether these things have been successful. It does tend to be, for the most part, anecdotal at this stage. And to draw an example, and I think this is not to be underestimated, I worked with a team recently where, after the intervention that we did, I went and had a chat with the junior level manager, and he said to me, look, after we did this intervention, what happened is the team have all of a sudden been going around with their chests puffed out, they've been contributing to the decision-making, giving their ideas, and they're just way more confident and way more engaged. Now, that's not something that can be measured, of course. And dare I say, and I think this is the challenge to all of us, is that we are underestimating the value of new view safety in our workplaces simply because of that, because it's hard to quantify. We talk about the results being felt, and it's really important that we get away from these quantitative measures and bring in the more qualitative measures. And there's quite a bit of science around that, you know, ethnographic types of methodologies, which I'm no expert on. But in actual fact, social science does back that as an approach for measurement. Having said all that, what's really exciting for us is that the due diligence index standard has just hit the streets. Sidney Dekker, Art of Work, and Clyde & Co, Michael Tooma and his team, have spent a couple of years researching and designing a more metric approach to measuring, you know, new view safety.
And it's not to measure the effectiveness of your interventions, but it's more to give you a good, categorized, well-formed understanding of the collective intelligence in your organization, and quite specifically the weak signals that you need to be listening to. Because one of the key drivers behind shifting to the new view is an understanding of the fact that the current safety management systems that we have are sending us strong signals, a.k.a. when things fail or when they nearly fail. But what we need to start to develop is more of an understanding that weak signals exist in our business that are pointing to the fact that something can go wrong, and our systems aren't delivering that to us. So, that due diligence index standard is linked to the six due diligence elements, which are based in New Zealand and Australian legislation, and I'm sure in the UK and America and Canada and so on, there'll be a similar construct around that. So, I highly recommend that anyone would look that up and have a bit of a dive into that and get an understanding of how you can start to build a dashboard and a platform by which your directors are seeing the value from new view safety and the collective intelligence and weak signals coming up from your teams.

- [Mary] Angelina Badri, Director at Universal Safety Wellness.

- [Angelina] I think it's, once again, people's own comfort in terms of what they think good looks like. So I'll give you an example. I remember, you know, I'm always one that wants the best for everyone. I always say, be the standard that others aspire to be. But, unfortunately, I've had these conversations and, when getting to understand people's motivations and what drives them and what good looks like for them, I'll be having conversations and they'll say to me, "Man, Ang, you're so harsh. You're telling us that these are areas we can improve." And, you know, because I'm a very honest person in terms of feedback, like in a positive way in terms of how can we improve and what does that exactly look like. And many times I'll get a response of, "Well, if you think that we're not necessarily doing that great, you should see the contractors that were on site." Like, we're doing better than them, or, hey, we're doing better than this company. They've had fatalities. We're not so bad. So, interestingly, when people talk about what does good look like, they tend to think of themselves as being in this space that's good because they're benchmarking themselves against really poor performers. And to me, that blows my mind. I don't pat myself on the back going, yay, you're doing all right because, you know, the people next door have had a shocking incident rate, but you're close to being, you know, the same in many ways.

- [Mary] Jerry Smith, HSE Compliance, Risk, and Project Manager at Vanderlande.

- [Mary] How do you know if you've been successful? How do you measure whether, or to what degree, employees are following safety procedures?

- [Jerry] A lot of people, first off, would like to say because the incident rates go down. Yeah, that's something there, but I don't trust that one. What I look for is that my people are looking. I have this management system online, and I can tell who's going in and who's going out. I see that they're looking for whatever, and if they can't find it, the next thing I see is the email that comes through or a phone call that comes through. Jerry, I can't find this. Jerry, where is this located? I've been looking for it. That's when I say, when people are trying, they understand the system, they're looking for those types of things, I'll go to the ends of the earth to help anybody who actually tries. So, that's my goal, and yeah, if you want to put a number to it or a quantitative measure, it would have to go to the lagging indicators. But for me, it's a leading indicator. When they go to the training sessions, and everybody has a month to do this particular training, and I'm at 90%, I think people are doing really well. If I'm at 60%, I think, you know, we've got an issue. What was it? Could it have been circumstantial? Everybody was on holidays, whatever. But you've got to judge it from the leading indicators: there's reporting, there's training, there's auditing, there's the people just trying. Those are my goals. That's how I judge whether I'm doing the right thing and it's successful.

- [Mary] So it is then, yeah, a bit of a more qualitative and almost, you know, a gut check experience.

- [Mary] Gareth Lock, Founder of The Human Diver.

- [Gareth] Leadership in organizations often aren't given the time to go and lead. They're managing paperwork, and it's easy to count stuff. It's much harder to go out there and have a conversation, and you come back and go, so what did I get from that? What's my metric? What's my product from that conversation? And go, it's knowledge. It's about shaping stuff going forward. It's about joining different teams. But that's not countable. It's like, yeah, you're right. It's not. Tough. You know, when we're talking in a complex space, counting doesn't help us. It doesn't. You know, unfortunately, some people want those metrics. And going back to 2015, I gave a presentation for some work that I'd done in the oil and gas sector. It was the inaugural IADC Human Factors Conference, and I stood up on the second day and I said, yesterday we talked about the fallacy of using these metrics like total recordable incident rate and those sorts of, you know, lagging indicators. And I said, you as the prime contractors are putting those requirements on the people bidding in, because they were talking about how do we move away from this. So you have to stop putting those metrics on, because the only reason they're coming into those is because they have to win a contract, so they come up with some fallacious number that says it's 0.9 or it's 2.7. Whoopee. You know, that's as much about luck as it is about the performance and, you know, what's going on.

- [Mary] Kym Bancroft, Managing Director of New View Safety.

- [Kym] My versions of safety sacred cows are things like...I like the...I call it the metric sacred cow, which is this idea that there are only a very few metrics that, you know, brilliantly tell us the state of safety in our organization. So for most organizations, it's LTIFR and TRIFR, which are king in the boardroom. And we make this core assumption that these metrics are telling us how safe our organization is. Whereas many researchers have shown, in published, empirical, science-based journal articles, that in actual fact those metrics aren't telling us about the state of safety. They're telling us about injury performance and how much time we've lost. But that's not actually telling us anything about how effective our critical controls are, and how critical risk is being managed, and so on and so forth. So, I like to really challenge that one. And I've done that in many boardrooms. And what I do is, I take the research to them to say, you know, research is showing that there's actually an inverse relationship between companies who have zero harm as a statistical outcome and their fatalities. So if it's a high-risk industry, you know, we're seeing that inverse relationship existing there. Which many people wouldn't think about, because they think, "Hang on, having these very high metrics is actually helpful to us." So once I've sort of challenged that sacred cow, the question, then, is, well, what metrics do we put in place? Because, you know, we need to replace it with something. I went through a process with the organization to redesign what metrics were important, what metrics would help us make decisions. And we landed on the due diligence index, which I think would just have to be a whole 'nother podcast talk, there's a lot to go into there.
We did critical control work insights, which is a process of evolving our behavior-based safety observations, which we all know and have used for many, many years, to this idea that we have the work as imagined, we think work happens like this, it's like a steady black line, and then we have the work as done, which looks very, very different to the work as imagined. And that's a blue line that's sort of going up and down. And sometimes that blue line starts to drift from the black line, which could mean innovation, or it could mean that we're drifting into failure, and perhaps a fatality or a serious incident or event is going to occur, which we don't want. So it's about discovering the gap between the work as imagined and the work as done. So that's a little process that we designed for any worker or leader to go into the field and find out where those gaps are, both helpful and hindering.

- [Mary] Mikel Bowman, President at Bowman Legacies.

- [Mikel] You're never going to be perfect. And there's no time like now to make your voice heard. But just be transparent when you fail, do the honorable thing and say, I failed, and then adapt and move forward. So stop thinking like you have to be perfect. Now, when we strive for that in the safety realm, you're not going to have a perfect record. I cannot stand it when I see the shallow mission statements that are put out there by organizations that say zero accidents. Oh my gosh, that kills me, because the day you have an accident, guess what? Now you don't get the pizza party. And, you know, I don't know if you've all seen it, it's been around for a long time, but Mad TV did a video about... Yes.

- [Mary] Yeah, that's exactly what I was thinking of.

- [Mikel] "Don't say anything. You cut your arm off. We won't get the pizza party."

- [Mary] Yeah, exactly. It's a comedy sketch, folks, if you haven't seen it, but yeah, that's the idea.

- [Mikel] But there's a lot of truth to that. You know, because of perfection, we will hide, and a lot of people, when that's expected, will not report. And we need the data. As safety people, you need the data, remember. And so, therefore, you don't want people to have to hide from you for that. Now, so we talked about perfection. What was the second part of that question?

- [Mary] Well, it was in terms of polarization and how...so you talked about how we define success, but what's the role of perfectionism when we're talking about these polarized or unbalanced kinds of approaches?

- [Mikel] Consistency in being transparent on where you fall short. You have to look at this. So often we want to plan it. Now, unfortunately, there's so much more that goes into this, because I think that a lot of organizations push their safety people to produce as well. They want production. And what they don't see is they're stifling a lot of creativity on your part, and they cause you to write policy all the time just to justify your existence. And man, that's what we call business adrenal fatigue, or safety adrenal fatigue. And people are just waiting for you to write the new policies so they can sign off on them and go, oh, you can't do this. Because then you're going to get stuck in trying to out-write human behavior, okay? Out-write, as far as policy is concerned, human behavior, and you're not going to do that. So, what you have to do is stop thinking things have got to be perfect. Now, if the organization that you're in expects that, there are ways that you can try to inspire them and teach them and educate them. Remember, your enemy is ignorance, but it might be good for you to get on LinkedIn and start networking, because if you're in an organization that is expecting perfection and they're living on that zero accidents moniker, if it can't be taught out of them or inspired out of them, maybe it's time for you to find a place that would better suit your mission.
