♪ [music] ♪ - [Mary] My name is Mary Conquest. I'm your host for "Safety Labs by Slice," the podcast where we explore the human side of safety to support safety professionals. We move past regulations and reportables to talk about the core skills of safety leadership: empathy, influence, trust, rapport.
In other words, the soft skills that help you do the hard stuff. ♪ [music] ♪ Hi, there, welcome to "Safety Labs by Slice." Accountability and discipline can be sticky issues at the workplace, not least because both words resonate with people differently.
For example, to some, discipline means punishment, while to others, it just means guidance. What's the right way to approach these concepts? Today, we're looking at the human and organizational performance framework, often called HOP. We'll start with how it developed and how it's typically implemented, and then move into what it can add to discussions of workplace accountability and discipline. To help explore this, I have HOP expert, Andrea Baker, with me today.
Ms. Baker has a bachelor of science in chemical engineering from Notre Dame. Her EHS experience has taken her into many different corners of the world and the safety profession. She has worked extensively for GE as an HOP senior expert, as a manufacturing and logistics EHS leader, and also as an engine assembly, test, and overhaul EHS leader.
Andrea is the founder of the HOP Mentor, a consultancy dedicated to the integration of HOP principles into business systems. She offers culture change planning and support and trains clients in the fundamentals of HOP, as well as training learning teams and coaches. She's also the co-author of "Bob's Guide to Operational Learning: How to Think Like a Human and HOP."
Andy joins us today from Salem, Massachusetts. Welcome.
- [Andrea] Thanks, Mary. Happy to be here.
- So, let's start with an understanding of what HOP means. I noticed on your website that you emphasize the word "and," in human and organizational performance. So, what's the significance of that in how you want people to understand HOP?
- So, HOP is a really fascinating discipline because it's really emerged out of practices that have existed for a while, but it's emerged out of using those practices and then learning from them. So, many people have heard about the human performance space. And oftentimes, if we're looking at the history of human performance, that brings us to look at the nuclear industry or the aviation industry and sort of understand the concepts of error prevention, error reduction.
And in learning about how we actually want to be working in that error-reduction prevention space, we discovered more things about how an organization has to respond in order to even allow us to improve. And HOP emerged from recognizing that, you know, all of the things that we've done in the past, whether we're talking about human performance or, you know, the study of human factors, or if we're just talking about thinking about things through a compliance lens, all of those pieces have led us to make improvements and HOP takes a look at all of them and says, "Okay, what do we want to do next? What have we learned from doing all of this, from actually executing these different types of practices, and where do we want to go next?"
And a lot of that has to do with how the organization functions, how we're actually communicating with each other, how we, as leadership teams, set ourselves up for response to events and response to different signals within the organization of when and where we should be improving. So, that's the organizational piece as it kind of expands our thought process beyond maybe a set of tools or beyond a set of prescriptive requirements and moves to how do we actually use those various pieces, recognizing that we exist in an organization with a bunch of people that have a bunch of different opinions on how the world functions.
- Okay. So, now, I'm not a safety professional, but in the course of interviewing various guests, I've heard a lot of different terms. So, like New View, Safety Differently. A lot of them seem to be influenced by the work of Todd Conklin and Sidney Dekker. Are these all synonymous with HOP or is this what you're talking about when you're saying, you know, taking different frameworks that we've already been working with and putting them together?
- Oh, that's a really good question and probably hard to detangle. And probably the history, whoever writes the history book on it is going to make the determination as to what that actually looks like, right? Because at least from my seat, what it seems like is that there are many, many similarities between those terms that you have, right?
So, the New View, or Safety II, or Safety Differently, HOP, there are more similarities than there are differences, and I think that there's a slightly different flavor to each of them depending upon the background of the individuals that help speak to the concepts. But all of them are based on things that we've learned in our safety journey, you know, throughout industry, right?
So, they all have sort of the same starting point of looking at where we've come from, kind of the gaps that we've seen in improving, but they have a different flavor depending upon, you know, the advocates that speak to them.
- Right. Which makes sense, right?
- Yeah.
- Everyone comes from a different kind of background. I have a quote from you that I'd like you to expand on. So, you say, "We're living with the ghosts of a global industrial culture that undervalues its workers." Can you tell me more about that and how it relates to HOP thinking?
- Yeah, absolutely. And I'll tell you where that quote came from too because in trying to wrap your head around what is this stuff in this HOP space, it can be difficult. It can be difficult to articulate. It can be difficult to even understand personally as to what are the various aspects of what we're trying to improve and how we're trying to improve.
And in my own understanding, my own personal journey in this HOP space, I had this moment of recognition that a lot of what we were doing was wrestling with these ghosts. A lot of what we needed to improve on the organizational side was understanding where we came from and the fact that unfortunately, throughout the history of industry, we have dehumanized groups within industry.
And so, there's a term that's thrown around. If you google the term, you won't get the definition that I'm about to tell you because it's a slang term. There's a term called Taylorism, which is actually a big piece of what human and organizational performance is trying to break. We're trying to break Taylorism. Now, if you Google what Taylorism is, you actually find the person that this is named after.
It's named after Frederick Winslow Taylor, and he wrote a really influential paper during the Industrial Revolution called "The Principles of Scientific Management." Taylorism, by definition, refers to his principles of how you would be running, let's say, a factory setting. Taylorism, the slang term, is an ism like any other ism, like racism, like ageism, like sexism.
It is this improper belief… And Mary, have people talked about this before on the podcast?
- No, they haven't, so this is fascinating. When you said Taylorism, I'm thinking, like, a tailor, like customizing. So, I'm glad you're going through it.
- And I'll help explain the rest of it. Yeah. So, I don't know if your listeners have heard this before, but I will do my best to do it justice in the explanation. So, the reason why this term was, unfortunately, named after him is because in this extremely influential paper, which I think was written in 1911, he helped industry understand how to manage itself.
Meaning, if we picture what was happening in history at the time, right, we were trying to create factory settings and we were going from small-batch production of things to larger-scale production of things. And in order to do that, you had to bring more workers into the workforce. Oftentimes, people were coming from, you know, farms, or they were blacksmiths or cobblers, and they were coming into a completely different environment and they had to learn completely different skills.
Many of the things that he has in this paper, we still use today, and they are good principles of management, meaning things like if you're trying to create a product, he suggests you don't teach someone how to make the entire product, you teach somebody how to make a piece of it, and then pass it on to somebody else who makes a piece of it. So, those types of suggestions on how you run...
- The assembly line.
- Correct. So, the things that we take for granted as to how we make things today, right, he was one of the first people to write it down. But also in the paper, he had very strong opinions about the folks that were coming into the work environment, the workers, and that is laced throughout how he describes things. And it set up this improper belief that the planner, or the manager in this case, is smarter than the people who are executing the work.
And from the dawn of industry, there's been this divide of sort of this "us versus them" planners on one side, managers with that group, and then the people who are executing the work, and it's permeated how industry runs. And unfortunately, it permeates our thought process. So, I mean everything from how HR packages are set up, you know, to how we communicate with each other, to who's allowed to be in a room at what point, to how we even talk about each other.
It comes from this really divided space and with that divided space, in addition to all of the very obvious negatives that come from it, there also are these subtleties that make it very difficult for us to improve. Because oftentimes, those of us that are in a manager or planning space, we believe we're already supposed to know the answers to how to solve things.
We believe that, you know, the folks that are coming into work, they're there to execute to the plan that we've created, and that's what their function and their role is. And we ended up inadvertently creating parent-child relationships as opposed to adult-adult relationships.
- Yeah. I was just going to say it sounds very paternalistic.
- Yes. Yeah. Yeah, that's what those ghosts are that I'm referring to, is how do we recognize that in most places, in most industries across the world, this divide exists? And then, how do we recognize how that's actually affecting the way that we manage, the way that we try to execute our work? And what are some of the things that we can do to break that ism and move on from it so that we can actually work together in adult-adult relationships rather than parent-child relationships?
- Which I can absolutely see how that would affect safety too, when you talk about, or I've heard people talk about, work-as-imagined and work-as-done.
- Yes.
- So, what's the history of HOP? Like, how did its practice develop in the safety profession? Or, did it? Maybe it didn't develop specifically in safety.
- I think its roots are in safety. I think similar to this notion of, you know, human performance, human performance was the recognition in the nuclear industry that, you know, something like Three Mile Island, which, you know, we were lucky, right? We were lucky that it wasn't a full meltdown.
That there are aspects of how our system is run and aspects of how our people are operating that system that we need to look at. So, human performance was one of the first times that we really thought about the interface between the person and the mechanical part of the system. And then when you tie that together with the study of human factors, you get really clever understandings of what we should be doing, but it was also very much focused on procedural adherence, right?
So, making sure that our procedures are as accurate as possible, procedural adherence, creating the ability to identify and remove error traps. That is the space that many people live in when they think about human performance, right? Those are the pieces that they take away from it. HOP grew up in this same safety space, but it grew up as we learned from what works well and what doesn't work well with this notion of error prevention and mitigation and whether or not that's the whole story.
And realistically, what we found out is that if you think about error across the board, we might define error as when somebody unintentionally takes an action and certainly, they don't intend the outcome of that action. But then there's also this whole other category of things that happen, actually everywhere, not just in industry, but let's stick with industry since that's what we're talking about.
And I put the label of a mistake on that instead of an error. And a mistake is when we actually intentionally take an action, we intend to do something, but we don't intend a negative outcome and yet we have one. And so human and organizational performance started to think about errors and mistakes whereas historically, we might have only been thinking about errors, meaning I didn't even intentionally do that.
- So, can you give me an example of each just to clarify in my mind?
- Sure, yeah. Yeah. So, a super simple example that we can think about from, like, our daily life, let's imagine we're driving a car and it doesn't have... It's an older car. Let's say it's eight or so years old, and it doesn't have any sort of fancy blind spot detection.
And so in driving it, we need to be checking our blind spot before we change lanes. If we change lanes and don't check our blind spot but we intended to, we just sort of plumb forgot, which happens maybe 1 out of every 10,000 times we do something routinely, we would consider that an error. I made an error.
It might have a consequence, it might not, depending upon if somebody is in your blind spot. But let's say, I'm sure we can all picture a time in which we intentionally didn't check our blind spot, because at that moment in time, for some reason, we were 100% certain that nobody was there. That maybe we just...
- That no one had just come out of nowhere.
- Nobody had just come out of nowhere or maybe we'd just left a second ago, right? Maybe we're on a dark road somewhere and, you know, it's easy to see headlights. But for whatever reason, we were 100% convinced at that moment in time that nobody was there. If somebody is there in the blind spot, we would consider that a mistake. I made a mistake. I should have checked it and I didn't. And so there's this interesting space that happens quite frequently within human interaction, right, whether we're talking about industry or otherwise, of us making mistakes, which we put labels on later depending upon our mindset.
We might call it a deviation. I deviated from a policy. I might call it a violation. I might call it an adaptation, right? I might call it a mistake. I'm going to put labels on it depending upon sort of how I emotionally feel about it, but the piece of it that's all similar is that no one intended for a bad outcome and at the time that they made the decision, it felt like the right decision to make.
And so HOP incorporates that thought process as well and helps us understand what to do with that information. What HOP doesn't talk to, I mean, we talk to it but it falls outside of the realm of how we would be managing in a safety space, is something like sabotage. So, sabotage is when we intend our action and we intend the negative outcome.
Well, that's like a different story, right? We remove that person as quickly… - That's, like, fraud.
- …as possible from your organization. Like, yeah, let's move on. Yeah.
- Yeah. Okay, so it sounds to me like HOP offers a different view into human motivation and systems design, like how those two go together. So, I'm wondering if you can comment on the HOP point of view into some of these kind of fundamental questions. And actually, I think you have already started, but I'll ask anyway in case you have more to add, which is why do people break safety rules or any rules for that matter?
- Yeah. And that's kind of the crux of a lot of the discussion that we have in the HOP space, because there's a big answer to that question and it's a nuanced answer, but it's also very, very simple. We break rules because either we've made an error, meaning I didn't even intend to break this rule, right, I didn't even, like, there was nothing that I intended to do, or at the time that we did it, we had a local rationale.
We had a reason, a logical reason as a human as to why it made sense. And that can be for so many reasons. It's hard to list them all, but I will list off a few of them, right? So, that can be because at the time, the rule doesn't seem to apply to the conditions that exist in this moment in time. Or if I did follow the rule, I would be trading off with something else that would make it not possible for me to execute to whatever my goal is.
Or in following the rule itself, it requires me to use energy that I don't feel as though the energy tradeoff is actually helpful in this circumstance, so I'm going to find a more efficient way of doing something. I think maybe the most important piece of this view of sort of rules and breaking rules or not breaking rules is the recognition that the world that we live in, in industry or otherwise, there are variables, and movement, and changes in the system that we're in that are not predictable at the time that somebody writes a rule.
And so independent of how good we think a rule is, I think if we just take a moment and self-assess, there aren't too many rules that exist in life that don't have an exception, right? It's very hard to find one. So, to think that we could govern all the time always by a list of stagnant rules would be very hard. And we see it all the time, like even in society, right?
We see, like, how our laws are written versus what actually happened, and does it make sense to enforce this law this way because of the person's actual context? Like, we see that trouble everywhere. And in our organizations within this safety space, or quality space, or operations space, except for a very small number of people, few and far between, we're not dealing with people who are trying to break a law or purposely breaking a rule just to break a rule.
What we're dealing with is a set of expectations or guidelines that were written, and they live in a space that's different from reality. And when I'm faced with the variability of reality, not all the things that were written had, at the time they were written, all of the thought process, and assumptions, and context that I have to have in reality.
And so there are many, many, many cases where what's written down and what I would actually have to do to be successful are not the same. Actually, more often than not, they're not the same. And so oftentimes, we label this as rule-breaking whereas in reality, it's adapting to real-life conditions to be able to create success.
If we only followed what was written on paper... some of you have experienced this if you've ever seen something called malicious compliance, where people only follow what's written on paper to prove a point. You find that we're not very successful, if successful at all, right? We can't actually execute if we just follow what's on paper.
And so we can't have it both ways. That's the truth. We can't say, "Oh, adapt to create success, because you knew what I meant, but also follow all the rules and never violate anything if I think that it was a bad idea retrospectively, even though at the time you made the decision and you thought it was a good idea."
We have to figure out how to navigate this space in a way that is a little bit fairer than that and it's not just the judgment of a person who either liked or disliked the outcome. Because realistically, we all wanted a good outcome. You know, we don't have people sabotaging our system right and left, we are trying to all get work done in the best way we know how with the information we have at the time.
- Yeah, okay. Yeah, so many things come to mind as you're talking, like...
- I don't know. Did that even answer the original question? So, I hope so, but...
- I think so. So, I've got four things that I wanted you to comment on, and I know they're going to be intertwined. So, I'll just go down to the second, which is, what do you think is the traditional view of causality in human systems and how is that maybe flawed?
- Yeah. It's a really good question. I don't know how long you spent on these questions, but they're quite good, so credit to you. So, there's a couple of different things. So, it depends upon how deep we want to get in this conversation. But maybe the easiest way to explain it or the most complete way to explain it is that for most of us, myself included, within industry, we were inadvertently taught to see how we manage in the world around us within an organization as what we would consider an ordered system, meaning that cause and effect is fixed, it's known, it's predictable.
So, if you were going to try to manage something that would be an ordered system, how you would think about it is this, you would think about, okay, we have to design a really smart piece of equipment. Meaning that the equipment itself needs to fail safely if we're talking about safety environment or quality environment. And then we need to design a really good operating procedure that allows us to interface with that equipment.
And then we need to train people to that operating procedure. And then we need to hold structured audits so that we make sure that the mechanical parts of the system and the people are doing what we thought they were doing, right? So, the mechanical aspects are functioning properly and the people are following the process that's outlined. And kind of the thought process is that if we do all of those pieces really well, nothing bad should happen. And yet, in a lot of places, we do do all of those pieces really well and we still have operational upsets, right, we still have safety issues, we still have quality issues.
And so the question became, what are we missing in our understanding of how we manage, of how we set things up? And what we were missing is that actually, that's not how the world works. That is a description of an ordered system. Ordered systems are things like the entirely mechanical aspects of our business. Cause and effect is fixed, things are predictable, things fail linearly, A always causes B always causes C, right?
So, it is a known...there's a box and there's knowns inside of it, right?
- It's like a computer algorithm, right? Like, it goes...
- You got it. Right. It's going to happen and it's going to happen this way, because it's following the laws of physics. So, just picture something mechanical, it's following the laws of physics. Once we put people into the mix and we put real-life and environmental factors into the mix, it no longer is neat and predictable like an ordered system. It's actually called a complex system. In a complex system, things are not predictable like they are in an ordered system.
You can see patterns, but you can't predict that A is going to cause B, which is going to cause C, and it certainly doesn't happen that way all the time. A good way of just mentally grounding ourselves in what a complex system feels like is if you have children. Humans, our interactions are complex, right? So, you know that not every time I do A is it going to cause B or C when we're talking about how we talk to each other.
And specifically, it's really easy to see with children because there aren't a lot of social dynamics of trying to cover up behavior, you know, putting on a show, right? So, you can actually see clearly through to motivations and interactions in a way that's harder to see adult to adult. So, in a complex system, we can't predict everything, which is why it's so difficult for us to know even how to manage in a complex system. Although we have written all of these really nice things, as soon as you push an operating procedure or a piece of equipment into the real world, things start to change and they're constantly moving.
So, that really annoying adage that "change is the only constant" is actually correct, but we don't necessarily have tools and thought processes that allow us to know how to navigate that well. Most of the tools that we have for, like, risk management, they're fairly stagnant and they're based on cause and effect.
But then in the real world, it's like trying to fit a square peg into a round hole. Because oftentimes, even what's written down as the way we're going to manage a certain set of risks or a certain set of hazards doesn't apply in every situation that people are faced with. And so you're like, "How in the world do I understand all of those various scenarios, and then recognize that they're constantly changing, when every tool that I have is fairly stagnant in how it's created and how we talk about it?"
So, that's the piece that HOP is founded on, this recognition of not only the Taylorism piece that we're talking about, but that we are functioning in a complex system. And so not everything that we have done in the past makes a ton of sense to only focus on in a complex system because cause and effect is not fixed.
- So, yeah, and I think you actually just answered perfectly what my next question was, which was, how do you define a perfect procedure and how does that relate to systems design? And as you're speaking, I'm thinking to myself, you know, if there was a way to make it perfect, any operational procedure, we probably would have figured it out by now.
It's the humans, right? The complexity.
- So, the part that's pretty amazing is that it's actually the humans that are making our procedures work so well, and yet, we also point to them as the problem.
- That's true actually.
- Yeah, so the humans are the ones that are taking...
- Yeah, because they have context.
- Right. They have context, they can adapt in real-time. So, we as people, whether we're talking about, you know, as in a management position or a planning position or, you know, a quality position, or we're executing because we're close to the work or physically making something, we take the messiness of what is written down as documentation and we make it work to the best of our ability within real life. And sometimes, we make really good decisions and sometimes, we make decisions that in retrospect, we'd be like, "Man, I wish I knew more information at the time I made that decision because I don't agree with it anymore and clearly, nobody else agrees with it either. Like, it didn't turn out the way I thought it was going to."
So, a perfect procedure is actually not necessarily having a perfect procedure. A perfect procedure is being able to integrate that procedure with really tight feedback loops, where we're learning in real-time from the people who have to execute it as to where they're seeing difficulties, and we can start to see patterns which allow us to improve the procedure. It's never going to be perfect, but that allows us to improve the written documentation.
Resilience, and I'm pretty sure, you know, I've heard Todd say this, I'm not sure if it comes directly from him or if it's from somewhere else originally, but it's the recognition that resilience in our system, it's not about designing something perfectly. It's the speed with which we can learn and improve that allows us to create something that's resilient. And, of course, we all have continuous improvement programs and actions, right?
And we've all said for years that they're super important. I think that I've learned that the type of continuous improvement that we were doing, at least where I came from, and what many organizations are seeing is not really the same type of continuous improvement that is necessary in a complex system and we weren't necessarily getting the information from the right places.
We would try to learn but we'd learn from an outcome or we'd learn from a written documentation as opposed to learning directly from the people who hands-on, day-to-day are dealing with a version of reality that many of us don't have access to, right? Many of us in a management or a planning position, we don't do that thing day-to-day. And so there's absolutely no way that we could understand all the nuance of what it looks like to deal with trying to make a process fit in real life over and over again.
Yeah. So, that's this piece of HOP of making those really tight feedback loops, but also recognizing that we need them in a way that I didn't understand originally at all, right? I thought just kind of good planning and good procedures could take care of a lot of it, and I've learned through my own practice that that's not nearly as true as I thought it was.
- Yeah. I think it's good that you're pointing out the positive aspect of that, is just it's essentially adaptability, right, and how the person doing the procedure is making meaning based on the context that they have, based on the information from this procedure that almost certainly doesn't take into account all the nuances that, you know.
And so with their best intentions, they're making meaning from sort of what is intended by the procedure but still meeting the goal, hopefully safely.
- Hopefully. And most of the time, yes, right? So, that's the incredible part is we are successful so many more times than we're not and yet, we have this fixation with learning from failure, which is good, but not the same fixation with learning from success. Because we've made this assumption, or at least I did, and perhaps other people haven't, but within industry, we tend to make the assumption that the reason why something went well is because of all the stuff that we talked about, because of that design, because of that procedure, because of that training.
And so when things go well, we don't necessarily look at it or learn from it where in reality, the reason it went well is because someone adapted in real-time to make it go well. And we have more of an opportunity to learn from that because there are so many more times that things go well than when they fail.
- Yeah, true. True. So, the last one is complacency. Where does that idea fit into all of this?
- So, it depends upon what type of definition we use around complacency and how we're actually picturing it. So, I would talk about it maybe in two different buckets of what complacency means to us, if we were going to think about it in terms of synonyms or what it looks like in the real world. So oftentimes, we use the word, complacent, to describe something that's actually not complacency.
We'll say, "That person became complacent," and we could replace that in the sentence with the person sort of lost situational awareness, they weren't actively paying attention to what they were doing, and we would use the term, complacent, to describe that, oftentimes because it's like a drop-down menu option on a piece of software somewhere, right?
- Yeah, yeah.
- A person became complacent. Realistically, complacency or this loss of situational awareness, whatever we want to call it, or going on autopilot, is a product of a stable system. It's how our brains process information and it's not something that we can control, meaning we can't control when our brains choose to go on autopilot.
So, when we're learning something new, we're actually using the frontal lobe part of our brain, the thinking part, the part that, when you're hearing your own thoughts, they're coming from, the active thinking part. When we have gained a skill set, our brains are really good at conserving energy. Actually, a lot of what happens with our bodies, even the decision-making that we have, is based in this fundamental desire to conserve energy.
And so we no longer have to use this frontal lobe portion of our brain. Instead, we have habit loops that we've created and we can tap into those using less energy, which frees up our frontal lobe to do something else. So, if you think about learning how to drive, we'll just go back to driving as an example, when you first learned how to drive, you were using all of your brain energy to process what you were doing and you weren't actively thinking about anything else.
You were thinking about how hard you were going to press the gas and how much you were going to turn the wheel. Those were the things that you were thinking about. But once you become proficient at that skill, your brain stores that information in a different place. And now when you drive, if you drive routinely, you can think about whatever you want to think about, right? Your brain is thinking about dinner and what happened at work and all that stuff. We don't actively make the decision as to which part of our brain to use.
- So, although you can sort of recognize it, if something snaps you out of a moment of going on autopilot, like let's say you're driving along and, you know, an animal runs across the road, that's an attention activator, and suddenly your brain will snap back to using the thinking part, but you didn't actively choose to use the habit loops.
That was happening automatically. And so this notion that we could try to force people not to be complacent or not go on autopilot, it's not true. It's flawed. Because even if we put somebody in an upset condition... So one, we're always trying to create stable systems. So, in order to have somebody not go on autopilot, you would need for the system to be highly unstable all the time, which is not what we want, right?
So, but even if you could create that, it's amazing how quickly we figure out how to make sense of unstable systems and we still develop habit loops to deal with them as much as possible. So, that's one version of complacency, right? And so then this other version of complacency is, you know, well, the person just doesn't see the risk anymore.
You know, they've been doing it for too long and they don't think it's a problem. And so we might call that complacency. And that's just risk normalization, which is also an extremely human characteristic that we can't force not to happen. Most of the operations that exist need some level of risk normalization in order for people to continue to do them, which sounds terrible, but it's true.
Like, if you think about what we're asking people to do...
- Yeah, well, if you think even just, like, you know, cutting an apple, right, like if we were terrified every single time, but at the same time, you're right about that bias where you're like, "I've cut an apple 5 million times before and there's no reason that on the 5,000,001...
- And one time, I'm going to cut my finger off, right?
- Yeah.
- And so, risk normalization, like, that's also not something that we can control. We have been, I think, mixing up what we can control and change versus what we should take as a given input and then think about how we manage it. So, for example, risk normalization, if it happens, it's natural, it's not something we're going to undo. But knowing that it happens means things like, when you have somebody brand new to a process, pull them aside right after they've learned about it and ask them what they're concerned about, because they haven't been risk-normalized yet.
Then, you know, within a couple of days or a week or depending upon what the skill or the process is, they will become risk-normalized and they won't be able to see things anymore. So, let's use that as an opportunity to learn about, you know, when you do cut the apple for the first time, what are the things that you're concerned about? And is this something that we should all be concerned about or is this something that we shouldn't be? Those are the types of things that HOP tries to help us separate.
What is in our control to change versus what is just human that we can't change, but we should know about so that we can consider it? And we can consider how to best use those characteristics to make improvements.
- Yeah. So, it becomes, yeah, not something that we try and change so much as an input into smarter design, really, right? It's a given. It's going to happen.
- It's a given. It's part of humanity. So, knowing that, now what, right? That's the questions that we ask ourselves.
- Okay. You've talked about people changing their whole cultural understanding and approach to safety once they learn about HOP principles. How or why do you think that happens?
- So, I think that happens probably for the same reasons that any sort of shift in thinking happens. For example, if I'm going into a group of people, a slice of an organization, product placement there, right, then my first plan, personally, is that I'm not going into a place to try to change how people think.
I'm an outside person, right? I have some amount of influence, but I don't have the ability to control people's minds, right? So I know I can't force somebody to think differently. But what I can do is I can explain concepts and then see who's passionate about them. So, with any group of people, you're going to have folks that are aligned with a certain thought process and folks that it's newer to and folks that it actually just, it rubs them the wrong way.
They're like, "This doesn't make any sense based on how I see the world." So if we're trying to think about how do we shift mindsets, what we want to do is find the people that this mindset makes a lot of sense to, and then teach them how to use it. How do you use it to create improvements that are helpful for everyone? And when other people start to see those improvements, that becomes evidence that the mindset is helpful, but they're not necessarily...
I'm not the one convincing them, it's the folks that are doing it within their organization, right? So, if we talk about strong ties and weak ties that we have with each other, right, so somebody who has a strong tie to you, what they do is going to influence what you do. And so when you are thinking about changing thought process within an organization, you're thinking about it more like a technology adoption curve than anything else.
You're going to have innovators that are going to think this way. And then if we upskill innovators on how to do things differently, well, then early adopters start to see that, and they say, "Well, if it's good enough for so and so, you know, I respect them, so I'll give it a try." And then they create their own body of evidence that it works and then they help influence, you know, later adopters.
But then, you're still going to have folks that it won't make sense to, right? They're going to be on the end of the spectrum. And that's okay, right? But I'd say the reason why the change happens is because there's so much positive, like, evidence-based response to changing how we're acting that it's hard to deny that it's helping, right? So, you set a little bit of a flame somewhere, and then you let it catch and then the rest of it… It's maybe not the best analogy for safety [inaudible].
- So, you actually just gave me a perfect example, but I'm sure you have more. Practically speaking, how is HOP implemented in an organization? So, what kind of structures and processes are used or maybe recommended? So, that's one, right, find the early adopters.
- Find the early adopters. And I mean, that's highly dependent upon who you talk to in the space. So, I can only just talk to what I've seen work, which it's actually not... There's a desire and a tendency, in most cases, to try to create, like, a structured program around things.
And the reason for that is we have mental images of what programs are. We know what they are. We can wrap our head around it. But realistically, HOP, it's not a program, it's a way of thinking that influences what we do, right? So, it can influence how programs exist, it can influence the tools that we have, but it's a way of thinking. So, if we wanted to focus on how do you have folks adopt a way of thinking, there's these phases that seem to occur.
What I'm describing isn't like Andrea's belief about how you change cultures, it is phases that we've seen organizations go through that are successful in making this change, right? So, it's they're proving it out by doing it. And you need to have someone somewhere in the organization exposed to these concepts and they're like, "Yeah, this makes sense.
I think we should do this," right? So, you need some sort of buy-in. We need a place to start in which someone in the organization in a leadership role that feels as though they have some amount of autonomy, also thinks it's a good enough idea, enough to expose the folks that work for him or her to the concepts. Then, we have to be exposed to the thought process itself, what are we trying to change about how we're thinking, and then we have to go do something with it.
And the first thing that seems to be helpful to do with these concepts is change, or augment, or tweak how we're learning from our workers, right? So, how are we gathering information from the people closest to the work in order to have a larger body of knowledge on how things are actually functioning? So, you know, we can discretely pick a task or a process, or a piece of a task or a piece of a process, and learn about what you described earlier, that blue line, right?
Work in practice is the blue line, versus the black line, which is work as we imagine it. We want to learn about the blue line, which has a lot of movement and change in it. And so that's the first thing that we do, is we get good at that. That's the upskilling part: how do you actually hold those types of conversations? What do you have to do culturally to create a space where people are actually willing to tell you what is happening? Because oftentimes, that is not aligned with a set of rules.
It's not aligned with a procedure. So, you have to have some degree of psychological safety for people to even be willing to talk to you. And then when you have quite a few people that can facilitate those types of conversations and you have people who are willing to talk to you about what it really looks like to do work, that's when this world opens up of this HOP space.
Because now you have a lot of flexibility in how you use those resources. If you're thinking about it from an upper management perspective, you can go... We call it operational learning, when we're learning about the blue line. You can go operationally learn in your organization about lots of different topics. And then that's actually the information that you can use to strategically improve things.
So, sort of the next phase is going from learning in the small pockets to being able to use that information to do things like improve our management systems, or to look at a discrete set of processes or hazards that exist, or a subset of outcomes that we don't want to see, and figure out what we actually need to change, with the recognition that in doing this different way of asking questions and learning from different places, what we're really doing is identifying better problem statements.
We're using fewer assumptions as to what the problem statements are and using more real information to know what they are. And then we're engaging with the workforce to not only verify that those really are the problems, but also what improvement actions do they see. And their design, because they're the end-user, is elegant compared to anything that I would design, right?
So if you're the end-user of something, you tend to naturally apply pieces of, like, human factors understanding because you know how you interface with something, right? You naturally don't create something that has a ton of bureaucracy, that's going to be a huge amount of effort to execute. You naturally choose something that you're pretty sure you'll do, and so chances are higher that other people in your same process task environment will also do it.
And so it helps reduce some of that variation between what we expected people to do in terms of an improvement versus what actually happened. So, that's sort of the next piece. And then when we get good at this, that's when we are really, really interested in the system design itself. So, now we have the right information, we've got the right problem statements.
So, then if we're thinking about brainstorming solution sets, now we get really clever about the type of solution that we're trying to put in place. And oftentimes, the type of solution in, like, a safety or quality space is one that's focused not necessarily on trying to control human behavior, but one that mitigates for the outcome of variation, right?
Whether it's variation on purpose, like a mistake, or variation by accident, like an error. So, if we think about that design piece of it, a really good example that you can visually picture is how our cars are designed so that you can crash them and walk away with your life. That's that mitigation concept. It didn't necessarily constrain my behavior. I unfortunately still got in the crash, but the designers recognized that a crash was possible, that it probably will happen, and therefore designed something so that the person who made that error or mistake can walk away with their life. The unfortunate side effect, in this case, is that the car is destroyed, but the car protected the human.
So, those are kind of the phases, leadership development, paradigm shift and thought, being able to actually execute operational learning, learning about the blue line, using that information to help us better design management systems, change how we're doing audits, change HR policies, change a lot of different small aspects about what we do. And then along that whole path, we're getting really clever about how we actually design solutions.
That's when we learn about the right problem statements.
- I'm hearing so many parallels with web design and user design, app design, user research. And one of the things I think is important to pull out too is that at the beginning, you're talking about small changes. So, you're not approaching people and saying, "Hi, I'm the big consultant from outside and we're going to change everything all at once and..."
- Everything about your business like this. We're going to do it tomorrow. No, absolutely not. Yeah. Terrified. You know, the proof is in the pudding as you make these sort of...you get this information and then, okay, well, let's tweak this and see how that goes. And yet, it does eventually move to like system design more holistically.
- Yes. But it's all to get there, which is kind of, and you're right to think about the software space, right? To get there, we're not trying to tear anything down, right? Nothing that we've done in the past is something that we would label as bad. We shouldn't have done that, that was bad. You know, that's a terrible program.
Instead, we're saying, "What exists today, and how do we use this thought process to tweak what exists?" Rather than, we're going to go in and say, "Yeah, let's replace everything." I mean, that goes against human nature too. You're not going to come in and tell me my baby is ugly and then tell me how to replace it. Like, that doesn't work, right?
So, it's all about tweaking, microchanges.
- Okay. So, now that we understand HOP a bit better, I'd like to focus on accountability and discipline. So, those words often come with baggage. Both of them are interpretive or can be interpreted as punitive, even if they're not intended that way. What is the HOP take on those concepts?
- So, let me just start off with a little bit of why it's so important that we know what the HOP take is if we're going to embark on this type of journey. Because it's actually this notion of accountability and this notion of disciplinary action is central to being able to change the thought process that we're using to manage.
And the reason why it's central is... Let's just think about it super practically for a moment. Say I'm trying to understand how work is done, and I fundamentally recognize that how work is done, because of variation in real life, is not going to be reflected in my written paperwork in a lot of circumstances, and actually, what people are doing is sometimes directly in conflict with the written paperwork. Then somehow I have to learn that information, which up until this point, in most industries, the folks executing the work have worked really hard to hide because of how we've chosen to respond to it, right?
If we respond through disciplinary action or we respond through labeling something as a violation, then human nature for all of us is to try to reduce negative consequence for ourselves. And so we will work really hard to hide any behavior that we are pretty sure someone else will subject us to a "punishment" for.
We can just think about speeding if we want to for a moment, right? So, if you get a ticket for speeding, it's a form of punishment administered by another adult to you, right? Generally, that doesn't mean from that moment on in time, you are going to change your driving patterns. But you're going to change a little bit and the thing that you're going to change is towards making sure that you don't get caught again.
- That you don't get caught.
- Right. So, you slow down in that area or maybe you slow down for a few days, because you're like, "Oh, they've got speed traps everywhere." But it doesn't permanently change what we think the proper speed for driving is. That's not what happens. We just get better at hiding it. We call that a compliance mentality. So, when we're really good at showing one behavior when there is some sort of authority figure in the area, we're complying to a rule, but we're doing it just for the sake of complying, so that I don't get a punishment, not because I intrinsically believe it's the right thing to do.
So, that said, if I want to learn the true or the real deal of what's actually happening, I have to be able to create a psychologically safe space for people to talk in. But what that also means is that as a leader, I'm going to hear things that I would historically have labeled as a violation, that historically we've used HR disciplinary action for, that suddenly, now I can't do that because...
- If you want to maintain that psychological safety.
...safety, I'm never going to learn anything ever again, right? So that information is going to go back underground. And so, in order to figure out, well, what do we do with this? Does that just mean we're throwing accountability out the window? That's when we have to detangle these concepts of accountability and HR disciplinary action. HOP agrees with the notion that we need accountability. But we need accountability by its most useful definition within an organization of people.
And I really do hate when people define things, but I'm about to do it, so I'm really sorry. I think I already did it with mistakes and error, so I might as well just continue on that path. So, the Merriam-Webster definition, I believe from 2009, of accountability is a person's willingness to take responsibility for their actions and to account for, meaning to tell the story of, their actions.
So, it's a willingness to take responsibility. So, the important part of that is it's somebody's own willingness. That means that as a leader, I can't force someone to feel accountable. I can't punish someone into feeling accountable. I can't demand somebody to feel accountable. They either do or they don't.
They either feel responsible or they don't feel responsible. And unfortunately, in many places, we have set up environments where it's difficult to feel accountable, because my role as a leader, although I can't force someone to feel accountable, I do have to create an environment where people can feel accountable, where it's possible to feel ownership for your actions.
If the environment that I have created, either on purpose or inadvertently, is one where people feel like it's highly command and control, meaning I walk in and, as somebody who's executing the work, I feel a parent-child relationship, head down, work to task, follow written instruction, and that's the mindset that I have, it's very hard in that type of environment to feel ownership for your actions, because you actually don't believe your actions are your own to take.
They've been dictated by somebody else. And you can hear it. So, in organizations that do have lower accountability, you can hear it in how people interact. People say things like, "That's not my job," or, "don't look at me, I was just following the procedure," or, "don't talk to me about that, talk to my boss about it."
Those are all indications that we have a culture, we've created an environment where it's difficult to feel accountable. Now, there's going to be a few people in any subset, right, that feel apathetic towards what they're doing, that's normal, but when we have a lot of people that feel that way, that's not a person problem. That's not like we're hiring a bunch of people who are apathetic to work.
That is an environment that we've put people in that is sculpting them to feel that way. So, that's the type of accountability we want. It's closer to ownership than anything else, if we had to pick a synonym.
- I was going to say autonomy.
- Autonomy, ownership.
- Having a feeling of... Yeah.
- A feeling of, I'm not just a cog in a wheel, right? My opinions on things are important. I bring value. How I'm doing this work, people can learn from it. I see problems and I'm able to be part of the solution sets for those problems, right? All of those pieces that allow us to feel empowered, that's actually the type of accountability that is the glue that holds an organization together.
That doesn't mean we don't use disciplinary action. Whether we call it disciplinary action or HR disciplinary action, what I mean by it is the things like you write somebody up, you give them, you know, a verbal warning, written warning, like a progressive discipline with the end all being termination or removal from the organization. That doesn't mean we don't use that tool, but that tool isn't designed to create accountability.
That tool is a fair way to remove someone from your system, from your organization that isn't meeting expectations. That they're not fitting into the social norms that have been created. That they are struggling in ways that most other people are not struggling. And in those circumstances, you actually know who those people are already.
For most organizations, the people that we really need to be using HR disciplinary action with, you know them by their first and last name, because they're struggling to fit in, right? And not that they don't fit in because they're eccentric or unique, but that they're causing a lot of disruption within the team that they're trying to work in, right?
They're the person that's always coming in to work late, that, you know, the coworkers are saying, "Something terrible is going to happen with this person. You need to talk to them," right? They don't get along with their direct supervisor when everybody else seems to get along with them. That's what we talk about as an issue with the person. So, if we wanted to try to separate these two, there's actually a fairly easy rule of thumb.
Now, there's an exception to every rule. I just haven't found the exception to this rule yet, so I'm going to continue using it until I find a huge exception. It's how to know the difference between the person problem and the system problem, the person problem being what we would use HR disciplinary action for.
If you can look yourself in the mirror and say, "If I remove this person, it will 100% fix the problems that I'm seeing," that's a person problem. And we should be using performance management, disciplinary action, whatever terms we want to use. But if we look ourselves in the mirror and we say, "If I remove this person, it'll send a message to other people, but I have seen other people do this in the past and I'm going to see it again in the future," that's not a person problem, that's a system problem.
And using disciplinary action in that circumstance does the exact opposite of what we're looking to create. By "sending that message," meaning removing somebody or writing somebody up, we've inadvertently sent a completely different type of message. And that completely different type of message is: head down, work to plan, you violated this process and it is not acceptable to do.
And therefore, the human response is to hide information as opposed to being able to account for, meaning tell the story of your actions. And it is harder to feel ownership because I have now created an adult-child relationship rather than an adult-adult relationship. So, mixing up when we use the tool of disciplinary action versus when we learn when we have a system issue hurts us in terms of actually being able to create an environment where accountability or ownership is a fundamental piece of the culture.
- Wow, yeah, that's a great insight. I'm going to be chewing on that for a while. You've described getting buy-in around HOP principles as your greatest challenge and your greatest reward. So, I was wondering if you could talk about your experience and what you learned. And I'm especially interested in sort of practical advice to listeners who are kind of keen to make this shift but don't really know where to start.
- Oh, man, where to start. So, if you're starting for yourself, meaning that this sounds interesting, then I guess the advice that I would have is that no one group, one person, one written document on any of this HOP stuff holds the keys, the solution set to anything.
It is a collection of really smart people that have done a lot of work in a lot of different spaces, that is a growing community made up of smart people in organizations that are trying to do this stuff and are willing to practice, and fail, and see what works in their organization, see what doesn't, and share stories.
And so if you are interested in this, read everything you can, but also actively question all of the things that you read, because that's how you... It becomes your own thought process at that point. And when it becomes your own thought process, then it's easy to know how to take action because it's like explaining how you think, right?
It's an easy way to take action. And there's a community of people who do the same thing, right? So, knowing that there's a group of people out there who are trying to think the same way, who recognize that there are some holes in how we've been managing and looking at things, and that those holes are fillable if we take the time to learn a little bit of, you know, Psychology 101, Sociology 101, System Design 101. If we're willing to put some of those pieces together, it makes a lot more sense how to start to fill those holes and get better at it.
And there's a lot of really smart people out there trying it and willing to try. So, I guess that would be my advice to people who want to start. Just read what's out there. Read what's out there and try things. Try things, try the thought process, be curious yourself. And the more that you are curious yourself, the more you start to realize that there's sort of this whole space in industry we haven't explored that is actually fairly logical to us when we have the moment to think about it.
But we've been within industry so long that it's hard for us to recognize that some of the practices that we have don't line up with how humans interact with each other. So yeah, so google some HOP stuff and start reading some books, and then, you know, connect to groups of people that are trying this out in real life. Because that's where the rubber meets the road: the practical application of this, and people trying it out in the real space.
- Yeah. I was going to say it sounds less like look for a manual and more like understand that it's a practice of a discipline, right? And a discipline is just what you said, a group of smart people practicing, and learning, and failing, and sharing ideas and, yeah. It's fascinating stuff. So, I have a few questions that I ask all of our guests. The first one, I'm going to call the University of Andrea.
If you were to develop your own non-technical safety management training curriculum, where would you start? Which core human skills or soft skills, whatever you want to call them, are the most important to develop for tomorrow's safety professionals?
- Oh, well, let me think on that for a second, because I'd like to think that I kind of do that for a living. But if I was going to start over, and I have thought about this actually quite frequently... In addition to what I hope to impart when I'm talking to people, you know, in an organization, there's another whole part of my brain that would love to focus on how did we get here?
Meaning, how did we get to a place that my job could be reminding people to talk to their employees? Like, what happened that that is actually a job? And so I think that if I could start from scratch, I would start with teaching some general concepts about how our brains process information at an elementary school age, so that it is part of what we know and part of what we understand about humanity.
Specifically, I'd probably focus on biases, which I know we tend to label as negative, but biases meaning just the way that our brains function to categorize information, and the good and the bad that comes from that. If many of us had an understanding of how we are processing information as individuals, the rest of this would be easy, so much so that I wouldn't have a job, because there'd be no reason for me to teach people things that feel like we should be teaching our children at a young age. You know, why, when you look at something different, do you have that feeling of concern or fear?
What is happening in your brain that is making you feel that way, and do you have to feel that way? Like, the ability to teach those types of things at a young age would help us, I think, tremendously not to have things like Taylorism existing in an organization. So, that would be University of Andrea, teaching biases to people before we've created these habit loops that they become so ingrained that we have to be taught later on.
- I don't even notice. I have a textbook for the University of Andrea then. It's called "Design for Cognitive Bias" by David Dylan Thomas. And it's actually, again, user design, web design, graphic design, but, you know, a system is a system. So, it's interesting, like, just learning.
Reading it was... It's a teeny tiny book and it's all about cognitive biases, right? Like, not negative per se, just how we process information.
- Yes. And anyone who is super interested in how all this works, this is one of my favorite books. I don't know if it comes up as backwards on the screen.
- Steven Pinker, "How the Mind Works."
- "How the Mind Works." But this is like the foundational piece of how we are processing information, which then makes something like biases makes so much sense as to how they happen and how they become part of our world.
- Okay. A couple more. So, if you could travel back in time and speak to yourself at the beginning of your safety career and you could only give yourself one piece of advice, what would that be?
- That I'm not supposed to know the answer. That's not my job. When I came into working in safety in an organization, the overall feeling that I had, whether I invented this myself or whether it was taught to me, I can't tell you at this point, was that my job was to find risk and execute solutions to reduce it.
But in order to do that, because I wasn't the person who was doing the work, I had to make a tremendous number of assumptions, even in identifying what areas we should be working on, but also in how you would address them. And it's not that I didn't use the best published knowledge, right, of how you would design a defense; it's that I didn't involve the people who actually do the work nearly as much as I should have, so much so that I now recognize it's not my job to solve it.
It's my job to facilitate discussion, it's my job to bring resources to the right place, it's my job to empower people to actually have decision rights to make changes, it's my job to make sure that those changes don't have unintended consequences that I could see from my position that would be hard to see from another position. That's my job.
My job isn't to go fix things for people because I don't really have the right information to do that. That would be the advice that I think I would try to give myself.
- That's great advice, especially for young people, because we all think we know it all when we're just starting out.
- Well, we're sort of groomed that way in organizations too. You're told: make a legacy for yourself, find a problem, solve the problem, you know.
- You're the promise of tomorrow. So, you've already given one thing, but I wanted to ask, what are some of your favorite practical resources for safety managers who are looking to improve? So, the Steven Pinker book?
- Yeah. So, everything not safety-related, to be honest with you. If a resource is geared towards a group of safety professionals, you've probably already been indoctrinated into that thought process, right? So it would be outside of that. I would be looking at sources of information that teach us about how we interact with each other.
So, that's anthropology, or psychology. Actually, one of my favorite resources, completely unrelated to industry but one that gives you a really good idea of the concepts we've been talking about in this HOP space, in terms of how you raise a family, is called "Raising Human Beings."
You'd think that I had these planted here, but they just sort of happen to be here. And this discussion is basically about finding the black line and blue line of what your expectation as a parent is versus what's actually happening with how your children are seeing the world, and being able to learn what that is, to develop a problem statement, and then work together to figure out how to bridge that gap.
Completely different application of a similar thought process. And I think that every time I read something psychology-based that has nothing to do with industry, I get smarter about how we work together as humans.
- Yeah. That makes sense. You make the connections. The connections aren't necessarily drawn for you, but as you read them, they become obvious because of your experience. Where can our listeners find you on the web?
- So, if you put my name, Andrea Baker, and HOP into a Google search together, I'm easily findable. Andrea Baker alone, sometimes not; HOP alone, I'm somewhere in there. But if you're looking for me directly, there's a website where a group of us who practice and teach this have pooled our resources, meaning all the stuff that we teach, like the slides that we have. It's all available and free for people.
That's at HOP Hub, so hophub.org. And so that is probably the easiest way to find me.
- Awesome. That sounds like a great pool of resources. That's all the time we have for today. Thanks so much for joining us, and thanks to our listeners for tuning in.
- My pleasure to be here. Great questions, by the way.
- Thank you. I'd like to take one minute before we go to acknowledge the "Safety Labs by Slice" team. I might be the face and the voice of Safety Labs, but there's a whole team of very smart, passionate people with a lot of heart who work behind the scenes to make this a fantastic podcast. So, have a great day. ♪
[music] ♪ Safety Labs is created by Slice, the only safety knife on the market with a finger-friendly blade. Find us at sliceproducts.com. Until next time, stay safe. ♪ [music] ♪