Christian Hunt
EP
4

Mitigating Human Risk and Influencing Decision-Making through Understanding Behavioral Science

This week on Safety Labs by Slice: Christian Hunt. Christian explores the concept of human risk, the difference between ethics and compliance, and how EHS managers can influence decision-making.

In This Episode

In this episode, Mary Conquest speaks with Christian Hunt, a behavioral scientist with a focus on safety.

Christian takes a deep dive into understanding human risk and how this concept helps EHS professionals become even more effective.

He reveals how humans tend to be hidden risk magnets in the workplace and shares insights into the impact of action and inaction on safety. With a sharp focus on the synergy of ethics and compliance, Christian manages to draw the line between having too many or too few constraints on co-workers. While discussing daily safety procedures, he explains why a reformative and learning culture must underpin safety management, and how EHS professionals could harness basic marketing approaches to influence people and change behaviors.

Transcript

♪ [music] ♪ - [Mary] My name is Mary Conquest. I'm your host for "Safety Labs by Slice," the podcast where we explore the human side of safety to support safety professionals. We move past regulations and reportables to talk about the core skills of safety leadership: empathy, influence, trust, rapport.

In other words, the soft skills that help you do the hard stuff. ♪ [music] ♪ I'm here today with Christian Hunt, the founder of Human Risk Limited, a consultancy and training firm that specializes in the deployment of behavioral science in the fields of ethics and compliance.

Christian helps clients understand human risk, minimize their exposure to it, and support those with the responsibility for mitigating it. Welcome.

- [Christian] Thank you.

- Where are you joining us from? I know you...

- So I'm currently in London.

- You're currently in London, but sometimes you're in Bavaria. Am I right?

- Yeah, so I'm transitioning from London to Munich so you've currently caught me at one of the transition phases. But yeah, most of my time is spent in Munich.

- So today, we're talking about mitigating human risk and influencing decision-making through understanding behavioral science. So I'd like to start with risk. How would you define risk, and what are the important elements for safety professionals to really understand about risk?

- So I won't stick to a very tight definition of risk. But I mean, risk is generally unpredictable things or things we would rather not have happen, although recognizing sometimes we need to take risk to get reward. But I specialize in human risk, which I very loosely define as the risk of people doing things they shouldn't, or not doing things they should.

And if that sounds like a really broad definition, it is. It's intentionally broad to encompass everything from I'm deliberately setting out to commit fraud, through to I'm a bit tired and I've made a mistake. And the other thing that's worth noting in that definition is that I talk about action and inaction. And very often when things go wrong, we tend to think of it as people making mistakes, taking risks, doing silly things.

Actually, if you don't do certain things, and those things are important, that can have equal, if not greater implications. So there's an active and a passive element to human risk.

- You've talked about the safety industry viewing humans as units of risk. So what are the implications of that view? What would you say to that?

- So what I think is fascinating about people...and obviously, that's the core of my business, because that's what I refer to as human risk...is that they are simultaneously huge assets and also huge drivers of risk. So if you look at businesses, why do we employ people? Well, the answer is we need them to get to where we need to get to. And so there are many industries that could not function without a large amount of human capital in them.

And even if machines are taking over some of the tasks, actually, we still have a core of we need to employ people. And so the challenge we've got there is how do we get the best out of those people to do the things that they do? Because if we were asking humans to do repetitive, predictable tasks, we could give those to machines. So what we're doing is increasingly hiring people because they can perform tasks that machines can't, but equally, that's when they're at their riskiest.

And so we've got to balance these two things out, saying people are simultaneously a huge asset and a huge liability. So what I help people think about is how do you balance those two things out? Because if you just say to people, be the best you can, do whatever you need to do to get the best out of you, you'll get some great outcomes, but you'll also run huge amounts of human risk.

That's one end of what risk is. If you, equally, turn around and say, look, you're all incredibly risky and I want you to be very, very careful about what you do, you will miss opportunities for them to generate what I call human rewards, so getting the best out of people. So we're in a really interesting balancing act, which recognizes that people are simultaneously the largest risk facing all organizations and equally a huge competitive advantage and a necessity.

So how do we balance those two things out? And the challenge with this really comes down to the fact that people are cognizant. They understand what's going on around them. So it's not as if we're just programming robots, human beings are sentient, they understand when they're being ordered around, they have views on that. How do we maximize that in the 21st century? And so that's the real challenge that I help companies address.

I start from the risk perspective, but I also look at the reward perspective as well.

- So that sort of brings us a little bit into compliance. So tell me a little about ethics and compliance, where do they fit into the larger picture?

- So my background is in that. I started my career as an accountant. We try not to talk about that. Then I moved into financial services. And financial services, as your viewers and listeners will know, is a heavily regulated industry for very good reason, right? We saw in 2008, and we still see, what happens when that industry gets out of control.

It's not capable of regulating itself and doing the right thing; it needs to have societal control. So compliance is really the idea of we have a set of rules that are imposed on...well, all companies have to comply with rules. But there are certain companies that have regulators that look after them. So compliance functions typically exist in industries where there is a regulator, a set of rules that that company is required to comply with, that society has decided are important.

And so on one hand, you've got this idea of compliance, right, what do we need to do to be compliant, to fit in with the rules that are set for us by government or other authorities? Ethics is doing the right thing. And the reason I put those two things together is that they can either collide or they often sit very nicely with each other.

And what I mean by that simply is sometimes you can do things that are 100% compliant, what we might call lawful but awful. So in other words, if you just looked at the rules, you'd say, well, that's okay, the rules permit us to do that. There are times when we can drive at a particular speed on a road, but it would actually be dangerous to do so. And so those are situations where we say to ourselves, we need to think both about the compliance angle, what the rules tell us we can do, but then there's the ethical question about what we should do.

And the mantra I always use is just because you can, doesn't mean you should. So just because the law permits something, doesn't necessarily mean that gets the right outcome. Now, that happens because either the law has been crafted without your particular situation in mind. So we've all seen examples of a rule that looks sensible, and then you apply it in a particular situation and it makes no sense whatsoever.

And equally, we can have situations where the law is itself preventing the right outcome from an ethical perspective. And there's an interesting tension: how do companies manage that? Well, clearly, you've got to make sure you're compliant with the rules. But what you don't want to do is to sort of say, well, the rules tell us we had to do this and so we went with that, and we ignored the fact that there was an issue there. There is a real tension between these two things.

And so I combine this idea of what we are told by other people that we have to do with what we believe as an organization is the right thing to do. And so I think compliance and ethics need to be managed together. Because what you don't want to do is send a signal to people within your organization that here's a set of rules that we impose because other people have told us to do it. And here's some stuff that we believe in.

Because what that suggests is that we should only care about things we believe in and ignore the rest. We have to balance those two things out; we have to recognize there are tensions between those two things. So for me, compliance and ethics are horrible words to describe very, very challenging things. Compliance I don't like as a concept, because it sounds awful. You know, compliance is what the rules say, ethics is what is the right thing to do; we have to look at them both together.

- So I think when you're talking about that tension there's a risk of...I've heard you use the term compliance theater. Is that what you mean, when you're simply...well, technically, we followed the rules, we're just, you know, missing the ethics piece?

- So compliance theater is a term that I specifically use to refer to exercises that are just there to be seen to be there but don't necessarily deliver the underlying objective. So if we take an example of airport security, we all recognize the underlying risk. We've seen what happens when people breach airport security that puts passengers, planes, society at risk.

And so we have various security measures that we introduced. And we could talk about COVID as well, but let's stick to airline safety because it's a bit less controversial. And we introduced these processes. Now, very often we do things, we require things, that aren't necessarily that effective, but we do them to be seen to be doing them. Now, that's a perfectly legitimate objective because one of the reasons you would introduce airport security, those scanners, is to deter people from trying to do something.

So even if those scanners don't necessarily detect everything, the very fact that they exist is potentially a deterrent in and of itself. And why do they slow us down as we walk up to the scanners? Well, they want to destabilize people who might be up to no good. And so that would be an example of compliance theater.

And I use the term in two different directions. That's an example I think of potentially good compliance theater. We're not only doing it, we need to be seen to be doing it. But we are deliberately going further than we would need to so that we have that visual, that deterrence component. There are other times where compliance theater is a much more negative concept, where I'm talking about things that we do that we all know serve absolutely no purpose whatsoever, are deeply irritating, and are just performative, box-ticking if you like.

And so I'm hugely in favor of well-intentioned, well-thought-through compliance theater, so things that we do to send a signal to people, that's valid. I absolutely detest things that we do that are just box-ticking, we're doing it to be seen to be doing it, we don't really care.

It's a huge consumer of resources and not particularly effective. So the term covers good and bad. But you're absolutely right, the reason I love the word theater is it's a spectacle, it's something that is put on as a performance. And the question is, is it a performance that is worthwhile? Is it delivering an objective? Is that performance in itself valuable? Or is it a performance of just ticking the box, going through the motions?

That I think is super destructive. And, of course, often, we confuse the two things. So we might perceive something that has a deterrent effect as being a waste of time and pointless. But that's the challenge for the people managing the compliance and ethics is thinking, well, why are we doing this? Is it doing what we want it to do? And that might not be recognized by the population, that you're trying to police because the people it will irritate may not be the people that it's targeting.

Sometimes we have to do things...we have to go wider because we can't identify who it is we're chasing. So very, very broad amount of things that fit into compliance theater, but you're quite right, there is a very negative aspect to it. But I also wanted to stress there's a positive one as well.

- Yeah, it sounds to me like strategy is really the difference between the two. If it's done just to be done, it's not necessarily effective. But if there's a strategy behind it, even if those affected, you know, in the case of airport security, aren't fully aware of why they have to take off their shoes, it's useful.

- Yeah. I mean, I think the danger is, it's very easy to concoct a strategy that sounds good. So I am supportive of things that have a basis. So is there some behavioral science behind it? Is there evidence that this actually works? And one of the challenges I think we have is that there are lots of examples of compliance theater that purport to be, well, we're handling this on a very smart basis. But in reality, they're just lazy; they're what's always happened, they are easy for the people to implement, and they're not necessarily thought through properly.

So I agree with you, we need a strategy. But just making something up and calling it a strategy isn't good enough, in my opinion. We need to know why we're doing it and be able to justify that. Particularly if we're adding burdens to individuals or organizations, that's a cost; wasting people's time, getting them to do extra things, comes at a cost. We need to be able to justify that cost and not just, oh well, I did it because I could.

- So an understanding of how humans react to constraints is a fundamental concept for successful compliance. How do you find the sweet spot between too many and too few constraints?

- So I mean...

- What's the magic formula?

- Well, the quick answer is there isn't one. And we have to think very intelligently about what it is we are trying to achieve. And so the objective that we're looking for when we think about both compliance and ethics is we go and work out: is this a binary outcome? So, for example, a speed limit is a binary outcome, you are either driving within the speed limit or you are not, it's very, very simple and straightforward.

Now, if it's something like that...and, of course, we have to make sure that's correctly calibrated. So you can't just pluck a number out of thin air, what's the basis for that? If you've got something like that, then that's a very simple thing for you to police, it's a simple thing for you to communicate. That's very different to something like an ethical dilemma, you know, do the right thing. You know, Google's "do no evil" would be a good example.

Provide good customer service. That's a completely different category because actually, in order for people to do that, they have to be working with you, and they have to want to do it to get the outcome; they have to understand the basis, and they have to have the wherewithal to be able to manage that. And so very often what we do when we look at compliance and ethics challenges is we kind of lump it all together and say, here's a set of rules that you need to comply with.

But there are some rules that are binary, and there are some rules that have qualitative components. And so how we think about those things may be very different. And we love the idea of simple rules: it's great for communicating, let's put lines, let's put limits, let's put numbers on things. That feels very, very safe and secure. And if we can do that, we absolutely should, because that's easy to communicate, it's easy to police.

And it's also easy for the target audience to respond to. But many of the things that we're trying to mitigate don't actually come with a neatly defined set of parameters even if you could quantify it. There may be a certain situation where something is okay, and an entirely different situation where doing exactly the same thing causes huge problems. So if you think about treating customers ethically, what is an ethically appropriate solution?

Something that's appropriate for one customer may not be appropriate for another one. And so if you just have a slavish set of rules that you follow, in certain circumstances, you might get the right answer in other circumstances you might not. And so as we're looking at limits that we want to impose on people we've got to be thinking very carefully around is this simple and binary or is there a qualitative component to it?

And how much can I monitor? Because one of the other things that we like to think is, if I'm going to impose restrictions on people, I'm going to police around this, I'll set some simple rules. But can you actually see everything that they're doing? And how do they perceive that? Are they aware of what you are able to see or not? And if so, are there bits that you can't see?

And so if there are circumstances where you can't monitor in real-time, you can't intervene, maybe you can only find out afterwards or you'll only find out if something goes wrong. That is a very different set of controls than say something like, are you driving at the right speed on this stretch of road and I can monitor you as you do it and catch you in the act. Sometimes you might not find out until afterwards.

So with the constraints that we put on people, it's very important to think about the circumstances. And really importantly, how will they react to it? Because very often, we can sit there and say, as compliance and ethics professionals, this is what I want you to do. We know what it is, we understand the outcome, and we have spent ages studying this subject. To someone on the ground who is just trying to do their job, they aren't steeped in this stuff.

And so how do we translate something that is potentially nebulous and complex into a world where somebody may not even have thought about it, may not even be aware of challenges that they're facing. And so constraints are a really interesting challenge. And, of course, the flip side of constraints is we want to unleash people, we want to give them opportunities to do incredible things in a positive sense.

And so, again, we don't want to constrain legitimate activity by ring-fencing it too tightly. We don't want to treat people like small children all of the time. And that's the balancing act here. At what point do we give people some autonomy, allow them to do what they want? And at what point do we need to tightly control? And there will be different situations that will require different responses. And sometimes you need to do a little bit of both.

- I'm going to come back to context in a minute, I just wanted to ask about the role of psychological safety, which is related to context. So you have a great example about how there are new traffic lane rules in the UK, and how the public has reacted to them. And that tells us a little bit about statistical safety versus psychological safety.

- Yeah, so in very, very simple terms, the UK is going through a phase, particularly in large cities like London, of trying to transition people to do more cycling. And so COVID has helped with that. But also a sense of people wanting to do the right thing for the environment. And also our streets are getting busier and busier, so it is actually quicker, sometimes, to cycle around.

So they've introduced some changes to what we call the Highway Code. And what's interesting about the code is it's a combination of rules and kind of suggestions. So principles that you might like to consider. And if something happens, an accident, some sort of consequence, they'll bear those principles in mind. So it's a combination of a sort of voluntary code, that you know you could be held accountable for, and some very strict rules.

And so the government has been looking at really changing that to bring it up to speed with some of the common practices, but also undeniably with an objective of trying to encourage more people to cycle. And so one of the challenges you've got there is that cyclists like the new rules because it gives them a little bit more protection under the law.

It requires car drivers to behave slightly differently. But if you're a car driver, it feels like you're having your world constrained very significantly, you'll be required to do things that you didn't before. And the problem that the government has with these regulations... Now, I'm in favor of them but I can see arguments the other way. The challenge that the government's got is that they haven't communicated them properly.

They've introduced these things by stealth, there's a communications plan coming down the line. And so they haven't really taken the target audience with them. And I think one of the things that's interesting is, if you're going to impose changes, you really have to make sure that people not just understand what it is you want them to do, but the rationale behind it.

Now, they might not agree with that rationale, which is the case that we have here. But if you don't explain it to them, people start creating their own logic. And so if I'm a car driver that suddenly feels like my road space is being sacrificed for these irritating cyclists, then that feels like a constraint on me. I'm having something taken away from me. If I'm a cyclist, I will welcome these extra protections that require cars to be a little bit further away.

And so the psychology of how we introduce rules is really important. And I come back to the point I made earlier, which is if we write rules for computers, algorithms, robots, we codify, and you could argue to an extent small children, we are codifying things and we know that if we communicate it, there's a strong likelihood that we'll get the outcome we're looking for. Human beings don't work like that because we suss out whether we like something or not.

So if it's communicated to us in a way we like, if it's positive, if it's communicated by messengers that we like, we may react differently to the same information that's presented by messengers that we don't like, or with inferences that we don't like. And, of course, one of the things now is that we have a lot more access to other people's thoughts through social media and other forms of communication. And so what we're seeing with this particular change in traffic regulations is an example of a set of rules that, if you could take them out of context, people would probably think made sense.

But as people approach this, it'll impact people in one of two ways. You're either a cyclist who benefits from it, or you're a car driver who might not, and that's going to cause particular problems. And so we need to think, as we're trying to influence human behavior, it is not just a case of being clear about what we're looking for, there's a persuasive element to it. And that matters because in many cases, the rules, again, back to principles and ideas, we have to sort of encourage people to do the right thing, and we can't police it all the time.

So we're trying to stop bad outcomes from happening. And we're trying to work with human beings that respond to the environments that they're put in. So it's a really good example, I think, of the challenges that we're facing.

- Do you see the British government hiring influencers too?

- It's a strange thing.

- It's not that crazy, really.

- No, no, the strange thing is that the UK Government was one of the first globally to introduce the idea of behavioral science. And they created something called the Behavioural Insights Team. And again, I want to stress, they're by no means the only one, but it's a flagship. The government was looking at this around 2010, and they set up what was colloquially called the Nudge Unit, which was really a way of thinking: how can we use this nudge theory, the idea of changing the choice architecture, so we don't just tell people to do something or mandate it?

We find ways to make an outcome that we're looking to achieve more desirable, but we don't remove people's ability not to do it. So a good example is if I want you to eat more healthily, what I won't do is take unhealthy foods off the menu, but I might position the healthier options in a way that makes them more attractive to you. And that's worked very well with children, for example. So give them the healthy food first, they fill up their plates, and then they arrive at the unhealthy stuff.

By then it's too late, right, they've filled their plates with the healthier things. So it's not removing any sense of personal freedom. And so they introduced this Behavioural Insights Team, who were very, very effective at a range of different interventions, from getting people to pay their taxes on time, to getting people to be healthier, just changing many dynamics. And governments love it because it's cost-effective, it doesn't cost too much to do these things.

And you can experiment with them. And the government data set was huge. So the bit that's astonishing is that the UK Government has for many years had the capability to do these things, and has been doing it in other fields. So I find this case study really interesting because it's a body that's used this stuff before, and for some reason, they missed the boat on this one. Now, maybe they were distracted looking at COVID-related issues, but they've got people that can do it, for some reason they chose not to.

So I have to conclude either it was ignored, or potentially that they were given bad advice. I'll go with the former because that feels more comfortable, but it's possibly the latter as well.

- So coming back to context, one of the important elements of safety training is context and decision making. You've discussed the airline industry's safety regulators as doing a lot of things right, in terms of the tone of the regulations and their use of context. Can you tell me a bit more about that?

- Yeah. So I think one of the things that's astonishing about airlines is that if you look at the number of accidents, by which I mean things going wrong in the air, in very simple terms, it's an incredibly small amount relative to the number of planes that are in the sky. And, of course, thanks to COVID, there've been far fewer flights. But over time, there has been a clear decline in the number of accidents involving aircraft.

And that I think has largely come from the industry and its approach, but also the way that industry has been regulated. And so the airline industry regulators have recognized that if airline A has an issue, then it is incumbent on them to also communicate that to other airlines, because if it can happen here, it can happen over there.

So they're very good at encouraging sharing of information. And they have reversed something that normally applies... So ordinarily, your best bet...in simple terms, we learn this as small children: if we can hide a problem, it might go away. What the airline industry has been encouraged to do is to share problems. So if something goes wrong, you get into more trouble by not sharing than you do by sharing.

And they support people in doing that process. So if something is wrong with your plane, you report it, and that gets reported to the regulator, and they can have an industry-wide response to it. And so there's a learning culture there, what's known as a just culture in regulatory terms, that encourages exactly the sorts of behaviors that you were looking for. It becomes far more preventative, and so near misses aren't things to be hidden, they are things to be reported and shared, and the regulator can respond to that.

And so you've got a regulatory environment that encourages exactly the kinds of behavior that you and I as passengers and members of society would want because the regulators set the right environment. The firms then, in turn, do the same thing. And so they have some very interesting policies, where, for example, they recognize the human dynamic.

So I spoke to one airline who, fascinatingly, told me, look, if somebody on our aircrew has a drug problem, what we will do, if they come to us, is work with them, we will help them go to rehab, the objective being to put them back in the sky. Now that sounds terrifying from a passenger perspective, but it's very, very smart because if you do that, then you've given people every opportunity to come forward, and you can help them through the problem.

And there's no consequence to them because they'll keep their job. And so if you said, right, if we catch you doing drugs, we're going to fire you, that's a very strong line. But you also have to find a path for the redemptive sinner, the person that wants to get themselves out of trouble. Because we can all, you know, find ourselves on the wrong side of the law. So there are these routes back to redemption, not for big, serious issues, but for things where there is actually an opportunity to rehabilitate people in every sense.

And what I find fascinating is that airlines do this really, really well. Now, a lot of you will say to me, well, hold on a minute, airlines have all sorts of issues. And you'd be right. If you look at what goes wrong in the airline industry, tons of issues, luggage gets lost, we have data challenges, we have all sorts of...a ton of issues, but none of it happens in the air, it all happens on the ground.

And the bit that fascinates me is that this culture that's applied to airline safety, to the literally life-and-death aspects of the business, seems to work really well. Not every single time, it's not 100% perfect, but it works pretty well. But they still have challenges on the ground. And what that tells me is that it's possible to do this if people really care about it and are incentivized to do it.

The challenge I see is that the very things they do to keep us safe in the skies, they don't necessarily replicate on the ground. Now, maybe in the scheme of things, that doesn't matter, because if my luggage takes a little bit longer to get to me, it's not the end of the world, whereas if something happens on a plane it is. But nonetheless, there's a really interesting distinction there. So I find it a fascinating industry because on the one hand, really, really good at the stuff, supported by the regulator.

On the other hand, some absolutely woeful examples of terrible customer service, bad decision making, you know, things going wrong that are fundamentally basic. And so that's why I find the industry really fascinating. But I try to point at the positive and say, look, here's an example of where we can get it right, we can eliminate the errors in the bit that really matters. And that's the bit I think it'd be really good if we could copy in other contexts, because then we can try and build a safer world and, you know, protect people.

- So you had also talked about the way that the regulations are written in terms of airline companies complying with the regulations. And this goes back to constraints, not being too prescriptive but, of course, introducing some constraints.

Can you tell me a little more about that?

- Yeah. So what I think they're really good at is recognizing the underlying human dynamics of regulations. And when we think about a subject like compliance or ethics, we talk about it in kind of conceptual terms, but actually, ultimately, it's about influencing human decision-making. So you can't say to an organization be compliant or be ethical, because it's an inanimate object, it won't respond.

It's the people within it that determine whether or not that happens. So what I think airline regulators have recognized is that if they want to influence the people within it to get the outcomes they're looking for, so to do the things that we need them to do, to not do the things that we don't, and to do that to an appropriate quality, they need to start thinking about what it feels like for those people. And so the example I gave of giving people an incentive to come forward when something has gone wrong recognizes the psychological challenge.

And, therefore, we need to build...you mentioned the word earlier, psychological safety. Give people a reason for coming forward, make it easier to come forward than not to come forward. And so they think these things through. And I think a really good example is the airline safety briefing. I've spent far more time than is healthy looking at the regulations behind it, because I thought, this is interesting.

We sit on a plane, and what they try and do is grab our attention to point out what will happen when things go wrong. Some airlines do it very well, some airlines do it less well. But there is an attempt to engage us. And there are some interesting examples on the internet of how they try and do that. Now, my initial reaction was, oh, this is the airlines using their initiative and saying, if we just read out the safety regulation, no one's going to pay attention.

But what's fascinating is it's actually the regulator that encourages them to think in those terms. So when you've got a cartoon character, celebrities, people able to what looks like freestyle with the briefing, that's actually the regulator creating an environment and basically saying to the airlines, find what you think is the most effective way to get this job done. And so there is a battle for our attention.

Now, we are not naturally minded to listen to safety briefings because, well, I've gone through the hassle at the airport, I'm sitting in my seat, I've heard all this stuff before. And anyway, my plane is not going to crash, because it's really safe. So they've got a battle for our attention. And the airlines are given latitude here; they have to do the safety briefing. And bear in mind they're not sending us a YouTube link six months before we fly, or the day before; they're giving it to us just at the point when it's most relevant.

Now, we are a captive audience because we're sat strapped into our seats. But it's a great bit of psychology because it recognizes, okay, we've got people trapped here. But just because they are sat there doesn't make them a captive audience in the sense that they're going to want to listen. And in many cases, we might want to shut off and not hear it, because actually, it's pretty unpleasant to think about the fact that your plane might have to do an emergency landing; I don't want to listen to that stuff.

So they've got to find a way to try and get through. And the bit that I like is that the airlines are given huge amounts of latitude as to how they try and do that. The content is fairly fixed, but the way they present it is open to them. And so I think we've seen some really interesting examples of airlines allowing crew to do that, or thinking creatively about how they produce videos, which illustrates the fact that the regulator understands the dynamics at play here.

And imagine if every single airline did a standardized briefing; that would be terrible. And sure, frequent fliers will get comfortable and won't necessarily listen every time. But they've really tried, as far as they can, to get to the audience, to win that audience over, to grab their attention on some level, and therefore to maximize the chances of people doing the right thing if an emergency happens.

And therefore, I love the fact that, one, it's not prescriptive and, two, it allows the airlines a lot of latitude to own it and do what they want with it, which allows different responses to come through. But there's a real recognition here that if you try to codify that too much, you get the wrong answer. And one of the big things we get wrong with compliance is we think, ah, the best thing to do is to have detailed rules that people follow.

Detailed rules are amazing if you don't have any form of volatility in the environment. So if you're dealing with the laws of physics or chemistry, then detailed prescriptive rules make a lot of sense; nothing's going to change. A nuclear power station? Write those things down, codify them, tell people exactly what to put in the reactor and what not to. If you're trying to communicate with human beings, having a very rigid approach is incredibly dangerous, because it can appear to be tone-deaf.

So we need to flex this piece. And that's why I really like that example.

- And what do you think a safety manager, a professional could learn from that in terms of doing safety training, you know, introducing, whether that's at the beginning, or it's a toolbox talk, a daily sort of thing?

- So the key piece, I think, is to think about your target audience. And very often, if we are responsible for implementing a rule or doing some training, we are experts in the subject by default, otherwise we wouldn't be responsible for it. And so when we come in, we bring our expertise into a world where people are not necessarily interested in the subject, don't necessarily have the background, and have a million and one other things on their mind.

And so what we need to do is think not about what I think would be a good way of communicating, but about what is likely to be received by the target audience. It's particularly challenging if you are somebody who has spent your whole life focusing on a risk and your job is to communicate it to people who don't; by default, they're doing something else. So how do I translate that? Well, the technical response, the things I think are important, the things I think will resonate, the things I think are obvious, might not be to them.

And so we have to start thinking about what it will feel like to that target audience. That's where I think we need to adopt a marketing mindset. So, a little bit, how do you sell a car or some insurance to somebody? Well, you can use all sorts of technical terminology, but the most effective advertisements get to people because they play on an emotional level. And it's that that we need to start thinking about.

And it's really easy to think you've done your job well because you've imparted the knowledge you have to the people that need it. But there's a difference between the knowledge you need to be the professional and the knowledge those people need to do whatever it is they're doing safely or compliantly. And so, as I look at these things, I think it's a translation role. I've got to translate the risk, the regulation, the ethics, into something that makes sense to people on the ground who won't have spent the entirety of their lives thinking about it.

So I need to communicate with them in their language, in ways that make sense to them. And what I mustn't do is try and somehow convert them into becoming mini versions of me. I'm not trying to turn the audience into experts in compliance or safety. What I'm trying to do is get them to do whatever it is they need to do in a safe or compliant manner. And that's the bit that people often miss.

And so dumping a whole load of stuff on people feels very safe, but have they emerged from it better able to do the things you need them to do and not do the things you don't? That's the test. And that's why I think marketing is a helpful comparison.

- Yeah, it seems to me, you know, we do market research. And really empathy is key in understanding how someone's coming at a subject that you're well versed in when, as you say they've got a million other things. They're trying to make widget A, they're not too worried about the guard on the machine, for example.

- Yeah, and that's the point. That's why we are sometimes blindsided: because we as compliance and safety professionals are looking at this subject in detail. We're focusing on it; that's our job. The people we're talking to, trying to communicate with, and trying to protect are doing something completely different.

They are focusing on whatever it is they are employed to do. And so, in some respects, we come along with a message for them. If we're not careful, we can be perceived to be a nuisance, because we're slowing things down, because very often we're requiring people to do things in a way they wouldn't ordinarily do them. Because if they were doing it in a safe manner in the first place, you wouldn't need to communicate it. So we're intervening and stopping the natural flow of whatever it is they're up to.

And so what we need to do is be really careful here and not assume, for example, that they understand why, that they agree with us, or that they even think we have the authority to impose it. And so, you know, we think we're in a strong position: my job is to protect people, and I'm going to come in and stop all these bad things from happening. But actually, if you really want to stop them, you've got to work with the people you're trying to influence, the people whose behavior will crystallize or increase the risk.

And so how we translate our theory, our detailed understanding, into something they understand, care about, and respect is hugely important. And we often don't think about that. We might get to the "understand"; we don't necessarily get to the "care about" and "respect." And we've all been in situations where we've had petty bureaucrats telling us what to do.

And as soon as they're out of earshot, we'll try to find a way around it because they're irritating petty bureaucrats. So we need to overcome that and we need to accept the fact that just because we can tell people to do something doesn't necessarily mean they want to or they will. And we really rely on the employment contract sometimes and say, well, because we employ you, we can just tell you what to do.

Legally correct. But from an emotional commitment perspective, that's not necessarily there. And if there's a qualitative element, ethics being a good example, then we've really got to win people's hearts and minds. And this idea that we can just tell them what to do, yes, it's theoretically correct. Yes, you could ultimately fire them but that's a really dangerous way to approach topics where there's a lot of nuance.

- Yeah, it's really about persuasion, effective persuasion. Going back to context a little bit, context isn't static. So a good example of this is variable speed limits at school zones, or according to weather conditions. How does that relate to continuous improvement and to designing successful safety programs?

- So I think anything that's variable is...firstly, it's clearly intelligent in the sense that it's thought about the fact that a black-and-white rule is not necessarily going to get there. So you talked about speed limits. Well, if we're restricting people...we've all been on stretches of road in the middle of the night where the speed limit seems a bit silly. And we've also been on the flip side, where maybe it's a bit too fast in some other circumstances.

So this idea of having variable speed limits, or other forms of variable rule, makes a lot of sense when the environment could change drastically. And we don't want a kind of average, because an average speed limit might be far too slow in certain circumstances and far too fast in others; one size may fit nothing. And, of course, one of the challenges with speed limits is that we have different drivers driving different vehicles, so there's a ton of variables there.

So a speed limit is, in and of itself, an attempt to codify a load of variables. And something we all intuitively understand is that it probably has to be constant across people. We couldn't say, well, you're this particular demographic, so you've got this speed limit, and you're that demographic, so you've got another.

That starts to feel unfair. So what we do is, where we can, we have a single speed limit: easier to police, easier to enforce, easier to communicate. But sometimes we recognize that's not going to work, and so we have these flexible regimes. And I think what's interesting about continuous improvement is that if we want to solve this particular problem, we cannot look at risk management as a case of, oh, well, I've looked at it, I've analyzed it, job done, move on.

We have to keep coming back and refreshing things as circumstances change. And particularly where it involves human beings, human perceptions may change. There's an interesting example: every time a major new safety feature is introduced into cars, there are more accidents. Which sounds counterintuitive until I tell you that people drive faster. So you put airbags in, you put anti-lock braking, ABS, on.

People think, oh, it's safer. And, of course, car manufacturers promote this. And so we do something that's called risk compensation: the car feels a lot safer, so we take more risk. Think about modern cars; they're a lot quieter, and you don't have that sense of great speed that you would in an old jalopy. And so what's interesting there is that we compensate because our perception of the risk changes.

And so as technology improves and circumstances change, the risk profile of what we're looking at may change. So even if you got it bang-on right the first time, you may well find that the assumptions you placed on that particular calculation turn out to no longer be valid. So we have to constantly review what we're doing and what we're looking at.

And so whenever we're trying to influence human behavior, it's necessary to have a regular review process and also to recognize that if we have lots of evidence that we've got it wrong, we shouldn't be afraid to change. So I have a very simple mantra, which says, look, if one or two people are breaking a rule, and everybody else is managing to be compliant, then that points towards the problem being those people, assuming they're all doing the same thing as everybody else.

So the basic, simple heuristic there is: it's those people. If you have lots of people breaking a rule, then there's probably something wrong with the rule. Either the rule is stopping something legitimate from happening, or it's not clear, or it's really hard to comply with. But there's a piece of behavioral feedback that tells you that whatever you were doing in terms of setting the rule, communicating it, and training it is not working.

And I think we love to think that it's a little bit like, you know, a mathematical problem: we approach something, we write some rules, we codify it, we can monitor it, we can control it, job done. And, of course, you can't review everything on a daily basis; we need consistency of regulation. But at some point, you need to come back, look at it, and say to yourself, what can I learn from what has happened?

How has this actually fared in practice? In the same way, to return to my marketing analogy, we would look at an advertising campaign and say, I've run an ad campaign; if we've increased sales, and I've eliminated all the other factors, then I can go, oh, that was a successful campaign. If my sales are going down through the floor, and the ad campaign seems to be the major driver of it, I wouldn't keep running that advertising campaign.

I would review it and maybe stop it, try something else, whatever. That's the same approach that we need to adopt. And I think it's really tempting to think, we've put a fence up, we've controlled this thing. And for a period of time, we might have done. But what we're running here are experiments; it's our best-guess attempt to influence human decision-making.

So I think we need to adopt the right experimental mindset that says we're going to try some things. Every single rule that we introduce, every communication, every piece of training, is an experiment to try and get the outcome we're looking for. And what we should do is adopt a scientific mindset that says we need to iterate. So we try something, and we don't just assume we've got it right: we monitor it, we recognize what's there, we listen to what people are telling us, we watch their behaviors.

And we then go through an iterative process to try and finesse them. Even if we've got it 100% correct, we also have to recognize that people get bored of things, people switch off. The more comfortable I am with something...the classic example is the commute. It's very easy to drive to work and forget how you got there because you're on autopilot. There's a greater likelihood of having an accident; you're statistically more likely to have an accident near where you live, in part because you're driving there more often than elsewhere, but also because you're feeling very, very comfortable.

So those human factors also mean that we need to review it. So even if, theoretically, we've got it right, we need to freshen things up, in the same way, you might argue, that advertisers freshen up their campaigns while keeping a consistent theme. So that experimental mindset is absolutely critical.

- And I think what comes to mind when you say that as well, is generational change, right? You know, what works for a particular generation, you know, the pre-internet, the pre-smartphone generation versus very young workers that are just starting different kinds of messaging, different kinds of...

- Yeah, and I want to urge a bit of caution there as well, because what we tend to like to do is simplify things. So you're completely right to say that we should be looking along that sort of axis and asking what the different ways are. But none of us likes to think of ourselves that way; we're all unique and different. So this idea that a certain generation are all the same is potentially also flawed.

And there will be people that...you know, we've all seen younger people who behave in a much more elderly way than you'd expect, and some older people who come across as much younger. So a very simple analysis there says we need to...I think this is about cognitive diversity. And so you're quite right, there will be a propensity for digital natives, for example, to behave differently.

So the average digital native has a different relationship with technology than the average digital immigrant. But again, there'll be some digital natives that are useless with this, and some digital immigrants that are phenomenal, ahead of the curve. And that's just one particular lens that we're looking at it through. And as we move forward with technology, particularly as we communicate with people, you know, as we do digital training, we've got the opportunity to be much more targeted and really address people in the way that is most effective for them.

And that might be by segmenting by age, by location, or by experience. But it might also be giving people a little bit of a sense of freedom and agency in how they receive it. So if you think about training, you know, historically, maybe we got people into a classroom and lectured them for an hour, and that was the job done. And then maybe we moved that same thing online and said, watch this video and then do a quiz. But the reality is, there will be some people who prefer to pick it up online and some who prefer face to face; some people would like short, bite-sized chunks, while other people really want to understand the detail.

We have the option with technology to start giving people much more of an opportunity to pick how they want to consume it. And so I think the challenge for compliance, ethics, and safety professionals is to start saying, actually, I can now present my audience with something that resonates with them. It's a lot more work, because I've got to start thinking about different ways of presenting it. But if I take a medium like video, one of the things that's interesting is that, assuming we want to move towards digital, particularly as we have people in remote workforces and hybrid working, we're going to need...

Digital becomes a key component in how we train people. And what is fascinating about that is that we can start to use a medium like video. Video works really well for some people in some contexts; other people might like audio, and if we've produced a video, we've got the audio there. Others want to read stuff, so we can give them the transcript. If you start with the written word, it's very difficult to create a video and audio product out of it.

If you start with an audio-visual product, you can build all of those things from it. And so what I want to encourage people to do is to think much more about the different ways I can engage my audience, and how I can create a plurality of formats that doesn't change the message, doesn't change the information they consume, but presents it in a way that is much easier and much more relatable for them, so they can find it in a way that works for them. And if you think about how people consume information nowadays, in a digital environment, there are some people that are very happy to watch an hour-long YouTube video.

There are other people that are massive fans of TikTok, who like it in very short, sharp bursts. We can cater to both of those if we think smartly about it. It's a little bit more work, but if we get that right, the message lands much better, they're much more able to listen to what we're saying, and we'll have an audience that we've communicated with effectively, as opposed to assuming that one size fits all.

- Yeah, there's actually a lot of talk in the communications industry as well about neurodiversity and how different people consume content. When I see a video online, I just want to read it; I don't want to watch the video. Whereas, of course, there are other people who would prefer the video. So, I have a few questions that I ask every guest.

So the first one, I'm going to call the University of Christian. If you were to develop your own safety management training curriculum, where would you start? So what core human skills are the most important to develop in tomorrow's safety professionals?

- So I think the answer is the same whether you want to influence your own behavior or other people's. And so I'm approaching your safety piece from two angles: either I'm an individual in a position where I want to change how I do things to make myself safer, or I'm trying to communicate to someone else.

Understanding what drives human decision-making is critical. So I'm a huge fan of behavioral science, of just understanding the basics. Why do we decide to do the things that we do? Now, there are circumstantial reasons, but there are a ton of basics there, things that influence our decision-making that we might not necessarily have thought of. Other people, for example, have a huge influence on how we choose. So I would say a basic behavioral science course...or you can call it psychology. Understanding that is absolutely critical, both for us to have personal mastery and if we want to persuade other people.

So for me, that should be on the curriculum as the absolute 101.

- Let's turn back time. If you could travel back in time and speak to yourself at the beginning of your career, and you could only give young Christian one piece of advice what would that be?

- Follow your passion. And I say that because one of the benefits of being in the UK is that our education system allows you to study things that are not necessarily relevant to a potential final career. So if you want to do medicine, or something very, very technical like engineering, you need to have decided that early.

But I did a literature degree; I studied French and German literature. And I was fascinated by that because it's all about people; we write books about people. And what I realized afterwards was that it didn't feel relevant to business. And so I went off and became an accountant. And I'm terrible at math, right, just not good. I did it to try and be sensible.

And it's fine. I have the qualification, and I learned some things along the way. But I feel like that sent me down a course of being sensible. And I think, with the career options that are available nowadays, being a little bit less sensible and going with what your heart tells you to do is much more powerful. And it took me 20 or 30 years to get to a point where I felt comfortable doing that. Now, I recognize that my past is part of how that happened, but I think if I had followed my heart a little bit earlier on, I might have got there more quickly.

- So you might have realized that what drew you to literature in the first place was your interest in human behavior?

- Yeah. Or the excuse I've just given you is a retrofitted argument that allows me to justify my previous decisions; both are possible. But yeah, I'd always been attracted by the idea of radio, and it took me 30-odd years to start a podcast. I think I could have gone down that particular path. And I spent a number of years sacrificing things for what I thought were sensible reasons.

I'm less sure now about whether those were sensible, but, you know, you can't regret the things you've done. So my advice to everybody else is: if you're passionate about something, you'll do a better job of it, and you're more likely to succeed than if you hate what you're doing.

- True. So, when I said we'd get back to behavioral psychology, I'd like to ask for your best most practical tips or resources for safety managers looking to improve their core skills. And in this case, I would say looking to learn more about behavioral science. So this could be a book, a website, a concept.

- It would be remiss of me not to plug my own podcast. So the "Human Risk Podcast" looks at human decision-making as a risk. And I interview people that range from academic behavioral scientists through to people with practical experience of human risk: comedians, a sexologist, a ton of people. So that's obviously the first place you would go. But imagine you've listened to all of those episodes and consumed that. Then there are a number of people I'd point to that you wouldn't necessarily read with a safety mindset, but who are very helpful in understanding what goes on.

And those people are Dan Ariely, a professor who looks at behavioral science and has written a book called "Predictably Irrational" that I think is an absolute must-read and very helpful. And I'll pick another Dan: Professor Daniel Kahneman, who's written a book called "Thinking, Fast and Slow," which is all about how our brains work.

And I'll pick a brand-new book that's just out by Zoe Chance, a professor at Yale, called "Influence Is Your Superpower." It takes some very complex behavioral science and looks at how we develop things like charisma, how we influence other people, and why influencing other people isn't a manipulative thing.

It's actually a key component of how we communicate with others. So if you want an easy gateway drug, I would go with Zoe's book, but the other two are also worth a look. And then the podcast, obviously.

- Of course, obviously, which leads me to where can our listeners find you on the web? So podcast, where else?

- All sorts of places. So humanriskpodcast.com is the podcast website. Humanrisk.com had gone, so human-risk.com is my business website. I'm available on LinkedIn and Twitter; if you just search for human risk, in both cases you will find me. And I love connecting with people, so please feel free. I know the rules of social media say you're only supposed to connect with people you know.

I think we know each other because you've been listening to me this long, so please feel free to connect with me. I provide content on LinkedIn and Twitter as well.

- Thank you very much. I appreciate your time. And that's all the time we have today.

- Thank you. ♪ [music] ♪ - Safety Labs is created by Slice, the only safety knife on the market with a finger-friendly blade. Find us at sliceproducts.com. Until next time, stay safe.

Christian Hunt

Author of 'Humanizing Rules' & founder of Human Risk, a consultancy & training firm that brings behavioural science to ethics & compliance

Christian Hunt on Twitter

The Human Risk Podcast

The Human Risk Website

“Predictably Irrational” by Dan Ariely

“Thinking, Fast and Slow” by Daniel Kahneman

“Influence Is Your Superpower: The Science of Winning Hearts, Sparking Change, and Making Good Things Happen” by Zoe Chance