- [Mary] Hi there. Welcome to "Safety Labs by Slice." We're on the brink of a technological revolution driven by artificial intelligence, large language models, virtual and augmented reality, wearable tech, and innovations that we can't yet imagine. But what does all this have to do with workplace safety, especially the human side of safety? As it turns out, quite a lot.
Today's guest works at the intersection of technology and safety, which as we'll discuss, have a lot more in common than you might expect. Cameron Stevens is a technologist and a chartered health and safety professional. He's worked in the OHS profession for over 15 years, developing expertise in transforming organizations with his unique approach. He's been called Australia's leading safety technologist and safety futurist.
Cameron is the founder of Pocketknife Group, a consultancy focused on making innovation more accessible and improving the design, experience, and safety of work using a combination of technology, the scientific method, and human-centered design. He created the Safety Innovation Academy, a coaching and consulting solution for OHS leaders trying to navigate the future of work.
Through the academy, Cameron provides digital literacy training and bespoke safety innovation solutions for complex problems. Cameron joins us from Perth, Australia. Welcome.
- [Cameron] Thanks, Mary. I'm under the flight path, so it's pretty noisy this morning. It's peak hour.
- I'm sure it's fine. I can't hear it at all. So, tech has had a bad reputation for being inaccessible. We all have visions, maybe stereotypes of hyper-technical nerds speaking in a way no one can understand, or visionary futurists, I'm thinking Steve Jobs, who aren't necessarily in touch with day-to-day life.
Do you think that perception overall is changing?
- Yeah. Absolutely. I think it's changing because, without even realizing it, we're using technology every day for all sorts of other purposes. Listening to Spotify, watching Netflix, we're continually being exposed to technology. So, without realizing it, I think we're building a base understanding of what it means to interact with technology.
So that's why it's so important that we bridge the gap between those two extremes.
- Okay. So the common view of tech is that it's all about the tech itself, that it amounts to nifty gadgets that we're playing with just because we can. But you see a lot in common between worker-centered safety design and tech innovations. So can you talk about the overlaps there?
- Sure. I think the best example, and a big contributor to this issue, particularly in the safety space, is the regular safety expo, where you basically have a whole bunch of technologies plunked around a big arena, and people walk around to look at the shiniest gadget. It's all about the tech.
It's not really so much about the use case. There's no metadata in the conference program that suggests what some piece of technology will do, or why. So, we get really disconnected from the value that that piece of technology may bring to the business. And 9 times out of 10, I get engaged by companies that say, we were at this conference, we saw this amazing piece of technology, we've bought some and it's still sitting on the shelf.
Or we didn't realize it couldn't do certain things because we didn't understand what they said to us, or we didn't understand the fine print, or it didn't align with our business culture, or we just weren't ready. So, fundamentally, humans have to use technology. So it doesn't matter even how amazing and comprehensive and validated the actual technology stack is. I say tech stack quite off the tongue now, and I probably shouldn't.
But when I say tech stack, I mean the hardware, so the gadget, the software that is used to interact with the human and the gadget, the network, so the internet connection that drives the knowledge or the data transfer. So that whole solution that you need to get a technology going, the most critical piece is the human because they have to use it.
And therefore, like with any change, we have to understand human motivation. We have to make the interaction with that technology as delightful as possible, and we have to make clear what's in it for them. And when you see people go to a conference or expo and have a chat with the vendor, it's almost all about the tech, what can it bring to the business, what's all fabulous, shiny, and new, rather than really understanding how the human is going to interact with it, and therefore how the people in the business will have a better experience and a better design of work.
- So, you've talked about your role as that of a hotel concierge. I think this kind of dovetails with what you're saying here. So what do you mean by that?
- So it's a term to use with caution, because language matters. The word, concierge, is one that makes sense to me, but it might not make sense to others. And I think the origin of the word, concierge, really came around in France, I believe, around the war era, where the housekeeper, typically a woman, would hold the keys to all of the rooms.
And she would know all the gossip that was going on. So there weren't really great connotations with the term, concierge. But if you look now at the hotel concierge, or the flight purser in an airplane, you know, they're the person that knows a little bit about a lot, is a really good communicator with people, and they can effectively translate complex problems and find out who or what could be the solution to that problem.
So, the example I give when I'm trying to explain why I think the concierge is a good metaphor for the way I do my work is this: you go to a hotel on a family holiday, or you're traveling solo, you arrive at a destination and you want to have Vietnamese food, for example, and you're in Calgary.
So, you go to the hotel concierge and say, "You know, where's the best Vietnamese in the city?" And the hotel concierge will say, "Oh, sir, Calgary is not the best place for Vietnamese, but there's excellent Cambodian food, or there's excellent French food. What's the reason you're wanting to go for Vietnamese?" And it might be, oh, well, it's my wedding anniversary, or it's my birthday, and every year I have Vietnamese, or there's some motivation as to why I feel like it.
Now the concierge may gently steer the person in another direction, or they might provide a variety of different options and say why each may be a suitable alternative that still meets the need behind the person's question. So when it comes to translating complex technical challenges into really human-centered problems, there appears to be a need, at least from my experience, for someone to act as that mediator or translator between the technical folks and the workers who are going to be using or interfacing with the technology.
And if you think about the role of the health and safety professional, we're very well-placed to be that role, whatever we call it. So we understand work provided we get out there and understand how work is done on a day-to-day basis. We understand how work is planned and designed. We understand the work systems.
But the missing link is the technology side. So, with a little boost of digital literacy, we get to bridge the gap and play that role. And I think it's a massive opportunity for health and safety practitioners to explore that side of it.
- I find that sometimes there's a one-to-one relationship, but often there's not, between what a technology is built for and how you want to use it. So, my example for that is a spreadsheet. A spreadsheet can be just a table, or it can be a database, or it can do a million different things very well.
It's very flexible. So I guess you need that translation piece of like, you know, now that I understand your problem, this would be the right tech or, you know, you could use this, it was maybe built for something else, but... Anyway, yeah.
- Yeah. Well, that's use case discovery. That's a massive part of it. I would've done, working with a wearable computing company, well over 200 deployments. And I was chatting on a live webinar type event with David Provan, and we were talking about some of the insights that I've gained from those technology deployments.
And one of the biggest insights was that health and safety professionals were invited to the use case design, the discussion about how the technology could be used, less than 10% of the time. When I asked teams why, because I felt responsible, like there was something going on here, some reason why a safety professional is not invited to discuss the way that a technology could be used with the humans when they're in such a great place to do that, I found there's a typical response to a novel challenge in a business from health and safety professionals. And I'm using the term we, because I consider myself a health and safety professional, part of that group, in a way. We typically tend to respond to novel ideas with the need to control things and constrain things, with a significant list of all the things that could go wrong.
And the challenge with that is it gets tiring for people. They don't like it. You know, if we want to use a drone to do a confined space inspection, the first thing is, oh, we have to watch out for this, and we'll have to look out for that, and that might happen, and we need to do this, and we'll have to get regulatory approval, and we're not set up for that, or it'll never work.
That response has almost been drummed into what we do because we're trying to protect people, and it comes from the right place. We're trying to protect the business, we're trying to protect people. So we're trying to put critical controls in and do all the things that are going to stop bad things happening. But what we fail to do when we immediately shift into that head space is recognize the opportunity, the upside, the potential these things have to fundamentally transform the way we work.
And I think, on reflection, that's just a mindset shift. It's not so much a personality issue, it's just the way that we immediately respond. So, it's about taking a short time to pause and reflect and go, let's look at the opportunity of this, as well as the way that we could be responsible.
So I call it responsible innovation. Yes, it is about risk management and risk control, but it's really about how do we be responsible in the way that we bring this into the business, responsible for the business so that we're not overloading the business with too much, responsible to the humans in the business. So protecting their privacy, looking at ethics, making sure that they feel comfortable with the change, but balancing and making sure that we don't introduce unwanted hazards into the business, or at least they're understood and controlled.
But we do that with a massive counterbalance of upside. So that contribution to use case design is just, I think somewhere where health and safety practitioners are so well poised and we have the ear of leadership, we have the ear of the frontline worker. We're visible in the business.
We really are the only role in my mind that has access to be effective. So we have to grab that with both hands.
- The first thing I'm thinking is that you're saying there's sort of a default no, or maybe not a no, but a default "this is going to be difficult" when it comes to new tech, as opposed to defaulting to, let's see where the opportunities are. So my next question was going to be, why do you think the safety side of organizations shies away from new tech?
So I think that's one reason, but I'm speaking specifically about like big data, wearable tech, biometrics, AI. Is it sort of a fear of change? Is it not maybe knowing what the upsides are?
- There's a few things, but one that really comes to mind, and this is quite technology-specific, but when we think about health and safety use cases for technology, we think about health and safety rather than improving the design of work. So my definition of safety tech, and I don't know if this is coming up somewhere in our chat, but when I describe a definition for safety tech, it is any technology that can improve the design, the experience, and the safety of work because...
And safety's a catchall for psychological safety, process safety, all the different forms of physical and psychological health and safety. So, any technology that improves the design, the experience, and the safety of work, because the safety of work is an emergent property of really well-designed work. So if I'm, for example, in a call center and I'm freezing cold, a safety tech in that context would be air conditioning.
Like, heating the room to a comfortable temperature. That would be, in my mind, considered safety tech. But let's go to the more modern technology, so computer vision using CCTV camera to monitor what's going on. And even the word, monitor, is a loaded word. To monitor what's happening on the shop floor in a factory.
And what I often see is tech founders who don't understand human psychology, not all of them obviously, I'm generalizing, but they don't understand the dynamics of frontline work. They don't have that context that a health and safety practitioner has. The flagship use case for CCTV is PPE compliance.
And that's because it's so palatable for the market to understand: yeah, this is what we're going to be doing. But this is where crafting a really interesting, more valuable use case for the business comes in. So PPE compliance may be important to a particular business for a particular reason. But generally speaking, a lot of modern safety science is moving away from that: if we're doing PPE compliance, it's not because we want to tell a worker off because they haven't got their vest, it's more about understanding why they don't have their vest or their hat on.
So, sure we can use the technology for that purpose. But more importantly...or sorry, more valuable use case might be we're monitoring over a portion of time how much movement forklift vehicles have in and around loading zones where people are. And that might give us understanding as to the design and the layout of the factory.
Have we got an opportunity to change the layout to mitigate forklift and human interaction, rather than alert people when there was a forklift and human interaction? The opportunity there is to change the design of the work, because if we improve the work design, which then improves the experience of the worker, because they're not worried about having to follow particular rules or about a forklift coming up behind them without them realizing, then we can improve the safety of work and the experience of those workers in the workplace.
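The aggregate approach described above, counting forklift and pedestrian co-occurrences by zone to inform layout redesign rather than alarming on each event, might be sketched roughly like this. The zone names and detection records are invented for illustration:

```python
from collections import Counter

def interaction_hotspots(detections, min_count=2):
    """Aggregate forklift/pedestrian co-occurrences by zone.

    `detections` is a list of (timestamp, zone, forklift_present,
    pedestrian_present) tuples, e.g. from a computer-vision pipeline.
    Returns zones ranked by how often both were present together,
    a work-design signal rather than a per-event alarm.
    """
    counts = Counter(
        zone
        for _, zone, forklift, pedestrian in detections
        if forklift and pedestrian
    )
    return [(zone, n) for zone, n in counts.most_common() if n >= min_count]

# Hypothetical detections from a loading-dock camera:
log = [
    (1, "dock-A", True, True),
    (2, "dock-A", True, False),
    (3, "dock-B", True, True),
    (4, "dock-A", True, True),
    (5, "aisle-3", False, True),
]
print(interaction_hotspots(log, min_count=1))
```

A ranked list like this points a designer at the layout question (why is dock-A a hotspot?) instead of at individual workers.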
So I think there's a tendency for people to look very surface level at health and safety use cases, rather than looking at the way technology can improve work itself, the design of that work. Really, technology improving work design is what we're trying to achieve here; health and safety will be improved by improving the work design.
We're not going straight to the thing that we think health and safety is about, because then we start focusing on all of the stuff that you'll hear about from folks like Greg Smith, with Paper Safe, and David Provan: focusing on the work of safety rather than the safety of work. I resonate with those thinkers because I'm looking at work design. I'm not looking at whether these safety activities can be more automated for the safety professional.
Because that isn't that valuable in my opinion.
- Opening up the opportunities of how it can be helpful, not just looking for symptoms but looking at systems. Anyway. So I'm going to move on. Artificial intelligence is the tech that's in the public consciousness right now. You know, people are talking about it.
It's a huge field of study with a lot of subsets. Where do you see opportunities for safety, or I should say work design, if it can harness some of the things that AI can bring to us?
- Oh, such a hard question to answer in a short space of time. I think, first of all, we need to be very clear which subset technology we're talking about under the umbrella of artificial intelligence. When we talk about artificial intelligence, everything is AI-driven. This podcast hosting solution that we're talking on now is AI-driven with the way that it manages the noise algorithm, the way that it optimizes the frame rate and the lighting.
They're all computers getting data, analyzing data against a set of criteria, and then feeding back an improvement, which we then say, yes, we like that improvement, keep that fancy lighting thing you've just put on, or no, we didn't like that. And the computer then continues to learn.
So, we first need to be very clear about which technology we're talking about. Is it computer vision? Are we talking about a large language model that's being used to generate text content? What is it that we're talking about? That's very important, number one. Number two is, when it comes down to it, this is all about data. And health and safety data sets are horrible in almost [inaudible], almost 100% of organizations.
And I say sets deliberately, because you have so much that could be considered data that can be used to improve the design, experience, and safety of work, and it's typically horrible. Let me explain a little bit. So, good quality data will be available when we need it. It'll be accessible, it'll be structured, it'll be in the format that we need, and if it's going to contain personal data, it will be managed from a privacy standpoint.
It'll be secure. So, all of those tenets of good quality data, I generally don't observe when I look at a health and safety data set. The good, rich, quality data will be on Jenny's computer in a secret spreadsheet. It won't be in the corporate database.
There are so many reasons why that's the case, but that's an accessibility issue. You know, Jenny can have it on her computer, but Paul can't get access to it because it's only on Jenny's private drive. Or we've got a lot of checklist-type data, which is check boxes with no context, no pictures, no videos, no conversations, nothing to give us an understanding of why the no was a no and what that means.
We've got a lot of high-volume, low-quality data that offers almost zero value to a machine. So this is the other thing. I think the question you asked is, you know, what value can health and safety obtain from artificial intelligence? Well, right now, not a lot, because our health and safety data sucks. So, we have to be clear about what types of things we're hoping to achieve, and then make sure that our data is structured, stored, pre-processed, and available for a machine to learn from.
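A rough illustration of that gap: auditing checkbox-style records against those quality tenets by flagging a "no" answer that carries no supporting context for a machine to learn from. The field names here are invented for the sketch:

```python
def audit_records(records, context_fields=("photo", "notes", "voice")):
    """Flag high-volume, low-quality inspection records: a 'no' answer
    with no supporting context (photo, free-text notes, voice memo)
    gives a machine almost nothing to learn from."""
    low_quality = []
    for rec in records:
        answered_no = any(v == "no" for v in rec.get("answers", {}).values())
        has_context = any(rec.get(f) for f in context_fields)
        if answered_no and not has_context:
            low_quality.append(rec["id"])
    return low_quality

# Invented checklist records:
records = [
    {"id": 1, "answers": {"guard_fitted": "no"}, "notes": ""},
    {"id": 2, "answers": {"guard_fitted": "no"},
     "notes": "guard removed for scheduled cleaning"},
    {"id": 3, "answers": {"guard_fitted": "yes"}},
]
print(audit_records(records))  # record 1 is a context-free "no"
```

Even a crude audit like this makes the "high-volume, low-quality" problem visible before anyone tries to train a model on the data.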
We also have to appreciate that right now, we don't have what's called general artificial intelligence, or general AI. We have very narrow, discrete AI. So, the machines will learn over time with reinforcement, typically with a human in the loop, but the original dataset is full of bias.
It's our bias. We're the ones that generate the data. Sure, you can get sensor-derived data, so weather data, for example, but there's still bias in that data. There's a lot of bias in the data that we generate. And therefore, what the machines are serving up to us is full of bias. So right now, there's significant opportunity for what computers can do to assist us through decision making.
But we've got a responsibility to provide data that's as unbiased as we can make it, with much less bias in the system, or at least to know what the biases are, make sure that data is of good quality, and keep the human in the loop to reinforce, to support the machine to learn.
Just like a sniffer dog going out and, you know, sniffing a whole bunch of different scents. We train the sniffer dog to learn which scent is the scent we're looking for and reinforce the dog to continue to look for that scent for us. That's all we're doing with data when we're machine learning with reinforcement learning.
So we can bring the principles right back to basics; this is data science 101. I was horrible at maths at school, so this is an uncomfortable place for me. Understanding maths and numbers is not my forte, but it's about getting right back to basics and really clearly understanding what these machines are doing for us.
And then when you look at the data set that you have from health and safety data, you'll realize just how unlikely it is that we're going to get good-quality outcomes. So that's where we're at in health and safety. We're also really poor at determining what the value is to the business, because we're often challenged to say, what's the dollar return on investment?
And that immediately aligns with the cost of an injury, which then becomes this whole "the number of injuries we save equals this amount of dollars in insurance, and that's why we should use this technology." It's a really difficult way to demonstrate return on investment, which is why we see a lot of artificial intelligence and machine learning solutions used in advertising, for things that are money generators, because there's a clear dollar return on investment: pump some money into the algorithm to serve up more content, more things that people will purchase.
And that's a tried and tested approach. In health and safety, we need to be far more creative about how the business understands the value it's driving. So it comes down to data. That's the fundamental basics. So if data scares you like it scares me, you need to get out of that head space and be a little bit more curious.
And, you know, it's a really uncomfortable place for me, but I'm finding it really interesting to learn the fundamentals of data science. I wish it came more naturally to me.
- Well, I think actually some listeners are going to feel comforted by that because I think a lot of people have similar discomfort, I would say. So when you're saying data, for our listeners who are like, well, what does he mean by data? Because data really is information, so it could be all kinds of things. What are the types of data that you think could be really rich sources if we can get them dialed in so that we are getting good data basically?
So we're getting something that we can derive insights from.
- Sure. So, I'd say there are four types that I would go for. Two are almost exactly the same: photos and videos. They provide rich visual context. So data can be anything from a small bit of information coming from a sensor, like the temperature or the air quality, to something like a conversation, a recording, or a box in a checklist.
So, data can be effectively any piece of information that gets generated by work and then gets consumed by the business; it's information that we can use to make a decision. So photos and videos provide visual context, something that's often missing when we assess and manage risk.
So we often try to plan work without any visual context. We plan it on a spreadsheet, typically, or we have a PowerPoint deck that might have a few drawings on there. But good, rich, quality photos, and ideally videos, nowadays 360 video or simulation, will give visual context.
So that's one data set, data stream, photos and videos. The second is voice. So you can get the voice file out of the video. In this podcast recording, the service that you're using will record the video stream and the audio streams completely separate to each other because with podcasts we know that there's a visual element, but the audio is the most important.
So the audio stream can be taken out, and there are actually different levels of audio quality. There's 48 kilohertz, 44.1 kilohertz. Each one will provide a different level of richness to the tone and to the ability of a machine to learn from that data. So we can learn sentiment, we can learn voice modulation. I even have my voice cloned so that I can develop fast-cycle content.
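Those sample rates are readable straight from an audio file's header. A minimal sketch using Python's standard `wave` module (the example filename is hypothetical):

```python
import wave

def audio_stats(path):
    """Read basic richness indicators from a WAV file's header:
    sample rate (e.g. 44100 or 48000 Hz), channel count, duration."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        frames = wav.getnframes()
        return {
            "sample_rate_hz": rate,
            "channels": wav.getnchannels(),
            "duration_s": frames / rate,
        }

# e.g. audio_stats("toolbox_talk.wav")  (filename hypothetical)
```

Knowing the sample rate of your recordings is a quick first check on whether a voice dataset carries enough fidelity for downstream analysis.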
Just by having my voice cloned, I can type in my words and it will come out with my voice. So voice and conversational data is extremely rich for health and safety. Without doing too much of a plug, there's a really great set of guys, live in my town, Perth, they're called Risk Talk.
So, they understand the richness of voice when it comes to talking about risk. Rather than writing your risk assessment down, you talk your risk assessment. And they, or their customers, often translate the voice file to text so that you can read it. But listening to the voice file, having the actual voice file itself, that's the rich piece of data.
So there are lots of AI tools that can be used to analyze that voice file, the actual talking dynamics. So photos and videos are one, audio, conversational recordings, is another. And then the last one is sensor data. With sensor data, there has been quite a recent boost in the technology capability of wearable sensor technology and biometric sensors, as well as machine-based sensors.
So condition monitoring sensors, vibration, temperature, running speed, those types of things will be machine-generated data. Then there's system-generated data. So, you know, the number of times people log into Microsoft Teams, the number of times people log in and access certain components of the management system, how often a working-at-heights procedure is downloaded, that's system-generated data.
The user doesn't know that that data's being captured about them, but it is. Not in all cases, by the way. So for those of you who think all of your clicks are being monitored, don't get too alarmed, but it is entirely plausible that that is happening in your business. So yeah, we can generate data ourselves from our bodies: biometric data, heart rate, pulse oximetry, blink rate, pupil dilation.
And then there's the wearable sensor data, which is typically movement and the relative position of the worker to something else, GPS coordinates, for example. So they are the four key data types that we generate that are extremely rich for health and safety. What I haven't included there, which are also data types but I find far less valuable, are things like completed audit reports or completed working-at-heights permit checklists.
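As a sketch of what "relative position of the worker to something else" looks like computationally, the haversine formula gives the distance between two GPS fixes, which a proximity check could threshold. The coordinates below are invented for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def too_close(worker, vehicle, threshold_m=10.0):
    """Flag when a worker's wearable is within threshold_m of a vehicle."""
    return haversine_m(*worker, *vehicle) < threshold_m

# Hypothetical fixes a couple of metres apart:
print(too_close((-31.9523, 115.8613), (-31.95231, 115.86131)))
```

The same distance stream, aggregated over weeks rather than alarmed on per event, feeds the layout and work-design questions discussed earlier.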
If we want to generate great insights from those data, we need to reimagine the checklist from scratch. Just capturing a digital version of a process that we've had in place for decades is highly unlikely to generate data of value. There'll be so much noise in that data, and it takes so much effort to cleanse and get it into a valuable format, that it just isn't worthwhile.
- As you're talking about this, I'm wondering, have you had conversations with safety professionals who maybe have never really thought about the opportunities in technology and then you start talking about sentiment analysis and biometrics? And is there a point at which they say, "This is starting to get creepy?" And the reason I ask that is because it's a trust issue and an ethical issue that people are already talking about with things like Alexa or Echo or yeah.
So, what are some of the ethical issues around that?
- Sure. So first things first, I've already realized that me saying biometric data, words like that, can disenfranchise a large portion of people who are driving along listening to the podcast or having their cup of tea and trying to consume something. There is a large portion of the world for whom just that word, without good quality framing, is too much.
Talking about visual context, talking about all of these things, might just be inaccessible. So, I need to continually sense-check myself to make sure that I'm not doing the opposite of what I try to do, which is to be nice and simple and clear.
But I think there's a balance between trying to nudge people a bit further to think, versus staying at that base level and never getting to the next level. So, I use the social responsibility framework to get health and safety practitioners to look at their management system.
So, there's management system frameworks for health and safety, for psychological health and safety, for quality, for cybersecurity. There's environment, there's also one for social responsibility. And I think it's the best starting point if you are used to the ISO standard frameworks. If you like using that approach in your business with an integrated management system, the social responsibility elements are probably the best glue from a governance perspective to try and bring everything together.
And realistically, it's all about understanding the users and the users of the data. Not only those that generate the data or interact with it, but the flow of that data and where it goes. Because data can leave the organization and be provided to a third-party audit firm; you know, Ernst & Young or PwC might be required to look at board sustainability reporting for the business.
So, the data will leave the bounds of the organization. And we've seen more recently where, you know, people have been putting private company data sets into public large language models and getting in trouble, certainly not realizing that they've done that.
People haven't deliberately gone out and done it; they just haven't understood the gravity of what that is. Just a segue here: cybersecurity is a challenge where IT professionals are trying to deal with people in the business, trying to tell people how to behave around data. Don't do this, don't click on that thing. If you think about the original way of managing health and safety, it was the same: don't move that, don't do this.
So, we are very much in that compliance approach with cybersecurity. And then people are culpable if something happens, and they get in trouble. So there are a lot of really interesting similarities between how we manage cybersecurity risk and how we manage health and safety risk. We're the same people; the same humans are the ones interacting with cyber risk as well as physical and psychological health and safety risk.
But in answer to the question around data and ethics, one thing that I haven't seen in any of the organizations that I've been supporting, when I look at their data, is anyone with a health and safety data policy. So, you know, what is the governance structure around how we acquire, store, process, and move data around, inside and outside our business?
What types of data do we collect? Some organizations are very clear about how they're governed. So German organizations, for example, have German workers council and they have GDPR compliance requirements, which are very, very top of mind for them. So they know.
And there's the EU AI Act, which is in development and starting to be enforced. So they're aware of their obligations. But if you think about it, again, this is very compliance-driven; we're doing it because we have to. I'd say to every single health and safety practitioner listening to this: everyone is generating data in their business.
And if you don't understand what happens with that data, then you're not being responsible about supporting the people in your business to feel comfortable about generating that data. I do a six-part digital safety transformation certificate, and each module has a practical activity. One of the practical activities is to do a worker insight exercise.
It's really just going and understanding frontline work, having a conversation with frontline workers about what's going on with their work, but the exercise is to video it. So, there's a very simple step-by-step approach where they have to ask the site, or tell the site, that they propose to come with a camera to video some frontline worker interactions.
And they are required to observe and document the response from the site. Some might say, no way, you're not allowed that, or you've got to fill out this checklist, or do this waiver form, or make sure that there are no members of the public and no minors.
And so you'll understand quite quickly the initial response, the emotion, and how much people do or don't want a camera in the workplace. And then we move through a series of other things that they need to do. But fundamentally, it's an exercise in understanding trust. And this is where a lot of people, when they hear safety tech and think of my name without knowing me or the way I do my work, assume that I'm just the gadget guy that tries to put VR in your business, which is so far from the truth.
It's quite hilarious. The foundations of trust, psychological safety, the ability to feel like you can raise issues and be heard, the way that people behave in the business, the attitude, the culture, they are mandatory ingredients for successful technology deployment.
And most organizations will come to me at the point that they feel reasonably comfortable with those elements, and now they're trying to step it up. They feel comfortable with the way things are going, but they just can't get past that point.
You know, they've got trust, workers are coming up with great ideas, everybody's feeling comfortable. Clearly, there are always going to be some issues, but they feel pretty good. They've done a lot of work around leadership. They've done a lot of work on management system design. They're now talking about critical controls. They're doing all of the things, they're ticking along, but now they're asking, well, how can I scale this out? How can I make this impactful across my entire business?
And that's the point at which I typically get engaged. Those that engage a little bit earlier ask questions like, people don't like wearing the headset, or people say that they can't log into the app. I call BS on a lot of that because, provided that the user experience for the app is really simple, you know, scan a QR code and log in, people will know how to do that.
There's this excuse that they're unable to do it, or that our older workforce doesn't know how to use technology. That's not true. Nine times out of 10, there's enough basic literacy in the workforce to use simple mobile devices, basic stuff. Sure, as in any population, there'll be a small portion of people that can't, but the vast majority are absolutely fine.
So if I hear something suggesting the technology isn't able to do its job, yes, there could be a problem with selection. It may have been a really bad selection of technology and solution design. But most of the time it's the fact that there's a fundamental issue with trust. So trust is the place to start.
The next is mindset. So, it goes trust, mindset, and then we can start potentially exploring what's happening with the technology side of things.
- Well, you just answered my next question, which was going to be, what's the role of trust in adopting new tech? The tech world, and here I'm talking about software apps and so on, talks a lot about user experience, experience design, and customer success. How do these concepts, maybe with a brief explanation of what they are, relate to safety?
- I'll go back, because I don't always remember... it was customer success and user experience design?
- User experience and experience design, pick whichever one you...
- So I'll probably split it: I'll talk about user experience design and customer success. If you look at the sales cycle of any technology, coming from the sales side, you engage with the business and you're typically trying to understand the marriage between the use case, or what the business need is, and the capability of the technology.
The closer the technology is designed for the specific context of the work, the more delightful the user experience will be for the worker. Generally speaking, in business-to-consumer, when we do something like, again, I'll use Spotify, they put a lot of effort into making sure that the interface between the human and the technology, so the way that the app is presented to you, is intuitive, simple to use, and meets the needs of the use case, which is to find music, generate playlists, those kinds of things.
Arguably, I think the latest iteration of Spotify, the way that they're doing it now with a sort of TikTok style, is not the user interface that I like. I'm actually frustrated with that particular piece of technology more recently. But it shows just how powerful that interface, and the design of that interface, can be. The color scheme, the layout, the size of the buttons, the text, the contrast, those things will determine how delightful the experience is.
And going back to Risk Talk again, they've looked at the user experience of the frontline worker as it relates to doing a risk assessment. They've gone out and they've realized that to do a risk assessment, you stand around with a piece of paper that's pre-filled and you talk through a bunch of things and you write, or someone sits at a desk and writes the whole thing down.
And they looked at that...they observed that experience and they thought, there must be a better way to do this. Why aren't people looking around? Why aren't they talking to each other? So that's how they designed their technology. So that's the user experience design side of things, understanding the needs of the users and then making sure that the design takes into consideration the needs of those users in the context of their work, the workplace, and the world, the environment that they're in.
So that's user experience. In the sales cycle, if you've got a good user experience, you're more than likely going to get a sale at some point because people like it and they're starting to use it. And at this point, you may have talked about one particular use case for the technology, and that might only be helpful for a small portion of the business.
Say 5% of the business are using it regularly, and 95% of the business just don't really understand how these technologies could help them. Enter customer success. So, customer success is a little bit like when you go and purchase an electronic gadget or a washing machine, something reasonably technical and mechanical, from a hardware store or a whitegoods store or the hi-fi store, and there are elements within the onboarding process.
So how do I use the technology? The user manual, the guide in your vehicle. What is the warranty process? What are the simple steps I can do to self-diagnose? They all contribute to customer success. So it's post-sale: I've already bought it and now I need to stay a customer. How do they keep me as a customer?
In the context of health and safety tech, it might be that you have an associated account manager who will check in with you regularly to determine whether there are other ways we could make this technology useful for you. Or it could be continually updating features that keep you engaged, so you go, wow, this really understands the needs of the work that I do, and I'm going to start using these features as well.
So, there are two key things: understanding the way that workers interface with technology and designing for them, and then how you keep people engaged with the technology, particularly if it's a novel technology that requires training, support, handholding, so to speak, to get people comfortable. Customer success is not as important once the technology becomes so usable and so dependable that it just becomes part of the DNA of the organization.
That's the holy grail for technology vendors: they want it to be what in the business world is called sticky. You want it to be sticky so that you can't unstick it from the business. Having said that, with lots of technologies nowadays, one of the best questions you can ask a technology vendor is, what's the process to get you out of my business?
Like, if I need to get you out of the business because it's not the right technology, how easy is that? Because most technologies will try to hook into something that makes you dependent on them, which is good business sense. But sometimes it's important to know, you know, how do I actually get my data out of your solution if I need to move on?
You know, is that easy? Is it an exportable file? Is it something simple that I can do or not? So those types of questions that you ask a vendor can really determine the success for the business.
- The last time we spoke, you said that not enough people are looking at scaling, or at how organizations scale to support change, and that lasting organizational change isn't possible without tech.
Can you elaborate on that a little bit?
- Sure. I think there are two things there, so I'll try and answer the question specifically. A lot of the people that you interview on this podcast are great thinkers, great leaders in this space. Particularly the practitioners rather than the academics, the practitioners that are delivering success in their business. More often than not, they can only percolate out or scale out their impact as far as they, as one human, possibly can.
So, they can only get to a certain number of sites. They can only touch a certain number of people. They can only have their quality version of an interaction or a conversation. They can train others to do it, but the others will never be as impactful because, you know, that person has the secret sauce, there's some magic within that individual. So I observe a lot of organizations that are paralyzed when that person leaves.
So, it's not persistent; there's no sustainability in the approach that that individual, or maybe their small team, was delivering. And now that that person's left, the tenacity, the drive, the vision is lost because they were unable to get more of the business going.
So that's what I call scaling. How do I scale that person? How do I replicate that person's impact? And I truly believe that there is no alternative but to leverage technology. There isn't an option. I talked about voice cloning before, so there are cloning options; you can clone these people.
But ultimately, you're leveraging digital capability to spread the impact that they're trying to have. So that's one thing. The second part about...there was a second part about scaling and I've lost it, but that's...yeah, I've lost what I was going to say.
- Well, that was the second part really, that lasting organizational change isn't possible without tech. And so you're talking about like a human has a...there's a limit to their influence.
- Yes. And the second bit that I was going to say is, how do you actually get a technology to scale? So first, an individual can't have scaled impact without technology, but technology can't be used by all the people in an organization without a journey to get to scale. And what that is, is strategic use of technology.
So often what I see is, going back to the conference expo, people will go and find a toy to play with, or something that they think is going to have an impact. It's funny, there's a LinkedIn post at the moment featuring a procedure, a piece-of-paper procedure with some photos.
Typically, this is called a safe work method statement, or SWMS; that's what they're called in Australian construction anyway. But it might be something like a job method statement of some description. So a picture of the task and then some hazards and risks and steps and controls in the text. And basically, there's a video on LinkedIn at the moment where someone's got their smartphone and they put the smartphone over that piece of paper.
And the little picture that's on the piece of paper turns into a video. So, you put your smartphone over the top and the picture comes to life basically. And it talks to you about the step and the hazards and risks in that step. And I've had four people independently message me and say, we want to do this. And I am supporting one company go through the process of doing that.
And the interesting part is, you've seen something, you immediately think it's going to be valuable to your business, and let's do it. And we'll just do a pilot. A pilot, for those that aren't familiar with the term, not flying the plane, is a small, ring-fenced, timebound, safe place to experiment with technology.
And often businesses now have an entire innovation department set up to experiment. Now, what these little pilot programs and innovation labs do is experiment, and they'll demonstrate the perceived return on investment, the perceived value, will it or won't it work. But getting that pilot scaled out into the organization, so that it's used and becomes part of the way business as usual is done, is very, very difficult if it was never strategic.
So people going, hey, can you do this? Can anybody do this? Can you show me a great technology for this or that? I've just seen this on LinkedIn, can you replicate it in our business? For fun, I like to give it a crack and say, let's experiment and give it a go. That's probably okay. But more important is, what is the strategic reason for you wanting to do this?
So generally speaking, that will be we're unable to get knowledge transfer around critical steps at scale for our subcontractors because they're only in the business for short periods of time. So we want to show them short format videos of critical steps in the procedure. That's great.
That's strategic. So that lets me know, and I'll ask those questions: How many users? Where? What format? Do they even have mobile phones? Do they have an internet connection? Do they have access?
Do you really want to continue to do paper, or is there a sustainability strategy where you want to remove paper? So does it have to be done off a tablet? And are you going to be doing Apple or Android, and so on? That is strategic, because it determines whether or not it'll be successful at scale in the business, or whether it'll just be another pilot in the pilot crevice of doom, where all technology goes to have fun experimenting with each other and nothing actually impacts work.
So strategy, a digitally enabled safety strategy, means a strong link between the technology strategy of the business, the business strategy (the vision and values of the organization and its actual execution), and the health and safety strategy. They need to intertwine into a digitally enabled safety strategy if we want to get technology from pilot to scale.
- Yeah. I think basically the why is the first question you ask. And if it's because it's cool, that might be a clue that it's not a strategic initiative. So, just a few more questions on the topic. What do you think is the cost of not adopting new tech in the safety industry?
- I think there's a big cost as an individual. I think there's a mental health cost. I think there will be a feeling of drowning. You're just never getting ahead of the game. You're never feeling like you're genuinely doing anything different because... Actually I never, ever like to use the word different and safety in the same sentence, especially if you use the word safety and then the word differently straight after.
Never feeling like you're on a real trajectory of growth. You see the statistics, you know, we're not actually getting any better; we're just sort of babysitting the way stuff's always happened. If you don't feel comfortable or confident in your ability to converse around technologies, and it makes you uncomfortable, I genuinely believe there's a mental health impact in relation to that.
So either an imposter syndrome thing, or I don't feel like I contribute, or I might leave the industry because, you know, I just can't compete with these new ideas. There's an element of that. I also think there's a continual disenfranchisement of the health and safety role if that role can't contribute to technology-enabled design of work.
And I think that loops back to feeling isolated, feeling like you can't make an impact in the business. So I think it's an imperative, really, so that you feel comfortable in the work that you do, that you feel like you're contributing meaningfully and at scale, as [inaudible 00:54:43] a lot in this conversation.
I think they're the outcomes of not contributing to this discussion, and at least preparing your mindset that you'll give it a go.
- Yeah. I mean, the future and the tech is coming, whether we accept it or not.
- Yeah, I mean, it's here. We're all using it.
- Yeah. It's here. It's here. And we've... Yeah.
- It's like this new normal thing. You know, name me a job that doesn't use technology in some format, either to plan it, to report on it, or to execute it. There aren't many, if any. I'm sure I could probably find some. And then, what is technology? Because the crux of the definition of technology is the application of the scientific method to improve the way we work and society.
That's the kind of Encyclopedia Britannica-type definition. So it's about applying scientific knowledge to the betterment of society. That's what the definition of technology is. So we're using technology in all forms of what we do, even if it's to consume research or to experiment. We're using it for everything.
So being really purposeful and strategic about that, that is absolute non-negotiable in my book.
- It's not the last question because I have a few that I ask every guest, but I'm curious, if you could wave a magic wand, talk about new technology, and magically convince global safety professionals of one thing in relation to this topic, what would it be?
- If I could wave a magic wand and get every safety professional globally to do something, it would be simply to be curious. I don't think there's any one particular action apart from having that mindset of curiosity and innovation. So be open. That's my big ask: an innovation mindset, a growth mindset.
Be open to do things that make you a little bit uncomfortable. If you can get there and be curious, creative, that unlocks everything. And it sounds really simple, and you can be really purposeful about how you do this. You can do things like walk a different route home.
Rather than look at the ground when you're walking home, look at the tops of the buildings and see the shape that they make, the sign that you never saw before. Eat something different. Go to the supermarket and buy something deliberately different that you've never had before. What does it taste like? How does it make you feel? All of those things, that micro experimentation, deliberately being curious, being creative, draw stuff, play games, have fun.
You know, go out of your way to be uncomfortable, being courageous, being bold, thinking fast, being okay to fail within constraints. It's surprising how many people just don't go there. It's just, get on with it, I'm too busy, I've got other stuff to do. Deliberately being conscious about that is... I can appreciate how this sounds to some people, but it's been so life-changing for me.
The mindset shift of getting out of that constrained role in safety, which I was good at, and into a tech role where I was really uncomfortable, and then realizing just how incredibly powerful the mindset side of thinking is.
- Well, and I think that's also a hopeful thing for anyone who's listening and maybe feeling like, oh, this would be an uncomfortable area. Well, it was for you too, right? So, that's something to look at. Now, the questions I ask everyone. Where would you focus soft skill, human skill training for tomorrow's safety professionals if you were, say, developing a curriculum? What kind of skill?
Would it be curiosity, perhaps?
- You mean the course I've created? So I think the soft skills or, I don't know what you call them, but creativity, curiosity, thinking big, so being bold, thinking fast or acting fast, and being courageous, those kind of things.
They're part of what is considered an innovation mindset. So, Carol Dweck has a book called "Mindset," which is great for looking at fixed mindset versus growth mindset. There's a different form of mindset, the innovation mindset, which is closely linked with the agile mindset.
Agile, or agile mindset, if you're from the States. So having agility in the way that you think and the way that you approach things. Stephen Denning does a lot of work on agile, and he does a lot of framing around mindset. So that's where I'd start. There's also a need to be really curious with your questioning.
I don't have a go-to resource; I'm sure there are some folks out there that have one. Good quality questioning is a mandatory skill in health and safety professional practice regardless. I think a lot of my questioning has been framed around trial and error. A lot of the way that I learned my questioning was as a physiotherapist.
So in my training as a physio, we spent a lot of time asking questions beyond the obvious. So, how did you injure yourself versus what types of activities do you find uncomfortable? You know, learn questioning as a skill and then you validate through observations.
So asking good quality questions and then really acutely observing what's going on and listening. So listening, asking good quality questions, but doing it in a curious way in a creative way, and so applying all of those things together. But that takes a lot of practice.
The bit that I struggle with the most is my body language cues when I think people are talking bullshit. My face and my body just can't hide the fact that I'm asking a curious question but think that the person's talking absolute rubbish back to me. I can't help but show that.
So that's a skill I'm still trying to...
- Your eyes have an automatic roll.
- Yeah, something like that. I need to get my poker face happening. So yeah. There are some softer skill side of things that I think people should start with.
- Okay. And if you could go, and maybe that'll be the same answer, if you could go back in time to the beginning of your safety career, let's say, what's one piece of advice that you might give to yourself? Don't roll your eyes. No, I'm joking.
- It's probably play the long game. I think I was so emotionally invested in every interaction that I had that I was probably quite abrasive, with knee-jerk reactions to a lot of things, because I was so passionate, I guess, about what I was doing. So that came out. It's in my personality, but I think it's not always helpful in the workplace context to be quite so passionate about what you do.
I think there's an element of, like, the way knows the way, a little bit of go with the flow. If I'd had the maturity to understand what that meant, and I don't think I could have back then, I would've had a far more impactful career early on, had I been less emotional about how I responded to things that maybe didn't sit well with me.
So, yeah, I call it the long game, or seeing beyond, you know, knowing which battles are worth fighting and which ones are worth letting go. That's something that, if I [inaudible 01:03:00], but that comes with so many other elements of maturity that you could say you should do it, and yet I don't know how helpful that advice would've been for me back then either.
- Yeah, I had a guest, I can't remember now, who said I don't know what I would've said, but I know I wouldn't have listened to myself, so it doesn't much matter.
- Pretty much. Yeah, that would've been me too, I think.
- So, are there any places that you could send listeners to learn more about some of the topics in our discussion? So they could be websites or projects, books. Is there some resources you could suggest?
- Yeah, there are three that I generally suggest to everybody, probably four, I guess you could say. One is on the industrial revolutions: "The Fourth Industrial Revolution" by Klaus Schwab, the World Economic Forum founder. It's a little bit older now, but it still works as an introduction to Industry 4.0 and that fourth industrial revolution.
A very brief history of the first three industrial revolutions and then what's coming. And then Eric Redmond, he's actually the technology guy at Nike, or Nike, depending on which way you say it. He wrote a book called "Deep Tech," which looks more at trends. And if you ever get exposed to any of the work that I do in this space, I talk about leveraging the trend, not leveraging the technology.
So leverage the trend that the technology is riding on. He talks about deep technology trends; the book's called "Deep Tech" by Eric Redmond. So those two are foundational reading. But a really good entry-level, best place to start: you know the World Day for Safety and Health at Work, which happens on the 28th of April every year.
It wasn't that long ago at the time of recording this podcast in 2023. This year, I think the topic was how workplace health and safety is a fundamental right, a worker's right, or something like that. In 2019, the year pre-COVID, it was all about the future of work and technology. And there's a compilation of thought pieces, some incredible thought pieces, and an introduction to the different technologies that are shaping the future of health and safety and the future of work.
That's from the International Labour Organization, on their website. So those three are foundational. In terms of a safety institution that is doing the most thought leadership and collating thinkers from around the world, I was fortunate enough to keynote their event at the start of the year, the Future of EHS Conference, which is part of a series the National Safety Council runs. And I mean, I really don't like the name of the series.
It's called Work to Zero, but just take away your assumptions and the loaded nature of what you think about that name. The Work to Zero component that they have on their website, the National Safety Council's Work to Zero initiative, is dedicated to exploring how technology is going to improve health and safety. So they have some really great resources.
They've got a digital readiness roadmap, which is pretty good. It's not as good as mine. And there's some good entry-level content on there. The Safetytech Accelerator is a great Lloyd's Register-funded tech accelerator. They're doing some interesting stuff around regulatory sandboxing and how you work with the regulator, the HSE in the UK, to look at the regulatory environment and the impact it has on being able to use emerging tech.
You know, Lloyd's has always been a long-standing institution in shipping, so they do a lot of work around maritime safety tech, but they also do other safety tech, mostly focused on the UK, though they do have a global remit. We haven't seen a huge amount of their work outside of the UK at this point, but that's a good resource.
But aside from those resources, which are fine, there really hasn't been a good place for safety professionals and safety-adjacent folks to go to learn about the role that technology has in improving the design, experience, and safety of work. Most of it has been quite vendor-driven, you know, vendors putting out white papers and stuff like that.
And the magazines do some articles here and there because it's in vogue. So, I put together Safety Tech News, which I wish was a more regular collation of articles on LinkedIn. It's all me, so it happens when I get around to doing it. It's typically once every two to four weeks at the moment. It was a lot more regular.
So that's a space where I curate all of the latest articles. I use a machine learning algorithm to scour 50,000 resources each period. And then I train Leo, my algorithm, to serve up the best content, and then I make commentary on that and try and thread it together so it makes sense.
But in terms of actually where to get started, boosting digital literacy and how it works for a health and safety professional, that's the reason why I started the academy and why I started my business: I felt that there isn't a place beyond a little bit of light reading, a couple of books and some articles here and there.
How do you operationalize that? Where's the coaching service? Where's the guidance? Where's the safe place to ask agnostic questions to help drive what you're trying to achieve? You know, that's why I figured, you know, the time is now to get out of steady-state employment and be of service to the community.
- Well, good. I mean, someone has to start the conversations. And yeah, vendor-driven conversations have agendas really. They have to. It's the way they work.
- Yeah, and I've been on both sides of the fence. So one of the things I do actually is help organizations have conversations with vendors. What kind of questions should I ask? Can you help me document my requirements? Can you go and do a request for tender? Can you help assess this technology? You know, with those things, where do you go?
There's no book on that. And it can be a really draining and expensive process, both in terms of time, effort, mental effort, cost. And, you know, there's an organization I was loosely supporting the other day who was spending $50 million on a safety management system software.
Unbelievable. Not with my advice. So, it's incredible, you know, the waste that we can see out there in the market and the misdirection of funds and resources.
- Well, I'm glad you're stepping in with a little bit of digital literacy for safety folks. So our listeners can find you on LinkedIn, on your website, I'm sure?
- I wish they could find me on my website, but just like all good innovation, I'm building the plane while I'm flying it. So at the time of recording this webinar...oh, this podcast, I have web presence in development. Best place to catch me is on LinkedIn.
And, you know, with the academy itself, there's a variety of different courses that I'm currently running. I don't have any public courses at this point; that's definitely the intention. But organizations are doing them in-house, so they're all branded for their organizations. I think it's important to speak the language of the organization, and the look and feel needs to be congruent and consistent.
So a lot of content there. And then I'll be setting up a space within a place called the Safety Exchange. It's yet to be formally launched. It'll be launching hopefully sometime in mid to...yeah, probably mid-2023. And that will be a place where I will be recording my approach to safety innovation at the Australian Robotics and Automation Precinct.
So when I test new technologies, or I'm coming up with a vision for the strategy, or I'm coming up with a policy, all of the policies are videoed, everything's 100% digital. I'm doing connectivity testing and network testing, and all of that I'll be filming and documenting, so it'll almost be like a documentary of how to inject safety tech, done at the Automation Precinct.
And that will be available on the Safety Exchange once it's launched, as you'll see. But the best place is to follow my LinkedIn. And don't be a stranger, say hello. I'm pretty good at responding.
- That's all the time we have for today. Thanks to our listeners now in nearly 90 countries for tuning in. And thanks for the great conversation, Cameron.
- Thank you. Thanks, Mary.
- I'd like to give a shout out to the "Safety Labs by Slice" team, whose intelligence is anything but artificial. Bye for now.