The more time I spend in incident investigation, risk, and cultural safety work, the more I notice how the same dynamics play out across very different sectors. Food manufacturing, universities, aged care, energy—they couldn’t be more different on the surface. But underneath, the challenges rhyme.
The problems are rarely about a missing procedure or a poorly designed control. They’re about pressure and silence. They’re about what gets normalised, what gets tolerated, and what doesn’t get said until after the fact.
Working with AI in this space has been a revelation—not because it replaces judgment, but because it sharpens it. In incident investigations, for example, the most useful insights come not from the machinery or the sequence of events, but from the cultural signals: Was there pressure to hurry? Did anyone feel it was unsafe to speak up? Was fatigue quietly shaping behaviour? With SPiRA prompts woven into the process, investigations stop being a hunt for blame and become a structured way of learning what was really at play.
In universities, the challenge looks different but carries the same root. They sit on mountains of risk data—registers, spreadsheets, audits—but much of it is static. Through RiskSense, I’ve seen what happens when that data becomes live: when the real question is no longer “is this risk on the register?” but “how do we know the controls are still working, here and now?” That shift changes risk from paperwork to conversation.
In aged care, the pattern is even more human. What keeps surfacing isn’t just procedural gaps—it’s the emotional weight staff are carrying. Investigations reveal the silence of fatigue, avoidance, or unresolved conflict. AI helps here by holding up a mirror to those patterns, surfacing what teams struggle to name in the moment.
And in energy, where hazards are ever-present, the focus is on clarity of ownership. A simple check-in like "What uncertainty are you holding right now?" often unlocks a depth of reflection no matrix or dashboard can replicate.
Across all these settings, the same lesson emerges: risk isn’t data, it’s lived. It breathes in the relationships, the pressures, and the silences of everyday work. What AI offers in this space is not automation, but augmentation—a way of noticing drift earlier, making silence visible, and giving organisations a memory they often lack when incidents are treated as one-offs.
The real frontier isn’t whether we can prevent every incident—we can’t. It’s whether we’ve built the kinds of systems and conversations that detect drift before it hardens into harm. That is the line between compliance and genuine resilience.
So here’s the question I keep coming back to—and the one I’d leave with you:
In your own organisation, what risks are already present but remain unspoken? And what would it take to bring them to the surface, before they speak for themselves?
At ACN, we’ve seen what happens when those unspoken risks do come to the surface: leaders save time, organisations reduce costs, and confidence grows. In some cases, incident investigations have been completed 85% faster, staff turnover has fallen by more than 30%, and insurance costs have dropped by double digits. The pattern is clear—when silence gives way to dialogue, resilience becomes measurable.
Steve Lang
Co-Founder and Managing Director