Operational experience. Applied through AI.

Bringing systems thinking from complex infrastructure projects into the world of AI-enabled ventures.

Founder

I've spent the last 20 years operationalising technology in environments where the stakes are real and the stakeholders are many. I've worked across Australia, New Zealand, South East Asia, and the Middle East, coordinating hundreds of moving parts across complex governance structures, industrial constraints, and live operational environments. Much of that work was systems integration in the infrastructure and education sectors, where the gap between "the technology works" and "the organisation can use it" is where most projects fail.

Alongside that, I built Digital Pulse Records — a creative technology venture where I learned how to think in layers, use constraints as design inputs, and build systems that serve people who think differently. Music production and education technology aren't as far apart as they sound. Both require you to understand the user, respect the complexity, and deliver something that actually works in practice.

AI is where all of this converges. It's the first technology shift that rewards operational thinking as much as technical skill. Most organisations building with AI have the engineering. What they're missing is someone who knows how to make technology stick inside complex, messy, real-world operations — and who's done it before AI was part of the conversation. That's what Digital Pulse brings.

How we think

Three principles

Systems over features

Good products are built on good systems. Get the operating model right and the features follow. Most failures aren't about engineering — they're about organisations that aren't ready to use what they've built.

Proof over promises

Build the smallest thing that validates the idea. Then build the next smallest thing. Every proof resets what's possible. Skip the validation, and you're building castles on sand.

Clarity over complexity

If you can't explain the system simply, the system isn't ready. Complexity creeps in quietly. The job is to keep it out, or cut it down before it cuts you down.

Why now

The AI gap no one talks about

Most organisations have tried AI. They've run the demos, built the proofs of concept, maybe even shipped something. And then they hit the same wall: the results aren't consistent. Run the same prompt twice, get different outputs. Run it next week, get something that contradicts last week. There's no reproducibility — just hope that this time the output is good enough.

Worse, the process is a black box. Something goes in, something comes out, and nobody can tell you exactly what happened in between. There's no audit trail, no traceability, no way to explain to a stakeholder — or a regulator — why the AI produced what it did. When things go wrong, you can't diagnose it. You start over.

And underneath all of it: there's no validation layer. AI generates, a human reviews manually, and everyone hopes the errors get caught. That doesn't scale. It doesn't hold up under pressure. And it certainly doesn't work when you need to produce hundreds of consistent, quality-checked outputs across a real operation.

This isn't an engineering problem. It's a systems problem. And systems problems need people who've spent their careers solving them.

Let's work together.

Get in touch