27 July 2025
Neil Jennings
Using a small word to ask better questions and reduce AI exposure...
The Problem with “If Only”
We’ve all said it.
🚩 “If only we’d seen that coming…”
🚩 “If only we’d done more due diligence…”
🚩 “If only we’d slowed down before integrating that AI tool...”
Hindsight’s great at showing us what we missed. It tells us what we should have done, once it’s too late to do anything about it.
The trouble is, if only usually arrives after the damage is done. After the contract falls apart, the data slips through the cracks, or the tool goes live without the right checks in place.
And that’s what makes “if” so interesting.
Alan Partridge, Kipling, and X, Y, Z
There’s a moment in I’m Alan Partridge (a cult-favourite British comedy) where Alan tries to summarise Kipling’s poem If.
“If you do X, Y and Z, Bob’s your uncle.”
It’s ridiculous, obviously. But it’s also... not wrong. There’s truth in it: if you understand what X, Y and Z are, and if you act accordingly, with some clarity about your intended outcome, then yes, things probably will go to plan.
The key, as ever, is in the detail.
Hindsight Is a Cheat Code
When we say “if only,” we’re not doing risk management; we’re doing an investigation. We already know that something went wrong. The likelihood is 100%. It’s no longer a risk, it’s a real event.
Hindsight lets us cheat with probability. We suddenly become very good at identifying root causes... but only after the fact. And so the essence of risk management (i.e. anticipating the likelihood and impact of bad outcomes) gets bypassed completely.
Santayana said that “Those who cannot remember the past are condemned to repeat it.” True, but in AI risk and governance, we often don’t get the luxury of remembering.
At the corporate level, we rush. Maybe we skip questions. Maybe we don’t realise something was risky until it’s already played out (though we probably should have). And the pace is accelerating: the pressure to “do something with AI” across different business units often leads to something happening. Just not necessarily the right thing.
That’s why “if only” becomes such a common refrain. And hindsight just tells us what we missed.
Turning “If” Into a Tool
But “if” doesn’t have to be reserved for hindsight.
Used properly, it’s a planning tool. A gap analysis framework. A conversation starter. It invites teams to ask the right questions before something goes wrong, across legal, compliance, ops, product, HR, and beyond. ‘If’ becomes a kind of structured foresight. A way to surface hidden exposure and set boundaries before you’re cleaning up a mess.
So how do we apply this in practice?
Common AI ‘If Onlys’
Let’s look at a few examples from the AI world:
🟠 If only we’d mapped our supply chain before this latest AI integration…
🟠 If only we’d defined our risk tolerance, we wouldn’t be spinning in circles…
🟠 If only we’d done a basic risk review before launching that tool, we wouldn’t be stuck in legal back-and-forths…
These aren’t edge cases. They’re common, repeatable failures. The kind that come from speed without structure. The opposite of risk-awareness.
Proactive ‘Ifs’ to Use Now
Instead of waiting for regret, here are a few forward-looking ‘ifs’ that might help guide better AI governance decisions:
⁉️ If someone is building AI for you, make sure it’s fit for purpose, and that you're properly protected if it’s not.
⁉️ If you're using AI internally, no one should ever be able to say, “I didn’t know we weren’t allowed to do that.”
⁉️ If you can’t explain the value of a tool, pause and figure it out. Otherwise, you risk wasting time, money, and goodwill.
⁉️ If you haven't defined acceptable risk (or your non-negotiables), hit the brakes and get aligned across executive, product and other relevant teams.
⁉️ If no single person can clearly describe what AI tools are in use, or what data they’re using, you’re already exposed.
⁉️ If your privacy or compliance foundations are weak, your AI governance framework is likely insufficient by default.
⁉️ If tools are spreading across departments without central oversight, you need to ask: What’s the mandate? Who owns this?
⁉️ If your contract templates or client terms don’t mention AI, that silence might bite you later.
⁉️ If AI risk management or governance means a few Slack messages and a Notion page, it’s time to level up.
⁉️ If your vendors are using AI and you haven’t asked how or why, that risk is now your risk too, and you at least need to understand it, if not control it.
⁉️ If your internal policies sound great but live in a PDF no one reads, you’re relying on hope, not governance.
⁉️ If your AI plans were drawn up without legal, privacy, or frontline ops involved… start again, properly this time.
The Real Work of AI Governance
AI governance isn’t a Friday afternoon job for a Monday launch. It’s cross-functional, messy, iterative, and political. But it’s also essential. Recognising and controlling risk in AI is about understanding the intended end results, knowing who needs to be involved in what conversations, and having a solid grasp of risk appetite at the highest level. Without this, any governance or risk management when using or building AI is just guesswork.
Governance done well is the thing that lets innovation scale, without tripping over ethics, law, or operational chaos.
👉 Yes, absolutely lean into AI. If you don’t, that might be the biggest risk of all.
‼️ Just don’t do it blindly.
Make Risk-Aware Decisions.
Because “if” is a small word, but left unanswered, it causes big problems.
At GLF Strategic Compliance, we bring clarity to complexity, and help businesses ask the right questions, identify real exposure in AI, and create risk-aware AI governance frameworks.
Reputation takes years to build, but only a moment to destroy.
Don’t let ‘if only’ be the reason it happens. Reach out today to ask about our AI Risk & Governance Baseline or our AI Governance Program Builder packages!
This content is informational only and not legal advice. GLF is not a law firm regulated by the SRA.
Get in touch to talk about AI governance, compliance and risk management solutions!