Many board conversations about AI still start with: “What should we do about AI?”
But that’s not the most important question boards should be asking. The real question is: “Do we actually understand how AI is being used across our organisation and the risks that creates?”
Because in most organisations I speak to, the honest answer is: Not fully.
AI is already being used. Sometimes explicitly. Often not. And that creates a problem for boards.
This is not a technology problem. It is a governance and accountability problem.
A gap is emerging:
- AI adoption is accelerating
- Governance, oversight and visibility are not keeping pace
Industry evidence backs this up. Many organisations are already using AI, but a significant number still lack a clear policy, strategy or governance framework, leaving boards exposed.
That is not because boards are doing something wrong. It is because they don’t yet have a clear line of sight.
Very few boards can confidently answer:
- Where is AI being used?
- What data is it using — and is it safe?
- What decisions is it influencing?
- What risks are we accepting (knowingly or not)?
- Who is accountable if something goes wrong?
If those answers are not clear, then neither is the risk.
And this is where the conversation needs to shift.
AI isn’t a future scenario to plan for.
It’s a present reality to govern.
Boards can’t wait for regulation to catch up.
And they can’t delegate this entirely to IT.
Because the real question is not: “Are we using AI?”
It’s: “Are we in control of how AI is being used?”
The organisations that get this right won’t be the ones experimenting the most.
They’ll be the ones that:
- See it – understand where AI is already in play
- Say it – are explicit about risks, intent and accountability
- Sort it – put governance, guardrails and direction in place
More boards are starting to ask these questions now, often prompted by a sense that AI is already further embedded than they expected.
How is this showing up in your organisation or board discussions?
