Are people the biggest blocker to AI value?

Sarah Driesmans
December 17, 2025
4 min read

Few statements provoke as much debate in boardrooms as this one:

“Our people are the biggest blocker to value from AI.”

On the surface, it sounds uncomfortable. Even accusatory. And yet, it surfaces repeatedly in conversations about stalled pilots, underwhelming returns, and AI initiatives that never quite escape the proof-of-concept stage.

In a recent debate between Greg Freeman, CEO and Founder of Data Literacy Academy, Kyle Winterbottom, CEO and Founder of Orbition Group, and Dr. Alex Leathard, CEO and Founder of Hecaton Consulting, this proposition was put under deliberate strain.

Greg and Kyle argued for the motion: that people, and more specifically leadership behaviours, are the primary constraint on AI value.

Alex argued against it: that systems, governance, risk, procurement, and policy failures are the real brakes on progress.

What followed was not a binary answer, but a far more useful one: a diagnosis of where AI initiatives fail, why they stall, and what organisations misunderstand about the relationship between people, systems, and value creation.

The “people problem” is rarely about skills

One of the most persistent myths in AI adoption is that the blocker is capability: that people simply don’t have the technical skills to use AI effectively.

In practice, that explanation rarely holds up.

Across industries, most employees already interact with AI in some form. Language models, automation tools, analytics platforms, and decision support systems are no longer exotic. What is missing is not exposure, but permission, confidence, and clarity.

Kyle Winterbottom’s experience advising executive teams reinforces this point. When leaders express frustration about adoption, they often describe downstream behaviours such as resistance, hesitation, and inconsistent use, without interrogating the conditions they themselves have created. Adoption does not necessarily fail because people refuse to change; it fails because leaders have not made change safe, meaningful, or worthwhile.

In that sense, the “people blocker” is rarely the workforce. It is more often than not the environment they operate within.

Is the people blocker actually a leadership blocker?

This is where the debate sharpens.

If people are hesitant, risk-averse, or disengaged with AI, it is usually because leadership has failed to do at least one of the following:

  • Role-model meaningful adoption themselves
  • Articulate a clear “what’s in it for me” beyond generic efficiency claims
  • Signal that experimentation is expected, not exceptional
  • Create guardrails that enable trust, rather than blanket restrictions

Kyle made a critical observation: leaders cannot communicate the value of AI if they do not understand it themselves. And understanding here does not mean technical fluency. It means being able to explain how AI changes decisions, workflows, and outcomes, not just how it speeds up tasks.

Greg pushed this further. In many organisations, AI is still treated as a reporting or tooling layer, something that demonstrates activity rather than solves core problems. When leadership frames AI this way, it is unsurprising that teams mirror the same shallow engagement.

Culture follows behaviour, and behaviour follows leadership signals.

The counterargument: people aren’t the real constraint

Alex offered a deliberate challenge to this framing.

In his view, blaming people is often a convenient oversimplification. In reality, organisations are constrained by structural and systemic issues that individuals cannot solve:

  • Opaque governance and procurement processes
  • Poorly designed operating models
  • Unclear accountability for risk and data ownership
  • Technologies that are not transparent, auditable, or trustworthy

Alex’s experience working with organisations led by technically fluent executives highlights the difference. Leaders who understand the breadth of AI, beyond today’s popular tools, are better able to communicate its role, assess risk intelligently, and embed it into decision-making. Where that understanding is absent, organisations default to defensive postures: blanket bans, excessive controls, or paralysis by policy.

From his perspective, people are not blocking value; systems are.

Risk aversion versus capability gaps is a false choice

One of the most productive moments in the debate came when the group rejected the idea that risk aversion and capability gaps are separate problems.

They are not.

They form a feedback loop.

People who do not understand a technology are more likely to fear it. People who fear it are less likely to use it. And people who never use it never build capability. The gap persists, not because of laziness or resistance, but because organisations fail to create psychological safety.

Alex described organisations with low tolerance for failure, where experimentation is discouraged unless success is guaranteed. In such environments, innovation becomes performative: isolated teams are given permission to “experiment”, while the rest of the organisation waits, becoming disengaged and unconvinced.

Greg reframed this as a strategic issue. At senior levels, risk aversion often stems from the same root cause: lack of understanding. Leaders cannot trade off risk intelligently if they do not trust what they are approving. When trust is missing, restriction feels safer than progress.

The result is predictable: innovation slows, adoption stalls and AI becomes something to be managed, not leveraged.

Data quality and governance: technology excuse or human reality?

No discussion of AI adoption is complete without addressing data quality and governance, topics often cited as massive blockers.

The debate surfaced an uncomfortable truth: most data quality problems are not technical.

They are behavioural.

Data is created by people. Entered by people. Maintained by people. And yet, organisations routinely treat quality issues as purely downstream system failures rather than upstream human ones.

Kyle highlighted a familiar pattern: governance processes that are so complex they discourage compliance, leading people to work around them. Alex pointed to the tension between operational teams who know the data is flawed but lack resources to fix it, and executives who assume that signing off a policy is the same as improving reality.

Greg brought the loop full circle. When organisations blame data quality for poor AI outcomes, they often ignore the fact that the same behaviours that undermine data today will undermine AI tomorrow. Governance is not a control problem. It is a culture and incentive problem.

So… are people the biggest blocker?

The debate never resolved into a neat answer, and that is precisely the point.

AI value is blocked by people and systems and leadership and operating models. But what emerged clearly is this:

  • Systems rarely change themselves
  • Governance does not enforce culture
  • Technology does not create trust

People do.

More specifically, leaders do, through the behaviours they model, the risks they tolerate, the incentives they design, and the environments they create.

AI will not deliver value simply because it is deployed. It delivers value when organisations are willing to change how decisions are made, how failure is treated, and how responsibility is shared.

Blaming people is unhelpful; ignoring people is fatal. The organisations that understand that distinction will be the ones that move from AI activity to AI impact.
