Key Takeaways
Boardrooms have never moved faster to adopt a technology so few people know how to use. Half your workforce has had fewer than five hours of AI training, and 66% of workers rely on AI output without evaluating its accuracy. If you’re one of the 88% of organisations with an AI strategy, how confident are you that you sit in the 4%?
It's no surprise then that the AI skills gap is now the single biggest barrier to AI integration. That gap is not a technology problem. It is not a change management problem. It is, in almost every case, a literacy problem, and it is playing out inside organisations that, on paper, have already accounted for it.
The strategy exists. The execution does not. And the distance between them is measured in skills, not spend.
The cost is quantifiable
Workers currently spend 2.8 hours every day on data-related tasks. A third of that time is lost to inefficiency: the equivalent of 27 working days per employee, per year. Across the UK economy, that amounts to £61.94 billion annually. These are not abstract figures. They represent decisions made on flawed data, outputs accepted without scrutiny, and AI investments that return noise rather than signal.
The risk is not limited to wasted time. When AI outputs go unchecked, the consequences compound. A single high-profile hallucination during the live demonstration of Google’s Bard wiped roughly $100 billion from one of the world's most valuable companies in a single day.
At a smaller scale, the same dynamic runs through every organisation deploying AI without the critical thinking infrastructure to catch it. Harvard Business Review research found that employees encounter AI-generated content that appears polished but lacks substance at an invisible cost of $186 per employee, per month. Under time pressure, those outputs get used. Decisions follow.
66% of workers rely on AI output without evaluating its accuracy. 56% make mistakes as a result.
Training is not the same as literacy
Nearly every organisation (99%) reports having some approach to developing AI skills. Most rely on mentorship and self-directed learning. Where structured training exists, almost half of executives say it reaches fewer than 10% of the workforce. That is not a literacy programme. That is a small specialist capability sitting inside a largely unprepared organisation.
Genuine AI literacy comprises four capabilities: understanding how AI systems work, including their limitations; interacting with tools well enough to get reliable outputs; evaluating those outputs critically for accuracy, bias and hallucination; and operating within ethical, regulatory and organisational guardrails. As of February 2025, the EU AI Act makes the last of these a compliance obligation, not an aspiration.
There is also a sequencing argument that most strategies are missing entirely. AI literacy without data literacy is a surface-level intervention. If your people cannot interrogate where data came from, how it was collected, or what biases it may carry, they cannot meaningfully evaluate what AI does with it. Data literacy is what allows people to question the input. AI literacy is what allows them to question the output. An AI strategy that invests in the latter without securing the former is building on unstable ground.
Organisations with mature AI literacy programmes are twice as likely to report significant ROI from their AI investments.
Literacy is a leadership problem
Effective programmes do not start with training. They start with an accurate picture of where capability actually sits, by role, by function, by seniority, and build from there. The requirements are not uniform. What a risk professional needs from AI literacy differs materially from what a frontline operations team needs. A generic module serves neither.
IBM's 2025 research across EMEA identified cultivating AI literacy from board level to entry level as one of five priorities for accelerating ROI. Of organisations that had done so, 66% were already reporting significant productivity gains. Further research found that 95% of executives rank critical thinking as equally important as technical expertise in an AI-enabled organisation. Nearly half say managers hold minimal accountability for developing it.
That accountability gap is where most programmes stall. Not in design. Not in intent. In the absence of anyone whose job it is to close the distance between strategy and capability.
The window is still open
The organisations that move fastest from here will not be the ones that spent the most on tools. They will be the ones that built literacy into every level, starting with an honest assessment of where capability actually sits, not where the strategy assumes it does.
At Data Literacy Academy, we work with enterprise organisations that are serious about closing that gap. Before a single learning budget is allocated, we build an honest picture of current capability, then design programmes from there: by role, by function, by seniority. The window for competitive advantage is still open, but it will not stay open indefinitely.
Unlock the power of your data & AI
Speak with us to learn how you can embed org-wide data & AI literacy today.