Most organisations know they have an AI skills gap. Few have an accurate picture of where it actually sits, how deep it runs, or what it would take to close it. This guide sets out a practical approach for senior leaders who want to move from assumption to evidence before committing learning budgets or designing programmes.
WHY THIS MATTERS
Deloitte's 2026 State of AI in the Enterprise report found that a lack of worker skills is the single biggest barrier to AI integration, identified as such by more than 3,200 senior leaders across 24 countries. Yet fewer than half of those organisations were doing anything significant about it. The gap between recognising the problem and actually diagnosing it is where most organisations are stuck.
Step 1: Distinguish capability from activity
The first thing to get clear is what you are actually measuring. Many organisations conflate AI activity with AI capability. The fact that tools have been deployed, that some teams are using them, or that a training module has been completed does not tell you whether people can use AI well.
Genuine AI literacy comprises four distinct capabilities:
- Conceptual understanding: knowing how AI systems work, what they are optimised for, and where they fail
- Interaction capability: engaging with tools effectively enough to produce reliable, useful outputs
- Critical evaluation: assessing outputs for accuracy, bias and hallucination before acting on them
- Ethical and regulatory awareness: understanding the guardrails that govern AI use within the organisation and under applicable law
An assessment that measures only tool usage or training completion will give you an activity picture. You need a capability picture. The distinction shapes everything that follows.
Step 2: Map capability requirements by role before measuring anything
The most common mistake in AI literacy assessments is applying a single standard across the organisation. Capability requirements are not uniform. What a risk or compliance professional needs from AI literacy differs materially from what a frontline operations team needs. Measuring both against the same benchmark produces data that is neither accurate nor actionable.
Before any assessment takes place, map the capability requirements across your organisation by role and function. A useful starting structure is four broad persona groups, sketched in code at the end of this step:
- Decision-makers and leaders: need to evaluate AI-generated insight, govern AI use, and understand strategic risk
- Analysts and knowledge workers: need to interact with AI tools, evaluate outputs critically, and apply domain judgement
- Technical and data teams: need deeper conceptual understanding and the ability to assess model limitations
- Frontline and operational staff: need sufficient literacy to use tools safely and flag concerns appropriately
The Alan Turing Institute's AI Skills for Business Competency Framework, informed by sixteen existing standards, provides a rigorous starting point for this mapping exercise across four learner personas and five dimensions of competency.
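One way to make the mapping concrete is to encode it as a simple requirements table that later assessment data can be compared against. The sketch below is a minimal Python illustration: the persona keys, the four capability dimensions, and the 0-4 required levels are all assumptions chosen for the example, not recommended benchmarks.

```python
# Minimal sketch: role-based capability requirements as a lookup table.
# Persona keys mirror the four groups above; the 0-4 levels are
# placeholder assumptions, not recommended benchmarks.

REQUIRED_CAPABILITY = {
    "decision_maker":       {"conceptual": 2, "interaction": 2, "evaluation": 3, "ethics": 4},
    "knowledge_worker":     {"conceptual": 2, "interaction": 4, "evaluation": 4, "ethics": 3},
    "technical_team":       {"conceptual": 4, "interaction": 3, "evaluation": 4, "ethics": 3},
    "frontline_operations": {"conceptual": 1, "interaction": 2, "evaluation": 2, "ethics": 3},
}

def required_for(persona: str) -> dict[str, int]:
    """Return the capability profile a given role is expected to meet."""
    return REQUIRED_CAPABILITY[persona]
```

Whatever form the table takes, the point is that the requirement is fixed per role before any measurement happens, so the gap analysis in Step 4 has something stable to compare against.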
NOTE ON DATA LITERACY
AI literacy without data literacy is a surface-level intervention. If your people cannot interrogate where data came from, how it was collected, or what biases it may contain, they cannot meaningfully evaluate what AI does with it. Any assessment should include a data literacy baseline, particularly for roles where AI-generated insight will inform decisions.
Step 3: Choose your assessment method
There is no single right method for assessing AI literacy. The approach should match the scale, maturity and purpose of the assessment. Three methods are most commonly used, and they are not mutually exclusive.
Structured self-assessment
A role-differentiated questionnaire that asks people to rate their own capability across the four dimensions. Fast to deploy, good for establishing a baseline picture across large populations, and useful for identifying where perceived confidence outstrips actual capability. Limitations: self-reported data tends to overstate ability, particularly in organisations where AI confidence is culturally rewarded. Pair with at least one objective measure.
Scenario-based evaluation
Participants are given realistic AI-generated outputs and asked to evaluate them: identifying errors, spotting hallucinations, assessing bias, or making a decision based on the output. This measures what people can actually do rather than what they think they can do. More resource-intensive to design, but produces the most actionable data, particularly for roles where critical evaluation is a core requirement.
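To illustrate, a scenario item can be built around an AI-generated artefact seeded with known defects, scored on how many a participant catches. The sketch below is hypothetical: the defect taxonomy and the scoring rule are assumptions for the example, not a prescribed design.

```python
# Hypothetical scenario item: participants review an AI-generated summary
# seeded with known defects; the score is the fraction they identify.

SCENARIO = {
    "task": "Review this AI-generated policy summary and flag any problems.",
    "planted_defects": {
        "fabricated_statistic",   # hallucinated figure absent from the source
        "outdated_regulation",    # cites a superseded rule
        "unsupported_claim",      # conclusion the source material does not support
    },
}

def score_response(flagged: set[str]) -> float:
    """Fraction of planted defects the participant correctly identified."""
    caught = flagged & SCENARIO["planted_defects"]
    return len(caught) / len(SCENARIO["planted_defects"])
```

Because the defects are planted in advance, the score is an objective measure of critical evaluation rather than a self-report, which is what makes this method a natural pair for the questionnaire above.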
Manager and peer observation
Structured observation of how people use AI in practice, against defined criteria. Useful for capturing behavioural literacy that neither self-assessment nor scenario testing can fully reach. Requires trained observers and a clear rubric to produce consistent data. Most effective as a supplementary method rather than a primary one.
Step 4: Analyse the gap, not just the score
Raw capability scores are less useful than gap analysis. The question is not simply how literate your workforce is, but where the distance between current capability and required capability is largest, and what the consequences of that gap are for the organisation.
Structure your analysis around three questions (a worked sketch follows the list):
- Where is the gap widest relative to the capability required for that role? Prioritise these areas first.
- Where is the gap creating the most risk? A moderate literacy gap in a role that regularly acts on AI-generated insight is more urgent than a larger gap in a role with limited AI exposure.
- Where does confidence significantly outstrip capability? This is often the most dangerous gap, and the hardest to surface without objective measurement.
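Continuing the earlier sketch, the three questions translate into three small computations: the per-capability gap against the role's requirement, a risk weighting by AI exposure, and an overconfidence flag where self-rating exceeds the objective score. The field names, the 0-4 scale, and the exposure-weighting formula are assumptions for illustration, not a fixed methodology.

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    role: str
    required: dict[str, int]    # required level per capability (0-4)
    observed: dict[str, int]    # objective score, e.g. from scenario evaluation
    self_rated: dict[str, int]  # self-assessed level per capability
    ai_exposure: float          # 0.0-1.0: how often the role acts on AI output

def capability_gaps(rec: AssessmentRecord) -> dict[str, int]:
    """Question 1: distance between required and demonstrated capability."""
    return {c: max(0, rec.required[c] - rec.observed[c]) for c in rec.required}

def risk_weighted_gap(rec: AssessmentRecord) -> float:
    """Question 2: a moderate gap in a high-exposure role can outrank
    a larger gap in a role with little AI exposure."""
    return sum(capability_gaps(rec).values()) * rec.ai_exposure

def overconfidence(rec: AssessmentRecord) -> dict[str, int]:
    """Question 3: capabilities where self-rating exceeds measured ability."""
    return {c: rec.self_rated[c] - rec.observed[c]
            for c in rec.observed if rec.self_rated[c] > rec.observed[c]}
```

Ranking teams by the risk-weighted gap, rather than by raw score, is what surfaces the priorities the three questions describe.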
This analysis is what allows you to design a programme that is differentiated by role and prioritised by risk, rather than one that covers everyone with the same content and calls it done.
Step 5: Treat the assessment as a baseline, not a conclusion
An AI literacy assessment is the start of a programme, not the end of a project. The value of the data degrades quickly as tools evolve, roles change, and new regulatory requirements come into force. Build in a reassessment cadence from the outset, and design your programme so that progress can be measured against the original baseline.
The organisations seeing the strongest ROI from AI investment are not necessarily the ones that spent the most on tools. They are the ones that built a clear picture of where capability sat, designed learning around that picture, and treated literacy as an ongoing organisational capability rather than a one-time intervention. DataCamp's 2025 research found that organisations with mature AI literacy programmes are twice as likely to report significant positive ROI from their AI investments as those without.
The gap between recognising the problem and diagnosing it accurately is where most AI strategies stall. The assessment is not overhead. It is the foundation.