Why your AI training isn't working, and what genuine literacy actually looks like

Jessica Bryan · 3 min read · March 12, 2026

Key Takeaways

  • Most AI training is an event, not a capability. The gap between the two is where ROI dies.
  • 66% of people use AI outputs without checking them; 56% make mistakes as a result.
  • Structured AI training reaches fewer than 10% of workers in nearly half of organisations.
  • True AI literacy requires four skills: conceptual understanding, effective interaction, critical evaluation, and ethical/regulatory awareness. Most programmes only teach the second.
  • Data literacy must come first. Without it, AI literacy builds confidence without foundation.
  • The EU AI Act (Article 4) makes workforce AI literacy a legal compliance requirement as of February 2025 in Europe.
  • Organisations with mature AI literacy programmes are twice as likely to report significant ROI from AI investments.

    99% of organisations report having some approach to developing AI skills. So why is insufficient worker capability still the single biggest barrier to AI integration, identified by more than 3,200 senior leaders across 24 countries?

    The answer is that most of what passes for AI training is not literacy. And the distinction matters more than most organisations currently recognise.

    Training is an event. Literacy is a capability. Most organisations are investing in the former and measuring for the latter.

    What the training gap actually looks like

    The scale of under-investment is striking when you look past the headline figures. Most AI skills development relies on mentorship and self-directed learning, approaches that are inconsistent by design and impossible to measure at scale. Where structured training does exist, almost half of executives report it reaches fewer than 10% of the workforce.

    The result is a small pocket of AI-capable specialists sitting inside an organisation that largely continues as before. Decisions get made the same way. Data goes unquestioned. The tools are deployed, but the judgement to use them well has not been developed. And because it looks like adoption from the outside, the gap is rarely examined closely: examining it would mean holding someone accountable or re-evaluating budgets, and nobody wants that.

    On the employee side, the picture is equally concerning. A University of Melbourne and KPMG study of 48,000 people across 47 countries found that 66% rely on AI output without evaluating its accuracy, and 56% make mistakes in their work as a result. Only 34% of FTSE 100 annual reports mention AI training at all.

    The investment in AI has scaled. The investment in the people using it has not.

    What genuine AI literacy actually requires

    AI literacy is not a prompt engineering workshop. It is not an introductory module bolted onto an existing L&D programme. At its most rigorous, it comprises four distinct capabilities.

    The first is conceptual understanding: knowing how AI systems actually work, what they are optimised for, and critically, where they fail. A model that produces confident-sounding outputs is not the same as a model that produces accurate ones, and people who do not understand that distinction cannot catch the failures.

    The second is interaction capability, engaging with tools effectively enough to get reliable, useful outputs. This goes beyond knowing which buttons to press. It means understanding how to frame prompts, how to iterate, and how to recognise when a tool is the wrong instrument for the task.

    The third is critical evaluation, the ability to assess outputs for accuracy, bias and hallucination before acting on them. This is where most current training provision fails. Generating AI outputs is taught. Interrogating them is not.

    The fourth is ethical and regulatory awareness, understanding the guardrails that govern AI use within an organisation and within the law. As of February 2025, Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and others operating AI on their behalf. This is a compliance obligation. For most organisations, current training provision falls well short of what it demands.

    Generating AI outputs is taught. Interrogating them is not. That is where the risk lives.

    The sequencing argument most strategies are missing

    There is one further element that rarely appears in AI training programmes, despite being foundational to everything else: data literacy.

    AI tools are only as reliable as the data they process, and only as useful as the judgement applied to what they produce. If your people cannot question where data came from, how it was collected, or what biases it may contain, they cannot critically evaluate what the AI does with it.

    Data literacy is what allows people to interrogate the input. AI literacy is what allows them to interrogate the output. The two are not interchangeable, and they are not equally well developed in most organisations. An AI strategy that prioritises the latter without securing the former is not building capability. It is building confidence without foundation, and that is a more dangerous outcome than no training at all.

    What a literacy programme actually looks like

    Effective AI literacy programmes do not start with content. They start with an accurate picture of where capability actually sits: by role, by function, by seniority. One-size-fits-all approaches do not work because capability requirements are not uniform. What a risk or compliance professional needs from AI literacy differs materially from what a frontline operations team needs.

    Rigorous frameworks exist to support this work. The Alan Turing Institute's AI Skills for Business Competency Framework maps competencies across four learner personas and five dimensions. The Digital Education Council's AI Literacy Framework integrates critical thinking, ethics and domain expertise alongside technical understanding. Neither is a prescription, but both provide a more honest starting point than a standard training catalogue.

    The commercial case is equally clear. DataCamp's 2025 research found that organisations with mature AI literacy programmes are twice as likely to report significant positive ROI from their AI investments as those without. IBM's 2025 survey found that 66% of organisations that had cultivated AI literacy from board level to entry level were already reporting significant productivity gains.

    The question is not whether your organisation has done some AI training. Almost everyone has. The question is whether what you have built or assigned constitutes genuine literacy, the kind that changes how people think and decide, not just which tools they know how to open.

    Unlock the power of your data & AI

    Speak with us to learn how you can embed org-wide data & AI literacy today.