Let's be honest. Most data and AI programmes have a value problem, and they had it from the very beginning, because nobody designed them with value in mind.
That's a pretty uncomfortable thing to say out loud. But it's true. And it applies just as much to data and AI literacy programmes as it does to data projects in general. Whatever you call it, whether it's fluency, literacy, confidence or culture, the challenge is the same. When someone asks "so, is this actually working?", most teams don't have a great answer.
Let's unpack this together. Not just identifying the problem, but thinking through how to fix it.
Why proving value is so hard
Here's the thing about data and AI literacy: the return doesn't show up immediately. You train people, you run workshops, you roll out programmes, and then someone in finance asks what the ROI is, and you're stuck.
Part of the problem is that organisations try to squeeze new measures into old frameworks. They want the same kind of direct, one-for-one attribution they'd get from a sales campaign or a product launch. But it doesn't work like that.
At Davos this year, there was an actual conversation about what the straight ROI on an AI investment looks like. The answer? One to two years. Just because you can get an instant answer from a prompt doesn't mean you're going to see instant value from the tool. It's like building a house: the hammer doesn't build it for you. It takes time, skill, and application.
So instead of trying to force the new into the old measurement frameworks, maybe we need to invent new ones.
ROI and ROE: Two sides of the same coin
Everyone knows ROI. Return on investment: how much did we spend, and what did we get back? Simple in theory, tricky in practice, especially with something like data literacy where the benefits are diffuse and compound over time.
But there's another metric that deserves just as much attention: ROE, Return on Employee.
ROE asks a different question: have we actually improved this person's working life? Are they more capable, more confident, more likely to stay? Are they doing their job better because of what we've done?
A good example of this in practice comes from a large UK insurer, NFU Mutual, where the conversation shifted from "here's your problem, go solve it" to "here's what we're giving you back in terms of people." That reframe matters. Because if you can show that a person is more engaged, more productive, and less likely to leave — that has financial value too, even if it doesn't show up neatly on a P&L.
The goal isn't to pick ROI or ROE. It's to build both. The hard quantification and the human story, together.
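To make the pairing concrete, here is a minimal sketch of what computing both might look like. The numbers, and especially the ROE weighting, are illustrative assumptions rather than a standard formula:

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Classic return on investment: net gain relative to cost."""
    return (total_benefit - total_cost) / total_cost


def roe_index(capability_lift: float, engagement_lift: float,
              retention_lift: float, weights=(0.4, 0.3, 0.3)) -> float:
    """Illustrative Return-on-Employee index: a weighted blend of
    before/after lifts, each expressed as a fraction (0.15 = +15%).
    The weights are an assumption; set them to what your business values."""
    w_cap, w_eng, w_ret = weights
    return w_cap * capability_lift + w_eng * engagement_lift + w_ret * retention_lift


# Illustrative numbers only.
print(f"ROI: {roi(total_benefit=450_000, total_cost=300_000):.0%}")  # 50%
print(f"ROE index: {roe_index(0.20, 0.12, 0.08):+.2f}")              # +0.14
```

The exact formula matters less than the habit it forces: baseline the human measures at the start, re-measure them later, and report them alongside the financial return.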
Five questions to answer before you start

Before you even start a programme, there are five things you should be able to say yes to. Most organisations can't. Here they are:
1. Have you defined success criteria with the business, not just the data team?
This is not about getting sign-off from your Chief Data Officer. It means sitting down with senior business leaders and agreeing: if this moves here, we call it a success. If that doesn't happen, you're measuring things nobody outside your team actually cares about.
2. Do you have a baseline?
You cannot prove progress without one. It's data literacy 101, and it's astonishing how often it gets skipped. (A minimal sketch of the mechanic follows this checklist.)
3. Do your measures link to enterprise KPIs?
If your data programme's success metrics don't roll up into the things the organisation already cares about, you're measuring in a vacuum. You'll produce results that mean something to you and nothing to anyone with budget authority.
4. Is there clear ownership of outcomes?
Right now there's a bit of a scrap happening in organisations between CTOs, CIOs, and Chief Data Officers about who owns AI outcomes. It doesn't matter who wins; someone needs to own it, on both the data/AI side and the business side.
5. Are finance involved?
If you haven't looped in the people who hold the P&L, you haven't closed the loop. They're the ones who can tell you whether the needle moving in your spreadsheet actually translates to money saved or money made.
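On the baseline point in item 2, here is a minimal sketch of the mechanic. The metric names and values are placeholders; the only requirement is that you capture the same measures before the programme starts and again afterwards:

```python
# Placeholder metrics: substitute whatever your success criteria define.
baseline = {"data_trust_score": 5.2, "decision_cycle_days": 21, "self_serve_rate": 0.18}
after_12m = {"data_trust_score": 6.8, "decision_cycle_days": 15, "self_serve_rate": 0.34}

for metric, before in baseline.items():
    now = after_12m[metric]
    change = (now - before) / before
    print(f"{metric}: {before} -> {now} ({change:+.0%})")
```

Without the first dictionary, the second is just a number. With it, every measure becomes a progress claim you can defend in front of finance.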
Activity is not value, full stop
One of the easiest traps to fall into — especially in learning and development — is using activity metrics as proxies for outcomes. Training hours completed. Number of people trained. Modules finished. Dashboards built.
These things are not value. They're not even particularly good signals of value.
The example that sticks: if you built 50 dashboards, someone should probably ask you why you needed 50, because you probably only needed 3.
What actually matters is behaviour change. Did the person do something differently after the training? Did a decision get made faster? Did someone catch an error they previously would have missed? Did a team stop going back to the data team for the same question every week?
That's the stuff that moves the needle. The rest is just keeping yourself busy.
Goodhart's Law: The KPI trap that breaks everything
There's a principle called Goodhart's Law that goes something like this: when a measure becomes a target, it ceases to be a good measure.
The most dramatic example is a bank that was fined somewhere in the region of $1.2 billion after setting a target of new accounts opened per day for branch employees. The pressure was high, so people started opening fraudulent accounts to hit the target. The measure had been gamed; the target had completely disconnected from the underlying goal.
That's an extreme case, obviously. But the pattern shows up everywhere. The housing crash of 2007-2008 is another version of the same story. And in data and AI programmes, you see it constantly. People optimise for the metric instead of the outcome.
The question to ask about any KPI you're thinking of setting: can this be gamed? Can someone make it look good while the actual thing you care about gets worse? If the answer is yes, don't make it a target. Track it, sure. But set your targets on the leading indicators, the things you can actually control that drive toward the outcome, not the lagging ones.
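One lightweight way to make that rule explicit, sketched below with hypothetical KPI names. This isn't a standard tool, just the discipline encoded so it can't be quietly skipped:

```python
from dataclasses import dataclass


@dataclass
class KPI:
    name: str
    leading: bool   # a controllable driver, not a downstream outcome
    gameable: bool  # can it look good while the real goal gets worse?


def can_be_target(kpi: KPI) -> bool:
    """Goodhart guard: track everything, target only leading,
    hard-to-game measures."""
    return kpi.leading and not kpi.gameable


kpis = [
    KPI("new accounts opened per day", leading=False, gameable=True),
    KPI("relationship-building contacts made", leading=True, gameable=False),
    KPI("net revenue", leading=False, gameable=False),
]
for k in kpis:
    print(f"{k.name}: {'target' if can_be_target(k) else 'track only'}")
```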
Behaviour change is the whole game
If there's one thing to take away from all of this, it's that behaviour and mindset change is what underpins everything else. ROI, ROE, KPIs, all of it is downstream of whether people actually do things differently.
The formula is roughly: literacy + fluency + embedded practice, backed by measurement and leadership reinforcement. And that last part is where a lot of programmes fall down quietly.
Leaders say they're on board. They tell you they support the programme. But then there's no budget. There's no one putting their hand up to drive it into their team. There's no actual behaviour from the top that models what they're asking others to do. That's not leadership reinforcement. That's lip service. And you need to be able to call that out, even if it's uncomfortable.
One more analogy that lands well here: if the amount of investment going into data and AI isn't equal to or greater than the desired output you want from it, the maths doesn't work. That investment isn't just financial. It's time, mindset shifts, change management, genuine commitment. You can't just buy the hammer and expect the house to build itself.
"Pilot Purgatory" and why Proof of Concept is the wrong frame
95% of data experiments never make it past the pilot phase. That's a brutal stat, and it's not a coincidence.
Organisations get stuck in what might be called "pilot purgatory", constantly building pilots, feeling productive, never actually delivering value. Part of the reason is that we, as humans, seek comfort. Building pilots feels like progress. It looks like activity. But if it never becomes something real, it's just expensive practice.
The language shift that matters: stop calling it proof of concept. Start calling it proof of value. Concepts don't drive decisions. Value does.
If you can build three to five genuine proof-of-value examples in your own team, things that actually worked, with measurable outcomes, you have something to take upstairs. You have a story. You have evidence that this isn't just talk. And once people see it, they want to know how you did it.
Three layers of KPIs to actually track
There's a useful way to think about measurement in tiers. Not everything goes on the same dashboard, and not everything matters in the same timeframe.
Foundational KPIs are about where you are today. Data quality scores. Data trust indices. Percentage of your data estate that's accessible via a single source of truth. These are the unsexy foundation-building metrics that seem abstract until you realise that low data trust means constant rechecking of work, which is a hidden cost centre eating time across thousands of people.
Behaviour KPIs are where things get interesting. What percentage of decisions are actually using data? What's your velocity to insight, how quickly compared to before are you getting to insights that go beyond just describing what happened? Are data team support tickets going down as more people self-serve? (And importantly, are you tracking the difference between new people raising new issues versus the same people coming back with the same problem, because those tell very different stories.)
Business KPIs are the ones everyone already knows: revenue uplift, cost avoided, risk reduced. These are the ones that get stakeholders to pay attention. The critical thing is that your programme's measures need to roll up into these. If they don't, nobody senior will care about your results, no matter how impressive they look.
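Here is one way the three tiers might hang together, with every lower-tier measure naming the business KPI it rolls up into. The names, linkages and the simple velocity-to-insight ratio are our illustrative choices, not a standard model:

```python
# Illustrative three-tier KPI register: the point is the roll-up, so no
# foundational or behaviour measure floats free of a business KPI.
kpi_register = {
    "foundational": [
        {"name": "data_quality_score", "rolls_up_to": "cost_avoided"},
        {"name": "pct_single_source_of_truth", "rolls_up_to": "cost_avoided"},
    ],
    "behaviour": [
        {"name": "pct_decisions_using_data", "rolls_up_to": "revenue_uplift"},
        {"name": "repeat_ticket_rate", "rolls_up_to": "cost_avoided"},
    ],
    "business": ["revenue_uplift", "cost_avoided", "risk_reduced"],
}


def velocity_to_insight(days_before: float, days_now: float) -> float:
    """One illustrative ratio: >1 means you reach beyond-descriptive
    insight faster than your historical baseline."""
    return days_before / days_now


print(velocity_to_insight(days_before=20, days_now=8))  # 2.5x faster
```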
Helen Blaikie, CDAO at Aston University, flagged an interesting observation in the chat. One of her favourite signals that the data and AI confidence programme was working? Senior leaders started bringing their own story to executive meetings. They came in with data, with a narrative, with a perspective, instead of waiting to be told what the numbers meant.
That's not something you can hard measure. But in terms of cultural change, it's exactly what you're aiming for.
And there's a counterintuitive pattern to be aware of: when a literacy programme is working well, the volume of data-related questions often goes up before it comes down. More people caring about data means more people noticing problems, asking questions, flagging issues. Initially that looks like more work. Over time, the same people stop coming back with the same questions because they've built the capability to handle it themselves. The requests get fewer but more strategic.
That's what a good programme looks like in the long run.
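That new-versus-repeat distinction is straightforward to track if your help desk log records who asked and roughly what about. A minimal sketch, assuming an invented log format:

```python
from collections import Counter

# Hypothetical ticket log of (requester, topic) pairs; in practice this
# would come from your help desk export.
tickets = [("ana", "which number is right"), ("ben", "access request"),
           ("ana", "which number is right"), ("cal", "metric definition")]


def repeat_rate(log) -> float:
    """Share of tickets where the same person asks about the same topic
    again -- the signal that capability is not building."""
    counts = Counter(log)
    repeats = sum(n - 1 for n in counts.values() if n > 1)
    return repeats / len(log)


print(f"repeat rate: {repeat_rate(tickets):.0%}")  # 25%
```

Rising total volume with a falling repeat rate is the healthy pattern; rising volume with a flat repeat rate means people care more but aren't yet becoming more capable.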
Where to start
If you're sitting with a programme that doesn't have the measurement infrastructure it needs, the path forward isn't complicated, but it does require honesty about where you are.
Start by asking which of those five checklist items you actually have in place. Define success with the business, not just internally. Set a baseline so you have something to measure against. Map your measures to the KPIs that already matter to your organisation.
And use AI to help you build this out, not as a shortcut, but as a genuine thinking partner. Different organisations are at different stages. There's no one-size-fits-all approach to KPI mapping, and prompting an AI tool to help you think through your specific context, your specific team, your specific 12-month objectives, that's exactly what it's there for.
Greg Freeman:
Hello everyone, welcome to this webinar with myself and Jordan Morrow, who is also on the call. We're going to give a few people time to pile through the doors, back off their last meeting, and then we'll get underway to discuss what is one of my favourite topics, which is how do we actually make this thing called data and AI fluency truly valuable rather than it just being a thing that people do as a tick box training exercise. So, we'll get into it in a minute, and I am looking forward to it. It's a great topic.
We've got my colleague Sarah in the background who is able to pick up any questions that come in via the chat. So if you've got questions to ask, feel free to do that. Jordan proved last time we did this, a month ago, that he's pretty expert at picking questions up and answering them on the fly as well as doing the talking, which isn't something I've quite managed yet.
My name is Greg Freeman. I am the CEO and founder of Data Literacy Academy, a business that helps large enterprise organisations change the way their people, and by people we mean the non-data professionals of the world, think about data and AI, work with data and AI, and build their skills in data and AI, which are all different parts of the same thing. How do we educate them, how do we give them the mindset, the behaviours, and the key skills to be effective in this space?
One of the biggest challenges we get, because people have to pay us for our services, is how do we justify the value of this, how do we show that this is delivering a return to the business, and that's what we're going to talk to today. And I am very, very happy to be joined by Jordan Morrow, who is affectionately known as the godfather of data and AI literacy, and I will let Jordan introduce himself now.
Jordan Morrow:
Yeah, thank you very much. It's good to see everybody. I love seeing people in the chat.
I'm the CEO and founder of Bhodi Data and the Senior Vice President of Data and AI Transformation at Agile One. And I do think this is one of the most important topics within data and AI fluency for senior leaders in particular, because they want to figure out how to measure this, and without good measurement a programme can really struggle. So hopefully throughout today you'll share your questions, thoughts, everything with us, but we'll also be able to share and give you some ideas of what you can do here.
Greg Freeman:
Absolutely.
OK, so, headline statement. Most data and AI programmes have a value problem because they weren't designed with it in mind, and I think it's worth saying that when we make that statement, we are talking about arguably data and AI programmes generally, but especially data and AI fluency or literacy programmes. We are indifferent about your choice of word, you can call it data and AI confidence, you can call it literacy, you can call it fluency, you can call it culture, we don't mind what type of programme you're running. It's the same premise for the problem. So what we mean by that is that a lot of businesses really struggle to show that when they've done a period of data and AI literacy or a period of data and AI training that it has been valuable, and typically it's because firstly they don't know how to manage that and how to think about that, but secondly because they didn't know how to think about it, the programme wasn't designed with it as a first principle. So what we're going to talk about today is how do we design data and AI literacy and data and AI programmes more generally to have, ironically, measurement as a first principle, which is of course the irony that we all struggle with in our world because it is a data and AI world and it should be measured.
And the things we're going to talk about in some level of detail are the concept of ROI, return on investment, added together with ROE, the return on employee. That combination is what's going to give you your measurement of success. Everybody typically knows what ROI is: the return on investment, a measure of financial gain or loss generated by an investment relative to its cost. How much have we spent on this thing? Has it made us money, or has it cost us money because we've not made enough back? Fairly simple concept, although when you get into the nuance of it, measuring it is fairly challenging at times.
And then this one is really important. I think it's evolving very quickly and I've got a good work friend at a business in the United Kingdom called NFU Mutual, and they're a big insurer. My pal Farron talks a lot about this concept of offering return on employee to his business, which is, you've given us a requirement that you don't feel the people internally are up to speed with data and AI in the way you want them to be, and therefore what we are giving you is a solution to that problem, and what that solution then gives us back is a people-focused measure of how well that investment is improving capability, improving engagement and maybe even hard productivity. And of course overall business performance, but it's really about, does the individual do their job better, are they happier in their role, are they more likely to stay and therefore not churn because we've done this work. So as much as we've got a hard ROI calculation, we also want to be building out this ROE calculation, so that we can more ably say, yes, there's hard quantification here, but there are really important things we've got to consider about how it's improved our people's lives.
Jordan, I was just wondering, have you got any thoughts on where you've seen both of these work effectively as a measure?
Jordan Morrow:
Yeah, and I really love the concept of ROE, and the reason is attribution. With ROI, when it comes to data and AI, not just fluency and literacy but actual value, how much can you attribute to the literacy and fluency programme? How much value can you attribute to the data? Let's say you work on a marketing project for 6 months: how much of the result was the data work? And this is why I think data and AI programmes kind of get pushed aside at times, whether it's literacy or just work in general: people want that very instantaneous or very easily visible ROI. And data and AI literacy at the senior level would teach them that ROI here is not as direct as in other cases.
And so I do think in the age of data and AI, the ROE, velocity to insight, how much capability are we improving, how much engagement, productivity, all of those in a way should transfer to ROI. But we really need to be looking at the value it provides the employee, and then what value does that employee bring to the company.
And so I hope that senior leaders, and anyone who's trying to drive this, really look to implement programmes, whether it's data and AI projects all the way up to data and AI literacy, with the mindset that says it's not a direct one-for-one attribution. So how can we create new measurement? And I think in the age of AI we should be creating new measures anyway, not trying to fit the new in with the old. The new in with the old doesn't work. We need to fit the new with the new. And that means we get to invent and be creative.
Greg Freeman:
Love that, and I think you're absolutely right that the measures are evolving rapidly. We're going to talk a little bit later in the session about how we use typical KPIs to measure data and AI programmes or even outside of literacy and fluency. And I think if we don't focus on the fact that the measures of the world are shifting, we'll be behind the times, not only in how we adopt AI but how we prove that it's actually working because there is an obsession on what we're doing differently now, when sometimes actually we're building towards being able to do something differently and that needs measuring in the interim.
Jordan Morrow:
I'll add real quick if I could here, Greg. At Davos this year, it was talked about what the straight ROI on an AI investment is. It's 1 to 2 years. And if we're discussing at some of the biggest forums across the globe that it's 1 to 2 years, then just because you can get instantaneous answers with a prompt doesn't mean you're going to get instantaneous value from a tool. It takes time and application. It's like building a house, right? When you build a house, the hammer is the tool and it takes time to build. That's the same here. AI is a tool, a partner, it takes time to build with, and we need to get out of this instantaneous value framework into a much more, I would say, intelligent way of looking at these things.
Greg Freeman:
100%.
So again, all of this applies really to both data and AI programmes and data and AI literacy and fluency programmes. When you're setting out on your journey, a simple checklist that you can follow is how many of these boxes does your programme tick right now. So does it have defined success criteria agreed upfront? And that is not just success criteria agreed with you as a data team. That is, have you pulled in senior business leaders to agree that if this success criteria is met, they will deem this section of the programme or this programme overall as a success.
Have we decided that those success criteria are things that have a baseline measure? Because without a baseline measure it's very difficult to prove progress, and I know some of this is data and data literacy 101 in some ways, but it is amazing the amount of data programmes and data projects and data and AI projects that start without these things as core principles. Do those baseline measures you've set and the success criteria you've set link to the enterprise KPIs? Do they link to the measures that sit above any data and AI programme or data and AI literacy programme, and above that you've got what the organisation cares about. Is there a clear linkage to those? If there isn't, there's no point measuring them because nobody's going to care outside of your own little world.
Do we have it defined, and I think this is a really interesting one that Jordan and I talked about last time, ownership of outcomes? Right now, CTOs and CIOs are quite keen to own the outcomes of AI, whereas I've seen from the guest list that there are a lot of people who are traditional Chief Data Officers and have been a Chief Data Officer for longer than AI has been mainstream. So does the CTO or CIO still want to own data and its outcomes? Not always. At the moment we're seeing quite a desire to own AI, because people think it's hip and trendy now versus the types of outcomes that data has either produced or failed to produce previously. But we've got to have ownership of outcomes: who, in both the data and AI office and on the business side, owns whether this programme is going to work or not, and the measurement of that.
And then bringing in that really important party. I think I've seen that Helen is on the call from Aston, who is a fantastic Chief Data Officer, has been one of our partners for the last 3 years, and comes from a finance background by trade, so understands that link directly to commerciality. And if we're not getting those finance people involved to say, does this actually make a difference where it matters, arguably the P&L, especially from an ROI perspective, then have we done what we set out to achieve? So that link between the business's KPIs, the business's financial impact, and who owns the success of that is all super relevant, and if you've not got that in place, you need to think about resetting.
And then there's the illusion of progress, so hopefully, and I would love it if there are some learning people in the room right now, ultimately, what we are very clear on through data and AI fluency is activity alone does not equal value. However, the more activity we deliver, the greater opportunity we have got to get value. So it's not to say that training hours or the amount of people you've trained isn't important, it's definitely a guiding metric.
But what we actually want to care about is the measured value we're receiving, and that is something like: we have reduced decision cycle time by 30%. That has financial impact, because if you can pull a decision-making process forward in the year, whatever decision was made to make money or save money off the back of it hits the P&L two or three months earlier, and that is real impact. So it has to be about escaping that premise that just because we've trained some people it was valuable. What we can actually measure those people doing differently that delivers value is what we care about.
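As a rough worked version of that pull-forward arithmetic, with illustrative numbers only:

```python
# Illustrative: a decision worth £1.2m a year lands two months earlier
# because the decision cycle has been shortened.
annual_benefit = 1_200_000
months_earlier = 2
extra_benefit = annual_benefit * months_earlier / 12
print(f"Benefit banked earlier this year: £{extra_benefit:,.0f}")  # £200,000
```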
Jordan Morrow:
Yeah, and I'm going to add a couple of things here. One person on the call is Carissa, who works with me, and this is a conversation she and I have had directly: you could train hour after hour after hour, but if behaviour change does not occur, or a better understanding of how to apply this (because it does have to tie to business strategy), what have you gained? Just because you become more efficient, what if you're just becoming more efficient at something that doesn't matter? You're just repeating things more efficiently that don't make any difference.
There is a premise around this, and a huge shout out, I hope everybody follows someone who's in the participants: her name is Valerie Logan. She's the godmother of data literacy. She and I were the first two who kind of got this whole thing together, so I just wanted to give a huge shout out to her. Her company approaches this a little differently than just training people; she trains people on how to lead programmes. So huge shout out to Valerie. I love seeing you on there. Hi Valerie, I see you in the chat. But yes, it is all about behaviour change. If behaviour change occurs, you'll start to see the better outcomes.
But if people just take a lot of trainings or have a lot of usage in the tools, so what? And I love that it literally says "so what" right there. I don't care, right? Someone could say, we built 50 dashboards, I'm probably going to look at you and say, you only probably needed 3, so I don't want to hear it. I want to see behaviour change that occurs.
Greg Freeman:
100%.
And then we can talk about the KPI problem, which isn't that most organisations are lacking KPIs because if you work in an organisation, you probably feel like you've got too many. They have too many and most are irrelevant. So to Jordan's point there about dashboards, the same goes for KPIs, and it goes for KPIs around data and AI projects and programmes specifically too, which is you could put 40 measures in place, but it's probably going to be less than 5 that actually evidence the value and the change. So there's no point having this epically long list of KPIs if actually all of them are just activity metrics and they don't really measure the behavioural change that Jordan mentioned.
Worst case is that you've got bad data and bad KPIs because that's going to result in bad capital allocation. But actually, poor KPI design leads to really hidden commercial problems and it allows things to be swept under the carpet because you're measuring and viewing the wrong things. So if a programme sets out, and I'm going to let Jordan talk to this in more detail because he actually brought this law that we're going to talk about to the table in my mind, if you measure the wrong things, you are fundamentally covering up the fact that the right things should be measured, and I think that's a real big problem. So Jordan, do you want to bring Goodhart's law to life for us, because I think this is something you've observed quite a lot with both data and AI but also literacy, right?
Jordan Morrow:
Yeah, and not just those, just businesses in general. In fact, one time I was leading a workshop and I had a person in the audience who really wanted to argue with me when I said that setting net sales as a target wasn't a good idea. And he was like, but you have to measure it. I never said you don't measure it. It's when you make something a target that it ceases to be a good measure. It's the whole adage around leading versus lagging indicators.
Net revenue is a lagging indicator. We actually have little control over it; salespeople do their part, but the end decision comes from your client or a potential new client, whether it's an expansion or a new build. What is the leading indicator that got you there? Number of contacts and things like that. So I'm going to set number of contacts as the measurement and the target, rather than net revenue. Obviously, you still need to measure revenue or else companies are going to go under, cash flow, all that.
A big example of this, and I'll try and leave the bank's name out of it: a bank got fined for this, I don't know how much, maybe over a billion dollars. They had high-pressure sales targets for their branches, for their employees, to hit a certain number of new accounts, whether it was 8 new accounts a day or 9. So the measurement of new accounts was set as a target. What ended up happening, because of the high-pressure sales, is that people started fraudulently opening accounts, because the target had to be hit. The sales manager was saying, go hit the target, hit the target, hit the target. Fraudulent accounts: people were getting fees on accounts they never opened. That bank got hit with maybe a $1.2 billion fine because of the fraud.
That's the prime example, and I get it, it's an extreme one. But that's Goodhart's law in its purest form. We made the KPI, number of new accounts, the target. Under high-pressure sales it gets gamed, loses meaning, loses everything. Whereas if the measure was how many accounts you're opening a day, you don't set it as a target, but you measure it. And then the target is how many new contacts you made, how many relationship-building contacts. Then you can actually see the leading indicators that led to a new account being opened, and you can target those, not how many accounts you opened. And again, this was a huge issue.
So when you're looking at setting KPIs around data and AI programmes, remember Goodhart's law. Can this metric be gamified? Can it have things going on behind the scenes that make it appear to be a good one, but in the end it's not? And you can see these all over. And I love how you said it, Greg, it can cover up the reality of what's really happening, because everybody is so dead set on hitting that target, they're not understanding it's blowing up the business. Another one that could be a really drastic example is probably the housing financial crash of 07, 08, 09. How many loans are you setting? How many homes are you selling? Boom.
And Chris says, so what we're saying is that it's not about not measuring certain data points, it's what we choose to assign meaning to. That is the key. What is the leading thing that you can do that gets you to the lagging answer you're looking for?
Greg Freeman:
Absolutely, thank you for that.
And this is the point Jordan made really early on, which is that fundamentally, behaviour and mindset change is what underpins the ROI and the ROE potential of the programme. So it's really easy to try and track those input metrics, but it's much more difficult to track the outcome metrics. And the winning formula is literacy plus fluency plus embedded practice, which is then backed by measurement and leadership reinforcement. Some of that is tough to measure. How do you measure that leaders are or aren't reinforcing your work, right? Because a lot of people say to us, our leaders are on board, and it's like, what makes you say that? They've told us. OK, good. So you're still telling me you've got no budget, you're still telling me that you've not had anybody put their hand up and say they want to drive a product forward into their team and department, but you're also telling me your leadership are on board because they've told you.
Those things are difficult to measure, but we've got to be realistic about them. If you are a leader who is sat there with no budget for data and AI, or you're a leader who's sat there with a lot of budget for AI right now and all your data budget's been stripped, you've got to question whether you've actually got leadership reinforcement and whether people are actually just giving you lip service for their backing of it. Because there are a lot of ways that leaders can truly evidence and role model the right behaviours of reinforcement, but there's also a lot of ways that they can give lip service and never speak to you about it again.
So what we're saying here is that literacy and fluency are what's needed to create the capability; there's no doubt about that. You won't get the right mindset, behaviour and skills without good literacy and fluency. People are not just going to adopt products because you've built them and put them out there. I mentioned last time the idea of the Field of Dreams: build it and they will come. We've got to win their hearts and minds.
We've got to make this a change management approach, so behaviour only changes if we do not only structural change management but also behavioural change management. This is where behavioural science comes in, and we've actually been doing a lot of work on this ourselves internally at the moment. How do we use the positives, like incentivisation? How do we use enablement? There's no point training people on a tool if they can't then access the tool. How do we think about restriction, which is something that's quite frowned upon in large enterprise? Actually, if you switch something off, people will use the alternative, whereas if you leave everything running, they'll probably use the one they're most comfortable with.
On top of that, you've got this idea that actually the mindset to wake up every day and do something different, that's actually what's going to move the needle for the organisation. If 5000 people wake up tomorrow and want to do something differently, then you're onto a winner. Otherwise, you'll carry on operating the same way.
Jordan Morrow:
Yeah, I'm going to echo a couple of things you said. One of the first ones is, just because leadership talks doesn't mean the backing is there, and it doesn't always have to be financial. One of the things that I say around leadership is the amount of investment behind data and AI has to be equal to or greater than the desired output that you want from data and AI. Now, that isn't only financial investment. That means investment like you just talked about, Greg, the mindset shift, the behaviour shift. It also talks about the increase in desire of literacy, change management.
One of the first people I ever hired into this team here at Agile One was Sheba, a change management master, and Allie Hartman, because she knows how to project manage, she knows change. That's a forgotten space in data and AI. Every new dashboard, every new tool, every new programme is a change that has to be managed. Without it, good luck.
The other thing I just want to make sure we're really emphasising is it's not just leadership reinforcement, it's our reinforcement. The amount of output we're looking to achieve with data and AI must be met by our investment into it, and that could be 30 minutes a day of study, 15 minutes a day of reading and prompting, whatever the case may be. Again, it's not just buying the tool and thinking it's going to work. I can't just buy a hammer and expect the house to form. I have to learn how to do it. And the other thing I like to use with that analogy of the hammer to a house is it takes time.
In fact, I was asked last week, and I saw Jason Simeon here, I was at Nike's headquarters last week, and someone asked me a really interesting question. I've been asked this before, and that was, what if I learn a tool, and then that tool becomes obsolete? And I flipped it. I said, don't learn tools. Learn principles that will remain across the tools. And so you want to make sure that in everything you do, the investment through these behaviour and mindset changes is principle-based, not tool-based. And that will help you go through whatever iteration and variation of technology you're using.
Greg Freeman:
Love that. I think I'm just going to end by reinforcing yours there. First principles should be 80% of the teaching. Too many organisations think that a literacy programme is just giving people tech training, and therefore that person becomes absolutely correct in their concerns, which is, what if it changes and I don't know it anymore? The first principles lens is really the most important. And that goes even more so when it comes to AI because AI is either going to make the problem better, we hope, or it could make the problem a lot worse. And that problem is the mess that organisations find themselves in around data, data governance, decision making capability. It's either going to scale to a point where we've got thousands of local products that nobody really understands, that don't speak to each other, that don't tell the same story, or we're going to bring the whole organisation on the journey at the speed we need to, and we're going to scale the people capability at the same speed as the technology, which I will re-emphasise right now is extremely hard at the moment because the technology is moving and changing so fast.
Which brings us back to, if we want to do this properly, we have to go to those first principles of data, but also those first principles of what does critical thinking look like, what does effective decision making look like. Right now we are just layering artificial intelligence on poor quality data, no definitions, nobody agrees what everything's called, yet we're expecting a machine to give us a definition and roll it out, poor behaviours, poor mindset.
And again, to Jordan's point there, if your AI fluency programme is just about Copilot, you're absolutely missing the trick. Of course it's nice to teach people how to use Copilot or how to use Claude. But actually, as we'll see in a moment, the effective use of that comes down to how you think about your job role, how you think about the processes you work within, how you think about optimisation of workflows. If you're not thinking about any of that in the right way, all of which can be taught through first principles, it doesn't matter what tool you layer on top.
So what we end up with is loads of rework, loads of errors, loads of poor outcomes like the one Jordan mentioned with the big bank as a sales example. But we're seeing more and more of those, one of the big four consultancies may have been a victim of this in Australia recently if you give it a Google, where they're facing massive fines and reputational damage because they're not using AI in the right way, because they've not taught their people the first principles.
An AI ROI is just impossible without the behaviour change piece. If it's going to be used well, it will provide productivity gains. I believe that 100%, and Jordan might be able to give us some examples of really nice productivity gains in a moment that we see. And that is possible and that is measurable. But if it's used the wrong way, we end up with everything we've just described. It isn't going to fix your broken systems, it's only going to amplify them. It isn't going to fix your broken mindsets and your broken corporate culture and your broken data culture, it's only going to exemplify and amplify them.
So Jordan, can we talk about what we want to see from people beyond just being a little bit more productive? What types of things do you want to see from organisations that are really going to move the needle and give them something measurable to work from?
Jordan Morrow:
Yeah, for me, I hate the terms like disruption, even though transformation is in my title, it's not my favourite word because disruption has a negative connotation to it, and transformation evokes the idea of a massive programme, right? In reality, I like the word creation or create, which ties to creativity.
When it comes to behaviour change, and I love the phrase up there, AI doesn't fix broken systems, it amplifies them: number one, AI should be uncovering areas of inefficiency and ineffectiveness very quickly, if we are implementing beyond just a copilot and building agents, and using predictive modelling too. Don't forget predictive modelling, by the way; it's been around for ages and it's forgotten far too often.
But for us to really dive into behaviour change, I'm going to start to measure velocity of innovation, right? That and velocity of insight, those are two metrics to create within an organisation. Number one, how quickly are we able to create and innovate something new? One of the best things I think AI allows our behaviour to do is get rid of the old and implement something brand new, because it can amplify, yes, the broken, but it amplifies our ideation at the same time.
So velocity of innovation: how quickly are we tangibly innovating towards new solutions that make us a better company? And then number two is velocity to insight. And this is a metric I have created numerically; if anybody wants it, connect with me. Velocity of insight is how quickly, compared to historically, we arrive at insight that goes beyond the descriptive analytic. The descriptive analytic is the what. I want to know why things are happening, and we don't know for sure, but we get to test it. We get to be creative, we get to measure it. How quickly can I do that, to see how quickly better insight comes in for better decisions? Because then you can get to real, old-school tangible ROI metrics. Because we know this decision was made in this marketing campaign, we saw an increase of X. There is tangible ROI.
But velocity of innovation and velocity of insight, those are two things that AI has enabled us to do, eliminating the soul-sucking work that we've had to do forever and bringing about things that excite us. So those are ways that I would look at behaviour change, because people are excited, they're creating. And again, I don't like disruption, that's got a negative connotation. If we're creating, disruption's going to naturally occur.
Greg Freeman:
I fully agree, and I think I would quite like to see the quantification of that, if not everyone else in the room. I'm sure you'll have some people reach out to you about that.
OK, so where does value break down then? There's a brutal statistic out there at the moment that 95% of data experiments never make it past the pilot phase, and that's the same in some ways for both data and AI literacy and data and AI more generally. I think people are dipping their toe in the water and they're not doing enough and they're not investing enough to show real impact, partly because when they look at their programmes and that 5-step checklist I put on at the start of the conversation, maybe they're not doing some of that. But actually, especially around literacy and fluency, there's very much this attitude of, well, it might not deliver return in the financial year. Right now we're being asked to do this and this and this differently and we won't get to spend money there if it doesn't clearly prove the return this financial year.
Actually, a bit like AI more generally, data and AI literacy have to accrue over time and it's a compounding benefit. And I think what we see from our clients is that the longer you stay working in this space, and the more you do, the more people you get out to, the more that value compounds. If you just try and dip your toe in the water and do some L&D training on LinkedIn Learning, it's probably not going to deliver any return, I'll be completely honest with you. Doing anything by halves in business is probably a waste of time and money, and literacy and fluency is no different from that.
But what we actually want to get to is not just the capability piece, not just the behaviour piece: that decision-making outcome and impact comes from the reinforcement element of a good change model. You can Google any change model, but there's a really simple one, Lewin's model for change: we need to unfreeze the organisation or the group we're working with, we need to create the change and the impact with them, and then we need to refreeze it. A lot of the time that sustained impact over time comes from the refreeze element. Lewin's model dates back to the 1940s, so it's safe to say it's older than me.
But that refreezing takes a lot of energy and a lot of structural changes and policy changes, so things like having job descriptions that ultimately require somebody to be data and AI literate when they enter the organisation. That is a big conversation with your people team and therefore people don't want to have it. Actually though, it will mean that when we've achieved some level of impact, it's sustained and built upon and compounds because everyone we then hire is already more data literate than the people we've got in the business right now. So those things need to stack with time, they can't just be deliver some training and hope it works. So it's really important that we don't let the value break because we don't do the change management reinforcement element.
Jordan Morrow:
Real quick, can we go back real quick? One of the things I want to hit home on with pilot purgatory is 90 to 99% of people are not data and AI professionals by trade or title. That's not their background.
Pilot purgatory, I hate the term proof of concept, and the reason I hate that term is that concept doesn't necessarily drive value. So I want everybody to get that term out of your mind and start calling it proof of value. One reason we get stuck in pilot purgatory is we as humans seek comfort. Well, if we're continually building pilots, we might feel we're working. But that doesn't necessarily mean we're bringing value. And so people who are not data and AI professionals get stuck in that pilot purgatory because it's comfortable. It means I'm building, and people see me building, but it never goes anywhere.
And I like to use the analogy of going to the gym, right? In order to get results in the gym, you get uncomfortable. If you're walking and never breaking a sweat, not breaking down your muscles, things like that, you're not going to get the value you desire. And that's where I would say AI does at this point require some actual discomfort to hit us. And that is that human side, we seek comfort, right? We seek being comfortable, lazy, and I don't mean that in a bad way. Evolution has made it that that's what we want. And so if our background is not data and AI, we might seek pilot purgatory, because it makes us feel like we are being productive. It's like going to too many meetings. Meetings to some people means I'm being effective. Nope, it means you're being busy.
And so we have to teach that behaviour change of shifting beyond just pilot purgatory into proof of value. Because when value can be demonstrated, and again, there are different ways of measuring it, then people can get on board with it.
Greg Freeman:
Absolutely love that. I also love the phrase pilot purgatory, it's probably going to appear in some sort of LinkedIn post soon and I will credit you. I will tag you and credit you when that goes up.
OK, so what does good look like then? This is really what we're aiming for in everything, right, an understanding and clarity on what good looks like and most importantly, how to implement what good looks like. The next few slides are going to give you a view of things you can and should be measuring throughout the life cycle of a good data and AI literacy project or a good data and AI project more generally. And some of the first principles we should be thinking about when it comes to actually executing this effectively.
Jordan Morrow:
So, value chain thinking. This is first principles in my mind, not just from a data and AI perspective; it's also one of the biggest weaknesses of business more generally, and therefore actually the opportunity for data and AI literacy programmes to address. What happens upstream of me and downstream of me? What impact does that have on my world, our world, the business's world? How does data, in a data value chain context, flow through that? All things that most people are very bad at.
I call it level two thinking. Level one thinking is I sit on a machine and I push a button or I scan something through a checkout or a till. Level two thinking is what most people in business do, which is I've got this task, I've got my blinkers on, that's what I think about 99% of the time. Opening up to value chain thinking, level three thinking, isn't just a data literacy problem, it's a business literacy problem. How do I think about the horizontal impact of what we do? So yes, we absolutely want to talk about data and data value and data quality in their world, but we also have to open up that lineage of the business, the process, the operating model more generally, so that people through the AI literacy programme, through the data literacy programme, actually understand their business better, not just data and AI literacy.
It needs to be designed upfront, so all the things we talked about earlier: we are designing the programme, the key messaging, the customisation of the learning, and the measures in advance, so that we know what we're heading for. Because somebody's got to hear a message 20 times. Jason here works for Nike, and I reference Nike a lot: why, so many years on from when Shoe Dog started, do we still see Nike billboards everywhere we go? It's because they are constantly reiterating the message that we're aiming for a common goal, that we're aiming for the right person with the right trainer to be as successful as they can be. That's where your messaging needs to hit people 25 times, not just once. If you only talk to people about data governance once, they're never going to hear it, because once is not enough. So it's got to be designed upfront with the right outcomes in mind, and then it's got to be measured continuously.
Greg Freeman:
To Jordan's perspective earlier, are we using just lagging indicators, which makes it very difficult to see what's happening up until the point we know that we've done well or badly? Or have we got a continuous set of leading indicators that tell us that our literacy programme is or isn't working, so that we can pivot, we can create new interventions when we think about behaviour change? If you see you've got low adoption, there's no point finding that out at the end. You've got to find it out early and you've got to intervene through behavioural change to actually create impact. So what does good look like? It looks like a big story of those three things, and I'm sure Jordan will reinforce that.
Jordan Morrow:
Yeah, and yes, number one, it's the story: make sure we're building narratives. And number two, I like what Carissa said, that through all of this, critical and strategic thinking are of the utmost importance. I've mentioned going to the gym and working out. One of the things I do worry about is that AI gives us such confidently delivered answers, even though they might not be correct. It seems super intelligent because of its linguistic capability, and if we don't think critically and strategically about things, it's just like going to the gym: if you go for 6 months and then stop working out for 2, your gains are going to go.
Our brains right now, and I don't think it started with AI, I think it started with social media, and it's just getting exacerbated quicker with AI — if we're not flexing and working out our creative muscles, we are missing the mark and our ability to strategically think and critically think can go away. So I encourage you to find any hobby. In fact, Jason is on the call, and I mentioned last week at Nike, one of the things I do is poetry. I read it and I write it. And that gives my brain creative workouts, right? Trying to write a haiku on a certain topic to meet the syllable requirement, writing a poem using different things, that is creative juice. And just like going to the gym builds physical muscles, during the age of AI for these three things to work, you have to be flexing and working out your mental muscles too.
Greg Freeman:
Love that. Again, I won't say I'm definitely going to write a haiku for LinkedIn, but maybe you can write one for me and I'll put it up there. Done.
OK, so three layers of KPIs. We're going to start with the Foundational KPIs. And what we're talking about here is really setting the tone for understanding where we are today. People talk all the time about, we've spent 2 years building the foundations and people don't see why it's valuable. Well, actually, here are a few measures, data quality score, data trust index. I'd recommend you spend some time just looking into these because they're too deep to give you a quick overview of. Percentage single source of truth, so how much of our data estate is available in the way we want it to be via a single lens.
Fundamentally, this is a way of testing both the foundational work you've done in data and AI more generally and in your literacy programme, because if these things are improving, or at least you've managed to set a baseline for them, it gives you a story you can tell. If we've got low trust, we end up with constant re-checking of work and that awful back and forth between data and AI teams and the wider business, which is loads of wasted time. Time spent validating data, doing things manually that we don't need to be doing, and having conversations we don't need to be having, across tens of thousands of people, is a hidden cost centre, because it takes so long to get things right that we never move forward, and that's before the opportunity cost of not making the decision in the right way.
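To put an illustrative number on that hidden cost centre (every input below is an assumption, not a measured figure):

```python
# Illustrative: time spent re-checking and validating data across a
# large workforce. Swap in your own headcount, hours and cost rates.
people = 5_000                 # staff touching data regularly
hours_per_week = 1.5           # time each spends re-checking numbers
working_weeks = 46
loaded_hourly_cost = 45        # salary plus overheads, in pounds

annual_cost = people * hours_per_week * working_weeks * loaded_hourly_cost
print(f"Hidden validation cost: £{annual_cost:,.0f} per year")  # £15,525,000
```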
Then we've got our behaviour KPIs, so what percentage of decisions use data and AI across the organisation? The velocity to insight that Jordan mentioned earlier — how are we actually changing our behaviour so we increase and improve the velocity to insight? Data confidence scores, reduction in data team dependencies. One of our favourite measures is if a data team right now picks up 80% of the organisation's data help desk type issues, if you do a literacy programme well, that should go down. Yes, you'll have more people suggesting they've got problems because the more people that are data and AI literate, the more things that will be flagged, which is what you want. But actually, the same people coming back to you to ask you for the same help desk type issues should massively decline. So that should be measured. And what that means is if we've got people moving from "which number's right" to "what are we doing with that number," all of a sudden we have actually improved the decision making behaviours of the organisation.
So these are the types of behaviour KPIs you might want to think about tracking.
And then finally, you've got your business KPIs, the type that we all know and somewhat love, things like revenue uplift, cost avoided, risks reduced. And fundamentally, like we said in that initial checklist, if your KPIs for your data and AI literacy programme or your data and AI programme more generally don't directly roll up into these business KPIs, the likelihood of people caring is so much shallower. So let's make sure that we've considered all of these different types of measures, because if we get them tracked through the life cycle of the programme, you will have a really compelling story to tell, and you'll hopefully be able to evidence a really strong ROI and ROE.
These are just some things to think about as you're going through. Capability: the KPI would be skill attainment. It's actually a really popular one amongst learning people, but "I went to a Power BI training module" does not evidence skill attainment. I would want to measure Power BI adoption for that. So do we have a baseline for how many of the people we are training had already adopted Power BI, versus those who have adopted it afterwards? If we do, it shows both mindset and behaviour adoption: I now think about data and where to access it for decision making more often, I've attained the skills to do so, and I use the Power BI tool that I've been trained on more regularly.
And then you've got things like impact, so the KPI would be financial improvement, cost savings. How are we measuring the amount of money that's being saved by reducing that decision making cost centre? Or as you'll see in a moment, some of the things that we're measuring with clients in our current programmes — if people avoid a huge cost error because they've become more data literate, that in itself is a huge evidence of impact if we tell that story correctly. So Jordan, any thoughts on these types of KPIs and any that you've seen that are really, really potent?
Jordan Morrow:
Yeah, the one I really like is the behaviour example: fewer data team tickets. You had it on a previous slide too, that over-reliance on the data team for everything. Companies right now are facing economic uncertainty and tight budgets. The more you can put self-service in place, to a degree (there's always going to be a tie-back to a data team), the less burden you put on that team, and the more strategic they can be in their outcomes, while you spread data-driven and AI-driven decision-making throughout the organisation. So that's the one I'd emphasise most when we're looking at a KPI value chain and asking where we're truly seeing impactful measurement. Reduction in data team reliance is key, because like everybody, they have busy jobs, and the more we can free them up for more advanced work, the more you'll get done as a company. I really like it as a way to measure whether the literacy programme, and the data and AI programmes, are actually working. Are we relying on them less?
Greg Freeman:
100%. And in a similar vein, this is a really important one for people to get their heads around. In some ways, you may see more of those requests coming in, because if you now have 10,000 people who love data and AI and want help, that's a really positive thing: you see more. But you've got to track that the same people are coming in less, because they're improving their mindset, their behaviours and their skills.
Data quality is a very similar one. People say, oh well, we'll have fewer data quality issues flagged if we become more data literate. Actually, you'll more than likely see a fairly steep uptick in data and AI quality problems flagged initially, because the more people care, the more they'll notice that your CRM or your ERP has poor data in it. Then over time, in the systems where you've trained people, those flags should dwindle, because issues aren't just being flagged more often, they're also being fixed more often, because people care about it. So flagged and fixed make a really good pair of measures, and it all reduces the load on the data team by letting people self-serve more locally.
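A minimal sketch of tracking that pair, flagged versus fixed, assuming monthly counts from whatever data quality tracker you use (the numbers here are invented to show the expected shape):

```python
# Hypothetical monthly counts. Expect flags to spike after training
# (more eyes on the data), then taper as root causes get fixed.
monthly = [
    # (month, flagged, fixed)
    ("2025-01", 12, 4),
    ("2025-02", 45, 18),   # post-training spike
    ("2025-03", 38, 30),
    ("2025-04", 21, 24),   # fixes now outpacing new flags
]

open_backlog = 0
for month, flagged, fixed in monthly:
    open_backlog += flagged - fixed
    fix_rate = fixed / flagged
    print(f"{month}: flagged={flagged:3d} fixed={fixed:3d} "
          f"fix rate={fix_rate:.0%} open backlog={open_backlog}")
```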
So where is your programme bucket leaking? These are some things you may or may not be doing right now that could be hurting the long-term success and sustainability of your programme. After the training, is there any behavioural tracking? Any mindset tracking? Have we gone beyond "we've trained some people"? After the behaviour change, have we determined upfront that the linkage to the core KPIs is there? And are we then tracking that a year or two years later?
Helen's on the call, and this is a bit of an analogy-based one. Helen told me a really nice story from her organisation, Aston University. One of her favourite observations about the change driven by data and AI confidence, as they call it in their world, is that senior leaders now bring their own data story to the table in executive and leadership meetings. That isn't something you can hard-measure, but in terms of return on employee, and even return on culture, it's absolutely what a good data and AI culture looks like, and it's really what we're looking to achieve. If you can't track it, though, it's much more difficult to show.
And then after the KPI movement has happened, is there a financial translation? Have we got the finance people involved upfront to say: if this moves the needle, it will have an impact on the money we make, the money we spend, or, if we're in the government sector, the money our audience needs to spend for us to do our work? Whatever it might be, does it translate to finances?
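A minimal sketch of that financial translation, assuming the hours saved come from your behaviour tracking and the hourly rate is agreed with finance upfront (all figures here are invented):

```python
# Hypothetical, finance-agreed translation of a behaviour KPI into money:
# hours no longer spent on manual validation, at a fully loaded rate.
EMPLOYEES_AFFECTED = 2_000
HOURS_SAVED_PER_WEEK = 0.5        # per employee, from behaviour tracking
FULLY_LOADED_HOURLY_RATE = 45.0   # agree this with finance, not the data team
WORKING_WEEKS_PER_YEAR = 46

annual_saving = (EMPLOYEES_AFFECTED * HOURS_SAVED_PER_WEEK
                 * WORKING_WEEKS_PER_YEAR * FULLY_LOADED_HOURLY_RATE)
print(f"Estimated annual cost avoided: £{annual_saving:,.0f}")
# -> £2,070,000: a number finance helped define, so finance will defend it.
```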
And then I'll let Jordan wrap up after this, before we do five minutes of Q&A. What you should be thinking about over the next 6 to 12 months is: are you building a capability-to-KPI mapping system? Is it just about capability, or is it about how we map those KPIs to execution and outcomes? Can we track benefits across the whole organisation? At one of these webinars soon we'll probably talk more about our learner value realisation product, which is all about that benefits register at scale.
What is the measurement cadence? Monthly? Quarterly? Why choose monthly for some measures and quarterly for others? Is each one a leading indicator or a lagging indicator? And have we got ownership of value: do we agree that some of it represents the data office well, but also represents the business units and the business unit leaders you're working with? Because if you can be part of making them look good, they'll want to work with you over and over again.
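A minimal sketch of a KPI register that answers those questions per measure; the entries and field names are invented for illustration:

```python
# Hypothetical KPI register: each entry records cadence, indicator type,
# an owner, and the business KPI it rolls up to.
kpi_register = [
    {
        "kpi": "Repeat help desk requesters",
        "type": "behaviour",
        "indicator": "leading",    # moves before the business KPIs do
        "cadence": "monthly",
        "owner": "Data office",
        "rolls_up_to": "Cost avoided (decision-making cost centre)",
    },
    {
        "kpi": "Revenue uplift from data-informed pricing",
        "type": "business",
        "indicator": "lagging",    # confirms value after the fact
        "cadence": "quarterly",
        "owner": "Commercial director",  # a business unit leader, not the data team
        "rolls_up_to": "Enterprise revenue KPI",
    },
]

for entry in kpi_register:
    print(f"{entry['kpi']} ({entry['indicator']}, {entry['cadence']}) "
          f"-> owner: {entry['owner']}")
```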
So Jordan, do you want to do a quick wrap up about the key focuses?
Jordan Morrow:
Yeah, and I'll keep this short so we have time for any last questions, though I think we've gotten through them as I've been watching the chat. The key here is to let AI be your partner as you build this out for your organisation. It's not one size fits all: some organisations will be further along, some will be behind. Let whatever AI tool your organisation uses be the partner that helps build it. What your KPI mapping system looks like can differ from one organisation to another: how you register benefits, what measurement cadence fits your culture. Let AI help you build this; that's one of the things it should be doing as your partner. You go in and prompt it: this is what I want to do over the next 12 months, here are the key things, here's what I learned in this webinar, how do I make it grow?
And as you do that, and I love how Helen put it, yes, your data requests should go up, but they get to be more strategic, and that's what this can do over the next 6 to 12 months. You get away from some of the mundane and into the strategy, and I think that is absolutely critical.
Greg Freeman:
Wonderful. And yes, the chat has been absolutely on fire today compared to our normal webinars. I loved Helen's point: the volume will go up, but the quality will hopefully go up exponentially, and over time the same people will find ways to solve their own problems, so you'll just have more requests coming from different people. That's really what a good literacy and fluency programme culture looks like.
So Jess has asked a question, which I'll read out loud so you can answer it: how do you get AI strategy and change management strategy into the same conversation, and who in the organisation needs to be in the room?
Jordan Morrow:
So to me, they should never sit separately. Change management strategy and AI strategy are both part of the business strategy, so any time you talk about one of them, or about the business strategy itself, they're one and the same. How does AI help us build the business strategy? How do AI and data help us succeed in the business strategy? And then change management: how do we make those things work in practice?
Now, that's easier said than done. The people in the room need to be the senior leaders of data and AI and senior leaders in the company, but that gets hard. In big organisations, sometimes you need to start small, grassroots. This is something Valerie, who's been on the call, and I used to talk about. You do need senior leaders to be a part of it, but you also need a grassroots movement to make this happen.
And Jess, to make that happen through the grassroots movement, this is why I don't like proof of concept; I like proof of value. Once you've built maybe three to five proofs of value that actually worked, you can go through the organisation and it's no longer just talk, it's tangible, practical application. So get going with some grassroots things where you're proving: this isn't the entire business strategy, but it's what my team has been tasked with doing in 2026. How can AI be a part of it? How can change management be a part of these three initiatives my team has to accomplish? Wonderful. You've just built the proof, and then it starts to go up the chain.
As it goes up the chain, people start to love it. Oh my gosh, how did you do this? Well, I used AI, data and change management, and we got 30% more productive on this and that. It starts to help you grow. So start grassroots, get the conversation going at the senior leadership level as much as you can, and then push it forward.
Greg Freeman:
Perfect. I'm very conscious of people's time and we're at the hour now, so thank you so much to everyone who's dialled in and contributed so much to the chat; we love it when the chat's on fire. In terms of connecting with us, we're both available on LinkedIn, and you may or may not regret connecting with either of us, given how much we'll dominate your timeline. But honestly, it's been a brilliant conversation.
If you do want to explore data and AI literacy programmes with a partner, we would obviously love to have that conversation. Do reach out via the DL Academy website, www.dl-academy.com. Otherwise, I look forward to seeing you all at the next one of these.
Thank you everyone. Bye!