Enterprises are investing in Agentic AI. Few are preparing their people.
That’s the tension we tackled in our latest webinar: How to empower your people to become agent bosses, hosted by Greg Freeman (CEO, Data Literacy Academy) and Stephanie Gradwell (Managing Partner, Pendle).
We've seen how GenAI has changed the way people work. Now Agentic AI is the new kid on the block: artificial intelligence that gives machines the autonomy to help humans work better by actively taking on workflows, decisions, and execution at scale.
But what often gets deprioritised in the conversation is how to prepare your workforce to lead, trust, and work alongside AI agents.
What is Agentic AI?
Agentic AI refers to systems that don’t just generate content or answer prompts. These are autonomous agents that set goals, learn from their environment, and make decisions with minimal human input. That shift completely changes ways of working, and it requires a new way of thinking to go alongside it.
Here are a few examples of what this can look like in practice. Price optimisation agents can monitor competitors and adjust pricing in real time. Fraud detection agents can monitor transactions, flag suspicious activity, and take action, all before a human ever gets involved.
In short: they’re not your assistant anymore. They’re your newest hire.
The Agent boss challenge
Here’s the problem: while the tech is advancing fast, most people aren’t ready to work this way.
In a live poll during the webinar, over half the audience believed that fewer than 20% of their colleagues would even know what Agentic AI is. And they’re right.
The majority of employees, even in data-driven businesses, aren’t equipped to lead or collaborate with AI agents. They’re stuck in phase one: using AI as a souped-up writing assistant.
The leap to phase three, where humans set the strategy and agents execute it, requires a radical rethink of roles, workflows, and skills.
This leads us into the biggest mistake many businesses are making. They're spending millions on AI platforms… and pennies on people.
Greg shared one example: a company spending £100m on Google tech, but just £50k on training their workforce to use it. And then leadership will wonder why ROI on AI is lagging.
Agentic AI doesn’t work without organisational readiness. That means:
- New operating models
- AI-augmented workflow design
- Clear governance and risk frameworks
- And most importantly, empowered employees who know how to lead, trust, and question AI
The SPACE model: Skills + Strategy + Support
Stephanie introduced the SPACE model, a framework for empowering employees to become “agent bosses.” It covers:
- Skills development: Not to code, but to understand how agents work and where to delegate.
- Purpose alignment: Clarity on strategic goals is essential when employees move from doing to directing.
- Autonomy: Employees must feel trusted to lead agents and make decisions.
- Community: People learn from people. Build Communities of Practice around AI.
- Engagement: Rethink recognition and reward. Being an effective agent leader must count.
Why your people need common understanding and language
Data Literacy Academy’s AI Literacy Curve shows that even a modest increase in data and AI literacy significantly boosts confidence and adoption.
You’re not trying to turn your workforce into data scientists. But you do need them to:
- Understand what Agentic AI is (and isn’t)
- Know how to use it responsibly
- Trust it enough to delegate real work
- Feel confident experimenting with it
And that starts with language. But it also requires shared mental models, curiosity, confidence to question and challenge systems, and training rooted in real business use cases, not theory.
And the growing gap between leaders and laggards is staggering.
Companies that get stuck in legacy models could face up to 40% higher operational costs within a year.
Competitors with AI-augmented teams are already making decisions 50% faster.
And as GenAI becomes ubiquitous, employee engagement will suffer in organisations that fail to empower their people, with drops of up to 35%, according to Stephanie’s analysis.
So, where should you start?
Greg and Stephanie outlined a clear 3-step strategy:
- Launch a targeted AI literacy programme: Focus on practical use cases, governance, and common language.
- Redesign core workflows: Don’t layer Agentic AI on top. Rethink from the ground up.
- Create agent boss career pathways: Recognise and reward leadership in this new hybrid world.
And importantly: build in the metrics from day one. If you can’t measure the ROI, you’ll struggle to scale the solution or defend its impact.
A final note: Be the storyteller-in-chief
The call to action was clear: If you want this to work, you have to lead the story. Senior leaders won’t move without a compelling narrative about how AI helps the business win, not just how it works. And while a lot of executives are jumping on to the AI hype train, it's up to data and AI leaders to make that impact tangible.
So be the one who tells that story. Who finds the right use cases and brings the people with you. Because when done right, Agentic AI can be the next wave of technology that unlocks incredible productivity, value and innovation. But it needs to be done with care, forethought and investment in your people to make the most of it.
Greg Freeman: Let's do this. The topic for today's webinar is a slightly different one in the sense that very often at Data Literacy Academy we talk entirely about data literacy or AI literacy. AI literacy definitely plays its part in this conversation, but today I am absolutely delighted to be joined by Steph, who I will let introduce herself in a second, on how to empower your people to become agent bosses in the age of Agentic AI, which is a probably more advanced topic than we normally talk about. I'm quite excited for this one. For anyone who doesn't know me, my name is Greg Freeman. I'm the CEO and founder of Data Literacy Academy. I've been very close to most of our large programmes and projects over the last few years. I typically take the lead on these webinars, but today it's really nice to be joined by Steph and share the stage again. I will let Steph introduce herself.
Stephanie Gradwell: Hi everyone. I'm Steph. I'm the managing partner at Pendle. Pendle is a boutique data and AI consultancy and we really focus on the development, deployment and assurance of AI solutions but in a responsible way. This topic is actually really close to my heart. It's something we've actually been doing a lot of work on recently. Hopefully you'll take away some really key nuggets of how, if you did want to get started on Agentic Solutions, how you do that from more of an operational and people perspective rather than deep in the tech. Thank you very much.
Greg: And it's a great topic. I think it's incredibly front of mind for a lot of people. It's probably the new kid on the mainstream block right now -- Agentic. I was looking the other day at the Gartner hype cycle around AI and, not that I'm always the biggest Gartner fan, but they have their place in the community and it's super interesting to see that already generative AI is on its way down into the trough of disillusionment and we're now moving into the peak topics being agentic AI, responsible AI, data quality and readiness for AI. It definitely feels timely in that sense, and I think there are going to be huge opportunities for large organisations to really make the most of it. First of all, as I said, we'll get pretty quickly into the survey. What percentage of your business colleagues do you believe would know what Agentic AI is? By business colleagues, we largely want you to focus on the people who wouldn't consider themselves data professionals, but of course the data professional population is also there too. What percentage of your business colleagues do you believe would know what Agentic AI is? Nought to 20%, 20 to 40%, 40 to 60%, 60 to 80 or 80 to 100?
I found the answer that we got to this when we presented on stage really interesting. Let's see what the answers are. There we go. Awesome. Okay. This is a more realistic answer. I think that's good. Over half of the people in the room feel that nought to 20% would know what Agentic AI is. I think that's very fair. I think it's interesting that one person feels that everyone in their business or nearly everyone in their business would know what Agentic AI is. As I said at the time that we presented this on stage, I hope that's because you're part of a data consultancy or potentially you work with Steph. That might be the reason. But ultimately I think if you're in a large organisation the answers towards the top, the 56% of people who are nought to 20%, 17% for under 60%, is really the more realistic option. I believe if you're in a large enterprise to truly know what Agentic AI is, you will be in the nought to 20%. From all the conversations we have, from all the learners and the employees of large organisations that we meet, things like Agentic AI are just not something they've really considered and very often can blow their mind. I would suggest if you're in a larger enterprise, it'll be in that nought to 20% bracket, but really appreciate everyone feeding in and it's good to get a realistic view. With that in mind, I'll hand over to Steph to get us into the details of why this is so important as businesses want to start adopting.
Stephanie: You've obviously all joined this webinar for a reason. If you don't know what Agentic AI is, I'll just give you a quick definition. It's really an element of artificial intelligence that can autonomously make decisions. How it's different is it actually sets goals, performs tasks with limited human supervision. There should always be a human in the loop in some way. In addition to that it continuously learns and adapts to the environment. It's not a static model or AI solution that you build and, as Greg said, this is the new hot kid on the block and enterprises are significantly investing in this space to improve efficiency. What they're doing with it is executing complex workflows, ensuring they can make data-driven decisions and being able to scale really quickly without adding a significant amount of additional headcount, which used to be the formula: add more headcount, get more growth. That's being flipped on its head and we'll talk about that a little bit. The basis of today is really to say that the newest team member that you get into your business in the future won't be an additional human, it will potentially be an AI agent. As employees, how do you work with that AI agent and become empowered to really understand and get the most out of it without being scared, fearful, and feeling like it's taking over everything that you want to do? It's definitely not that. It takes away all pain points so you can focus more on the strategic value-add work.
Just to give you a bit of an overview of the capability and how it glide paths up. I think probably a lot of you on the call, and probably everyone in the country now, is at this stage in terms of phase one where it's a human and an AI assistant. That's where every employee is using an AI-assisted productivity tool. That might be AI writing assistance such as ChatGPT or Copilot, or something that organises your emails for you or transcribes audio, something really where you have to activate it and it does quite a simple task. Then you move into phase two and we're seeing some companies get there but it's not the general norm at the moment. This is where an AI agent actually works alongside a colleague. The agent is directed to assist in real time. An example would be a customer services assistant where if you are ringing up to ask about where's my order or I'm not happy about this product that you've sent me, an AI agent would essentially be analysing a conversation, suggesting optimal replies to that customer service agent that they can go back to the customer with and automatically pulling up the relevant information for that customer such as their purchase history. What it means is that human can really focus on the human interaction but allow a lot of the leg work that they would have done to understand why the customer might be calling to be done for them. We're seeing some of this come into the market. If you've ever used the Comet technology that's come through Perplexity, or ChatGPT-5 where you can put in a prompt and essentially it'll go away and build and do your shopping order and things for you, that's essentially the same thing. It's acting autonomously but it's not taking over everything.
And then lastly, this is a true agentic solution: where humans set the strategy but the agents execute the business processes. An example for this would be if you were working as an e-commerce director in a business and you set a pricing strategy. That's within your role remit and you said I want to get a 25% profit margin but still remain competitive with our top three rivals in the market. What the AI agent would do is look at the pricing for thousands of products that you list but also your competitors list and also all in the market. The agent would then continuously monitor the competition and how their prices are moving and what demand they are seeing and then adjust prices in real time on the company's website to ensure it met those strategic goals. The key difference here is that the human is only the director who essentially sets the strategy and then maybe at the end reviews performance to understand if there's a broader consequence such as brand reputation of doing it like this, rather than actively going in, looking at the market, changing the pricing. It's a completely different view of how operational process flows work. But it's not all doom and gloom. That's a real competitive advantage if you're in phase three. But the reality is many organisations are just stuck in phase one. They are using it as sophisticated productivity tools, but they haven't yet realised the potential of using AI agents and agentic solutions as autonomous digital colleagues.
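As an editorial illustration, the phase-three pricing example Stephanie describes can be sketched in a few lines. Everything here is hypothetical (the `PricingStrategy` fields and the `propose_price` helper are made-up names, and real systems would pull live competitor feeds); the point is the division of labour: the human "agent boss" sets the strategy once, and the agent applies it autonomously, product by product, cycle by cycle.

```python
from dataclasses import dataclass

@dataclass
class PricingStrategy:
    """Set once by the human director; never touched by the agent."""
    target_margin: float  # e.g. 0.25 for a 25% profit margin
    max_premium: float    # stay within this fraction above the cheapest rival

def propose_price(cost: float, competitor_prices: list[float],
                  strategy: PricingStrategy) -> float:
    """One autonomous pricing decision the agent makes each cycle."""
    # Lowest price that still hits the human-set margin target.
    floor = cost / (1 - strategy.target_margin)
    # Highest price that still counts as competitive under the strategy.
    ceiling = min(competitor_prices) * (1 + strategy.max_premium)
    # Price as high as the competitive band allows, but never below the
    # margin floor -- the strategy takes priority over matching rivals.
    return max(floor, ceiling)
```

In a real deployment the loop around this function would re-fetch competitor prices continuously and the human would only review outcomes (for example, brand-reputation effects), exactly as described above.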
Greg: It's super interesting, isn't it? Because that example you've just used there is such a shift in terms of people's acceptance of trust in a machine. The idea that people who've always done their job one way now direct, in essence, draw up the playbook for the agents and then let the agents make the plays and all those types of things. And I think it fits in really nicely with this view of the iceberg. Something that I do think people are aware of is that probably a lot of the important stuff that happens within a business, especially from a data perspective and the way that the data office and the AI office engage with the business, is the tip of the iceberg and it's just the stuff that's seen on the front line. But I guess one thing that I think is going to be really difficult for people as we move through this process of more normalisation of that type of experience for employees, and especially if you're running a team and a department and now you've got a team of agents basically doing your pricing for you instead of pricing analysts, is that at the moment everything we see is that there's so much investment going into buying the software, planning for buying the software, embedding the software, implementing the software from a technology perspective. But I think the bit that we need to get a lot better at, and I know I'm biased and you're probably quite biased because the operational design is a passion of yours, is that for what you've just described, we're going to have to have such an enhanced level of data and AI literacy, a new way of operating within the business, a pure operating model shift that means that we can actually trust the AI agents to work within our current workflows and our current ways of working. I had a really interesting conversation not long ago with a CEO who said they've kind of really started to invest in this area, not agentic but bringing the company on the journey. 
I said wonderful, I've been waiting to hear that from you for a while. What are we talking about here? And he was like, well we spend a lot of money with Google, 100 million quid a year with Google. It's a big old budget. And I think we've got to start to get the people piece to catch up. And I said again, love to hear that. I agree. One of my things that I often say on stage when people let me talk is if you look at your budgets and you've spent 10x on the technology what you have on your people, it might be the reason that this isn't working for you and that you might be struggling to get it embedded. His answer to the £100 million spend on Google was a 50 grand budget for the people side. And I was just like, well, that's where we're going wrong fundamentally. I think we've got to recognise that there's going to be a massive obsession over AI tools, but the fundamental transformation that comes surrounding that is really where the area of focus needs to be. And again, if you're sat watching this webinar, it's probably actually because you're more interested in the people and culture side and that's why you follow our business. But it's that message that we've got to share: for the type of amazing use case that Steph just explained, and the fact that that is going to become the new reality with time for a lot of large organisations, we have to do the below-the-iceberg stuff, otherwise it just isn't going to work. And the blind spot that everybody's just going to want it and they're just going to trust it -- I think that is a real problem for people.
Stephanie: Yeah, I would just add Greg that if you are one of those companies that have invested in AI but you're not seeing a return yet, I would really look at operating model and literacy because the tech is brilliant. It's not normally the tech that is the issue. It's the broader organism around it that essentially means that you're not driving the benefit out the back of it. If you are in that position, this call is absolutely brilliant for it.
Greg: 100%. I mean we often talk about the ADKAR model of change. It's probably the best known or most common change model these days. That reinforcement bit, that final R within the ADKAR acronym, is really everything else that you have to think about to actually enable the change to happen. And of course, technology access is one of those, but actually how you frame this around HR and the people team and all those types of things, this is going to be an environment change for the whole organisation. That isn't just an AI team silo. It's a whole business transformation. And I think it's super interesting and then to come back to the point we've made there, I think there's just too many people who expect that this area, and I don't know if you can actually see my arrow on my screen to be fair, good, using it as a pointer, everybody wants the visionary leaders, the people that are highly intelligent, highly empowered, and want to be working with this type of technology on a daily basis. But ultimately, whether you like it or not, and all the research suggests, 80% of your organisation is more likely to be closer to the bottom left than they are to the top right, which means that you're going to end up in a situation where you're going technology first, but it actually ignores the empowerment bit and the bit that brings them on the journey to want to use that intelligence or unlock those constraints. I think what we've got to do is obviously identify those enthusiastic supporters, they're really important, but we've got to be able to unlock the power of it with the wider body of people. If you are sat here thinking everybody's going to want to be on that journey, as I've said a couple of times now, you are probably blinded by the fact that data and AI teams mostly speak to the best of the business. You may feel that you've got lots of enthusiastic supporters, therefore everybody is. 
Actually, the people who speak to you every day as a data and AI team are the ones who, even if you still think they're not very good or they don't know enough or aren't they frustrating and challenging to work with because they give us poor inputs or whatever it might be, are actually the best of the bunch. It's really the rest of the organisation that you need to worry about. In terms of setting your managers up to be able to deal with this new world of hitting use cases like Steph described, you're going to have to do a lot more work with them to make that happen.
Stephanie: This essentially talks about how you empower people to become an agent boss. It's going to turn into a very different hierarchical structure. Instead of multiple layers of hierarchy as it currently is now in organisations, the hierarchy structure is probably going to get a lot flatter. What that means is there's greater autonomy for people to go and set their own strategy for areas, to run an area of the business, because a lot more people will be empowered to make those decisions. But that can be quite scary for people to have that level of responsibility and not feel that if they're not doing the doing they're not doing a good job. How you are rewarded will shift. This model, the SPACE model, is really well researched. It's not something specific to the data environment because this is just about how you empower and engage colleagues. It identifies five really critical dimensions of how you empower colleagues within the workplace. The first, which we've touched on, is obviously skills development. You've really got to expand your skills base to understand how agents are built. You don't need to build them, but you need to understand how they work and how they're built. Delegation strategies: how are you going to delegate work across those agents, what do you want to retain as human, and what do you want to hand over to an agent to do? And AI workflow design: AI workflow design is completely different to standard process management and we'll go through that at a really top-line level later on. Purpose alignment: it's really crucial that as employees transition from task execution to strategic direction-setting that they're really clear on what their purpose is in the organisation, aligned back to the overall business strategy. Autonomy: autonomy takes on a new meaning when employees manage both human team members and AI agents. And to be honest, this is one area where this hasn't been done before. This is all new. People are learning as they go. 
That level of autonomy will essentially be set by the risk level and risk preference within your business of whether you want everything to be run by AI agents. If you look at the likes of Duolingo and Shopify, it's actually getting a bit out of control and they're not seeing the results they want, so they're bringing more people back. It's a real balance around the risk profile and what your organisation wants to take. Community building: that's going to encompass both human relationships and human-AI collaboration partners. And then engagement mechanisms: they need to evolve to recognise and reward effective agent leadership. Again, if there's anyone from HR on this call, you really need to be thinking about how you are rewarding colleagues for going on this journey with you and being an effective agent leader.
Greg: I pressed the wrong button. I guess from our side, for everyone's context, the data and AI literacy curve is something that Data Literacy Academy have created and it's a people version of the data maturity curve that we all know and sometimes love, not guaranteed. For us this is really just trying to represent the fact that it is possible for us to win this battle and it is possible for us to win this race as an industry. But I think we are racing so fast and we're probably not, according to the data, bringing people with us. Based on that example of understanding enough to trust it to then use it or allow it to run, from the e-commerce pricing example earlier, there is lots of evidence to suggest that if a person is data or AI literate they're over 50% more likely to feel empowered or trust this type of work. And that in itself is a really important facet of getting this to land and getting it to be used in businesses. But at the same time, again, over 50% of people say they've received little to no AI or data training any time in their entire career. If we're aiming to get more people to that level of empowerment, we have to do our bit and get them educated, help them to feel comfortable with it. And it's not going to take all the education in the world in my opinion. I think what we've seen quite a lot, and we're about 12 months into having the AI part of this curve now, we announced it September last year and built this into the data literacy curve that we already had, we're only really talking about needing to get people to this level. What we're seeing is that the adoption of generative AI and the prevalence of generative AI has meant that actually all you've got to do to use generative AI well, in the right way, in a responsible way, and I think your phrasing at Pendle around responsible AI and the focus there is really sensible, is understand why it's going to be valuable, understand how to talk about it, feel comfortable with how to use it.
And all of that can come before you even get into what would have traditionally been the more complex data stuff. When people have trained everybody on the BI tool before, probably not actually even necessary to get generative AI and agent AI adopted. I think as long as we give people a level of common language, that governance piece, what we consider democratised soft skills, data skills that are specific to data but not hard skills, governance, quality, management, that type of thing, you are going to need your people to have that level of understanding, education, and trust if they're going to be the people that, even to use generative AI in a lot of cases, but definitely to engage and to feel comfortable managing, working, and trusting Agentic AI. I think to say that we're not necessarily saying that you have to be up here, you're going to have some really smart people up here building your AI agents, but in order to get to this middle bracket of maturity with your people, you are going to have to bring them somewhat on the journey through common language, common understanding of the value, common understanding of why it's safe to use it within their roles. Getting them to make sure that they're able to use it responsibly, because otherwise they'll feel like they're going to make some mistake that is going to cost the business a load of money or cost themselves some jobs. For us it's just about saying that we've got to be investing at least in this area of data and AI literacy to give people the empowerment to be successful with it.
Greg: And then to do that, I think to come back to Gartner, AI literacy is far more than proficiency in the technical tools. It includes even knowing what types of AI exist. I think one of the major problems we've got as a society right now, and that's a grand sweeping statement that I actually maybe regret; there are probably bigger problems, but if we're going to get AI to be truly valuable, we have to stop people seeing AI as just generative AI. People have to know more than that. The big, hefty problems, and Steph, to your point earlier, those phase three problems, whether it's in Agentic AI or whether it's in machine learning or whatever it might be, they're the things that are really going to have material value for the business. It's not going to be somebody being able to write an email faster or in the right tone of voice. We've got to help people be aware of what types exist. We've got to be working to identify strategic use cases, not just operational use cases. And that's a great point from Gartner because that's ultimately where the numbers come from, the metrics that pay the bills. Using AI safely and responsibly, developing the technical skills as well, is definitely part of it. I think we've got to get our heads around that. And you've got some options. Of course, you've got your formal learning paths. We don't deliver apprenticeships as a company, but I think for certain groups when it comes to AI and Agentic AI, the apprenticeship scheme can be a really good way of them learning. I'd say that's probably more for your data professionals who you want to go higher up that curve I just showed, or it might be for people that are embedded at a local level and kind of want to become data professionals and they want that really deep learning. That formal learning, whether it's apprenticeship or whether it's more the type of course we deliver, is one way of doing it. 
Your social learning, Steph mentioned in the SPACE model that community element and how we bring the community together around the journey of Agentic AI. If you haven't got communities of practice in place right now as part of your data and AI literacy programme, you're missing a massive trick. It's how you're going to reinforce the engagement, reinforce the learning, apply it more accurately to your space and your environment. And then you've got your on-the-job experiential learning. That's where, whether it's done with a business like Pendle where you can bring a partner like Steph in to help you with those experiments and guide you through your first sets of experiments, that's one way of doing it. Equally, you might have teams internally who are able to do that. But if you are sat in that data and AI team, you need to be out there in the business helping people create those strategic pilots that are going to help them understand this space, because it's going to be when it's directly applied to their world that they really get it and they really start to understand the value and are able to evidence the value back to the people that hold the purse strings. Those are your three options really. You've got your formal learning, you've got your social learning, you've got your on the job experiential learning. And I think it's really for all the people in this room to set up programmes that impact all three. And that's the only way you're really going to get a mass majority of people to engage with this space.
Oh, hang on. Screen. There you go. Yep. Go back on. Yep. There we go.
Stephanie: I'm going to switch topic a little bit. You've now got the literacy foundations. But the true transformation requires you to fundamentally reimagine how the work gets done. Historically you've probably moved from left to right in terms of a process flow, there's a start and an end. With agentic solutions and hybrid intelligence it's circular. You start to radically reimagine how things get done. An example here is essentially showing if you were going to build an agentic solution with lots of agents, how that could possibly work. The example that I like to use that can really bring it to life for people is credit card fraud. Apologies if you've heard this before, but I think it's a really simple, easy way to understand how the solution works and where the human comes in the loop. A bank will set the strategy. Whoever is in charge of credit card fraud will say I'm only willing to accept a tolerance level of credit card fraud under £5,000 for any account because it's not worth us chasing after that. It would cost us more to recover the money than it would to just take that as a loss. Then how the credit card fraud detection system would work is you would have a monitoring agent. That would essentially be looking at all your spend information. It might be looking at the location that you are in, what you're doing on a website, as much data as it could possibly take around you, in line with GDPR protections of course. And then it would use that and pass it to an analyst agent. What that would be looking at is: you've spent some money on your card, how likely is it that that might be fraud? And it will go through a number of different machine learning models that they've devised to come up with a probability score of it being fraud based on what it's learned previously and what the agent knows. It would then move to execution. If it's low risk, low probability that it's fraud, it will let the transaction go through.
If it's medium risk, it might send a text message to your phone asking you to verify: is this you, are you actually making this transaction? You can press yes or no. And if there's a really high risk that it's not you, it will block the payment and pass it to an actual call handler in the bank, who will ask you to get in touch. You would then ring the bank to either release the payment, or they'll tell you someone is trying to access your card and ask whether that's true.

The optimiser is almost like the reinforcement learning element. It takes the learnings from whatever happened and feeds them back into the circular process. This doesn't end; it's constantly re-evaluating, re-learning, and changing those materiality rules within the analyst agent.

If you were designing a solution like this today in the left-to-right way, it just wouldn't work. It's got to have the human strategy in the centre, setting the guard rails and the overarching strategy, with really sensible steps around: how do we monitor and capture, how do we analyse, how do we execute, and how do we optimise? This is the framework we use to build the majority of the AI or agentic solutions for clients, because they pretty much follow the same pattern. There are some nuances, but it's always essentially information gathering and monitoring, analysis, execution, and optimisation, with the overall strategy at the centre.

The key message here is: don't think about adopting agentic AI as just fixing one element of your current process. You have to reimagine the whole process and how you want it to work, which again can be quite scary. As Greg said, do an MVP, test and iterate. It might never reach the level of accuracy you want right now; the technology is always improving, so it might be one you put on hold for now. But don't plonk it on top of the existing process, because it won't work.
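The circular workflow Stephanie describes can be sketched in code. This is a minimal, hypothetical illustration only: the agent names, thresholds, and the toy scoring rule are invented for this sketch, and a real bank would use trained machine learning models rather than the hand-written heuristic below.

```python
# Illustrative sketch of the circular fraud-detection workflow described above.
# All names and thresholds are hypothetical; a real system would use ML models,
# not the toy scoring rule below.

FRAUD_TOLERANCE_GBP = 5_000  # human-set strategy: losses under this aren't chased


def monitor(transaction):
    """Monitoring agent: gather whatever signals are available about the spend."""
    return {
        "amount": transaction["amount"],
        "location_mismatch": transaction.get("location") != transaction.get("home_location"),
    }


def analyse(signals):
    """Analyst agent: turn signals into a fraud probability (toy heuristic here)."""
    score = 0.1
    if signals["location_mismatch"]:
        score += 0.5
    if signals["amount"] > FRAUD_TOLERANCE_GBP:
        score += 0.3
    return min(score, 1.0)


def execute(score):
    """Execution agent: low risk passes, medium asks the customer, high escalates."""
    if score < 0.3:
        return "approve"
    if score < 0.7:
        return "verify_by_text"
    return "block_and_escalate_to_human"


def optimise(score, outcome, history):
    """Optimiser: feed each result back so thresholds can be re-evaluated later."""
    history.append((score, outcome))
    return history


# One pass round the loop for a suspicious-looking transaction
history = []
tx = {"amount": 6_000, "location": "Lisbon", "home_location": "Leeds"}
score = analyse(monitor(tx))
decision = execute(score)          # high score, so it escalates to a human
optimise(score, decision, history)  # learnings feed back into the circle
```

The point of the shape, rather than the toy logic, is that the human sets the strategy constant at the top, the agents handle monitor, analyse, and execute, and the optimiser closes the loop instead of the flow ending.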
Greg: Nice.
Stephanie: And then if we move on to the five pillars of getting ready for agent-ready operations.

The first is codifying the knowledge of your workers. What I mean by this is: what are the processes they follow, but more importantly, what is the tacit knowledge that only they hold, the small nuances in how things operate or how the market works? That knowledge is so important to getting the output correct in these agentic solutions.

The next is systems integration and architecture. Can the agents actually work? Can they access multiple systems seamlessly? Do they have robust APIs that work? Are the right data quality standards in place? That's the more technical element you'd look for.

We touched on it before, but what hybrid workflow designs do you need? How much human interaction do you want versus agent execution? What are the handoff protocols between the strategist and the agent operator? What are your feedback loops, and how do they work for continuous process optimisation?

Then, everyone loves governance: what is your governance process around all of this? You're constantly monitoring not just the performance of these AI agents for model drift and the like, but also that they're actually doing what you want them to do, all of the time. The risk management framework steps into that same governance space. What are your failsafe triggers and human oversight mechanisms? What are your interpretability standards for agent decision-making, and how transparent do you want them to be? We are a responsible AI provider, so everything we do is transparent, and that enables true accountability for the owner of the agentic solution, because they really can see and understand how it works. If you can see it, that's when you can build trust and take accountability for something.

And then the last thing, but definitely not least, is performance measurement evolution.
Like anything in business, if it doesn't give a return on investment, whether that's profit or social good if you're in the charity sector, it's highly unlikely you'll be able to invest in future solutions. So what are your performance metrics going to be? I would set those up right from the very start. What does good look like in your process today, and what do you want good to look like once you've implemented your agentic solutions? You then need to track how productivity and the process are actually working. Is it taking us longer to deliver than it would have previously? How productive are the teams being? That way you can really understand the improvements the agents are delivering. And monitor the quality and outcomes of the human-agent collaboration too; that's really important, again, for building trust and empowerment. Once you've got those communities, once someone has done it, can see and understand the benefits, and has someone in their own organisation talking about it, the hype starts and the rest follow. It's really important you've got those KPI measurements set up front.
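Baselining and then tracking process KPIs, as Stephanie suggests, can be as simple as recording the same metrics before and after the rollout and comparing them. A minimal sketch; the metric names and figures here are invented for illustration:

```python
# Minimal sketch of baselining process KPIs before an agentic rollout and
# comparing them afterwards. Metric names and numbers are invented.

baseline = {"avg_handling_minutes": 42.0, "cases_per_person_per_day": 11}
after_rollout = {"avg_handling_minutes": 28.0, "cases_per_person_per_day": 17}


def kpi_deltas(before, after):
    """Percentage change for each metric present in both snapshots."""
    return {
        k: round((after[k] - before[k]) / before[k] * 100, 1)
        for k in before
        if k in after
    }


deltas = kpi_deltas(baseline, after_rollout)
# Handling time down ~33%, throughput up ~55% -- evidence for (or against) the MVP
```

The design point is that the baseline must exist before the agents go live; without it there is no honest way to show whether the process got faster or slower.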
Greg: I think what's exciting about pillar five is that, for me, it's the bit the data industry has always been really bad at, ironically, given what the data industry does for a living: setting up the measurement and the process to prove the value. This should be much more measurable, much more evidenceable, and it would therefore be absolutely remiss for people to build these and start their pilots without actually having a plan for the ROI. Not to plug another webinar, but we did one on the ROI of data literacy, which you can get on our website; it really came down to the ROI of data and building data products. I think that is going to be the thing that makes your experiments successful or not. And through the Pendle lens of "let us work with you to make sure we've got metrics in place, because this isn't going to work for either of us if it doesn't", that knowledge of metrics and measurement is going to be the differentiator, I'd say.
Stephanie: Yeah, definitely.
Greg: And then for me, and I'm very conscious that I said this would take 25 minutes and I'm just an outright liar, because without the panic of being in front of 300 people we're moving much slower than last time we did this. But I think there's a real misconception in data and business generally: that there's no cost of inaction. Well, here there absolutely is a cost of inaction. The competitive advantage is massive if we get this right, and therefore doing nothing about it, not starting to pilot things, not starting to bring people on the journey, not starting to help the business through this lens, is really almost unacceptable from a business perspective. I've been doing a lot of thinking at the moment about the main strategic decision around governance: are we taking an offensive or a defensive approach? For this type of opportunity the foundations have to be there, which means that if you are the most governance-defensive company in the world, this is the time to stop being that, a little bit. Do it in the right way, absolutely do it in a responsible way, but you're not going to be able to innovate in these spaces if you can't let the guard rails out a little and let people start to experiment and test. I think that's a big one. But some data points for you. Companies that get stuck in legacy models, and that's probably a human-led model where there's no augmentation of humans going on, are going to be facing up to 40% higher operational costs by the end of next year. I think we see that as a business. I always say it's a bit of an ivory tower when you work for a company like ours, but we use data and AI every single day, en masse, and it means we have to hire fewer people, all those types of things. It just works if you get it right.
Competitors with agent-powered teams are going to be making decisions 50% faster, because human intervention as the guard rail and the direction-setter is a lot faster once the agents get going than keeping a human in the loop at every step. And then this is a really interesting one; I think it's an unseen challenge at the moment. There are traditional firms who aren't taking this on, and I know some of the big firms are not embracing it at all, especially in professional services. We're very fortunate to work with some that are taking it very seriously, but some of their competitors are not, and their employee engagement is just going to drop off massively. These stats reckon 35%; I think it could be even more, as people realise that the future of their role is to have these skills, to be able to use these technologies, to operate in a different way. We've got to be rewarding employees at a very local level by empowering them, and that, for me, is the big opportunity here. There's just no excuse for inaction at this point. Do what you can within the constraints you face, but push really hard for those constraints to be increasingly unlocked, because otherwise you're just going to fall behind from a competitive perspective.
Stephanie: To summarise, we see a three-step process, outside of the actual technical build, for creating that path to becoming an agent boss. As we've spoken through, it's definitely about launching a comprehensive literacy programme, with assessments of the organisation across all levels and a gap analysis. Again, this isn't about creating people who understand the technology inside and out or can build it; it's about understanding how these systems work, how they might change people's roles and responsibilities in the future, and how people can be empowered to use the technology. Next is redesigning the core operational processes. I would say, if you're really interested in this, pick three critical business processes you have at the moment, really think about how an agent could help with each, and start to map out that workflow, using the circular process we've been through. If you then want to build an MVP, put the performance metrics up front and get them agreed with your stakeholder, whoever is sponsoring your MVP project, so you're absolutely sure you can measure what comes out at the end and you've got that baseline of performance. And implement those five pillars of agent-ready operations to unlock the value. In terms of an MVP, you're not talking about hundreds of thousands of pounds anymore to get one of these up and running to test the viability of the solution. Cost isn't a blocker for a lot of people anymore, just because of the technology that's now available; some of you might even be building things through vibe coding. There really is no excuse not to give it a go before you go all in. And the last step is around creating those agent boss career pathways. What's the formal recognition for effective agent leadership?
Integrating agent management skills into the job descriptions, developing your internal comms processes, and building advancement opportunities that again reward AI leadership capabilities -- because that will be a key skill of the future.
Greg: 100%. And the call to action for people in this room, and I know it's a cliché, is that you have to be the change you want to see. If you're sat here thinking you want this to work in your business because you see the potential ROI, the success it could bring to you, your data office, your AI office, whatever part of the organisation you work in, then you've got to be leading the charge on creating the business case, proving the value, bringing people on the journey, and telling the right story. Be compelling. Don't assume it will tell its own story, because it doesn't make much sense to most people yet. You have to get your head around what the story is. Have example use cases like Steph's earlier that bring it to life for the type of business you work in, that put it in the context of where decisions will be made 50% faster and will be right more often than wrong, from a pricing perspective in that example. It's really about being the person who can learn enough and know enough to lead that journey and drive that transformation. That is everything from Steph and me. I think a couple of questions have come into the chat; I'll have a look at them. Oh no, that's the ROI webinar. Okay. We had some questions in advance, which came in via email when people signed up. Jemima or Sarah, have you got any of those questions we'd already answered? I'm hoping some of those people will be in the room, but if they're watching on a recording later, hopefully they'll still get their question answered.
Sarah: Greg, we got a question from Laura Fernside. She says: in policing we are getting our policies in place around data-driven technology, lots of guidance and frameworks to adhere to, but it's proving to be a really good way of engaging with the wider business. What would you say the key priority knowledge points are for senior leaders to know about AI in order to drive innovation safely?
Greg: Great question. I'm going to take the first bit and then hand over to Steph for the responsibility side. For me, we've still got to remember that senior leaders don't really care about data or AI. They're probably excited by the concept of AI, but what they really care about in the policing world is: how are we going to arrest more people, keep more people safe, what can it help us do? Clearly in the public sector, where we should be trying to be as efficient with people's money as possible, the question is what we can do through this type of technology that is cheaper to run, requires fewer people, all those types of things. Those are the stories we have to build, not just the glorious, sexy nature of the phrase AI, and getting those stories right is going to be a key element. Then there's how we temper that and make sure it's being done in the right way, because the worst thing that could happen to a public service like a police force is to get this wrong, and get it wrong publicly. Steph, what would you add? What do we need to communicate to leaders about responsibility and doing it the right way, whilst also achieving those value points?
Stephanie: I think it's about having a really good risk framework that runs through the whole life cycle of your AI development, right from data collection all the way through to model retirement. The first thing I would always communicate is bias in the data. What do you have within your data that is making it biased? You will never be able to get rid of all of that bias, no matter what any consultant comes in and tells you, but there are things you can do to mitigate bias within your data. And that is a choice for your leaders: what level of bias mitigation do they want to put in? That drives true accountability for the data you have and how you've manipulated it to offset the balance. The second is transparency, as I spoke about. If you want to drive trust, and again accountability, the people using the AI on a day-to-day basis, not necessarily the whole organisation, have to understand how it works at a really top-line level, so that they trust it and can tell when it's not producing the right information. The next is accountability. I see it all the time: the AI goes wrong, like the example in policing, or Amazon's hiring algorithm that favoured men over women because of a biased data set. But who's accountable? Who made the decision around that biased data set, around how the model was configured? There has to be someone accountable at each step of the process, not to chastise, but so you know who to go to to get it fixed and why each decision was made.
Then there's security, obviously. It's standard, so I won't go into loads of detail, but with what hackers can now do with AI capabilities, security is definitely a massive one, and you'll see a lot about quantum computing come up over the next couple of years to support with that. And data access: making sure you're only using data that has actually been agreed. What data can you use from your customers? What have you told them about how you're going to use their data for AI? Again, it's about building trust in your brand and in how you're using their information to inform your products. And then really understand the risk: where are you on that risk profile if you're building something internally versus putting it out there in the market, and what level of risk are you really willing to take?
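One concrete form of the bias check Stephanie mentions is comparing selection rates between groups, sometimes expressed as a disparate impact ratio (a common rule of thumb flags ratios below 0.8). A minimal sketch with invented toy data; in practice you would compute this from real decision logs, ideally with a fairness library rather than by hand:

```python
# Hypothetical illustration of one simple bias check: comparing selection
# rates between two groups. A ratio well below 1.0 flags that one group is
# selected far less often than the other. Data below is invented.


def selection_rate(decisions):
    """Fraction of positive decisions (1 = selected, 0 = not selected)."""
    return sum(decisions) / len(decisions)


def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)


# Toy hiring decisions for two groups
men = [1, 1, 1, 0, 1]    # 80% selected
women = [1, 0, 0, 0, 1]  # 40% selected
ratio = disparate_impact(men, women)  # 0.5 -- below the common 0.8 threshold
```

A check like this doesn't fix the bias, but it makes it visible and measurable, which is what lets a leader make the accountable choice about mitigation that Stephanie describes.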
Greg: Building on what you said about the accountability piece: a big topic of conversation for a lot of people is that we can't get anybody to be a data owner, and the idea of being an agent boss or an agent owner is probably even scarier. I think it's going to come down to those senior leaders who will take that accountability. Literacy is one thing; if they don't understand it they won't trust it, and if they won't trust it they won't let you use it. But it's going to take a lot of one-to-one time with senior leaders to really make sure they get it, communicated in a sensible way they can understand, so that they're willing to sign off on being accountable for it. Don't overlook the need for proper one-to-one engagement and coaching, because I think that's the only way they're going to be willing to be owners moving forward. I'd say that's a big one in achieving that ownership piece.
Stephanie: Yeah, and I would say we're actually getting a lot of requests from non-executive directors who are really keen to understand this space. If you think about what non-executive directors are there to do, manage the risk of the business and ensure it survives and delivers for its shareholders, they're actively having conversations with us about exactly what we've just spoken about. There will soon be a push downwards of these questions from board level, which is very, very positive. Don't be surprised if they start coming to you with all of these questions -- it will happen, because non-executive directors are now really starting to take an interest in what this means for them as a business.
Greg: Quite natural, right? Lots of really smart people with more time on their hands than the average person, because they're non-execs, and a load of responsibility; I'd be quite worried about this space too and want to get to know it. Yeah, that makes a lot of sense. Have we got any other quick questions? I know we're coming up to the hour. Anything else we want to ask before we wrap up?
Sarah/Jemima: Yeah, I think we have time for one more question. Someone asked: hi Greg and Steph, you shared the AI literacy curve and how teams and individuals can evolve through maturity, but how have you seen organisations benchmarking or measuring this?
Greg: This is always a really tough question from a people perspective. The first thing everyone needs to understand, and I saw a post on LinkedIn earlier about how you measure this that I fundamentally disagreed with (it was someone else who works in the space, and it's just incorrect), is that your big barrier here is the Dunning-Kruger effect. It's a theory from psychology: the less we know about a subject, the more we think we know about it. And that goes for everything around data and AI. Most people in the business, if you ask them, will think they're as data literate or AI literate as they need to be; some won't think they need to be at all, and that'll be enough for them, so they're as happy as Larry. Then you've got people who think that because they can now use ChatGPT or Copilot, they must be AI literate. I say this as somebody who sells enterprise-wide baseline assessments of people: we do do organisation-wide data literacy and AI literacy assessments, but for me that level of subjectivity makes the data quite difficult to trust. It's a biased data set by the very nature of Dunning-Kruger. You're probably better off doing more targeted interventions in key areas, where you can baseline the true competency of people and spend more time measuring objectively, rather than trying to go org-wide and subjectively test the organisation. There are lots of different ways, and I'm happy to have that conversation with whoever asked the question, because it's a really important one. But you could also spend a lot of money trying to work out something that will never really give you the trustworthy insight you wanted. Do feel free to ping me on LinkedIn and I'll give you more information about it. More than happy to do that.
Okay. We're at time, and we really appreciate you joining today. If you've got any further questions, ping them over to us and we'll try to answer them. Neither Steph nor I would mind you connecting with us on LinkedIn. We look forward to seeing you on the next webinar. My next webinar is actually tomorrow with Synure, where we're talking about AI again; if you want, you can join us there. Otherwise, feel free to join the next Data Literacy Academy webinar. Speak to you soon. Bye-bye.
Stephanie: Thanks very much.
Unlock the power of your data & AI
Speak with us to learn how you can embed org-wide data & AI literacy today.
