Enterprises are investing in Agentic AI. Few are preparing their people.
That’s the tension we tackled in our latest webinar: How to empower your people to become agent bosses, hosted by Greg Freeman (CEO, Data Literacy Academy) and Stephanie Gradwell, Managing Partner at Pendle.
We've seen how GenAI has changed people's ways of working. Now Agentic AI is the new kid on the block: AI that gives machines the autonomy to help humans work better by actively taking on workflows, decisions, and execution at scale.
But what often gets deprioritised in the conversation is how to prepare your workforce to lead, trust, and work alongside AI agents.
What is Agentic AI?
Agentic AI refers to systems that don’t just generate content or answer prompts. These are autonomous agents that set goals, learn from their environment, and make decisions with minimal human input. That completely changes ways of working, and it requires a new way of thinking to go alongside it.
Here are a few examples of what this can look like in practice. Price optimisation agents can monitor competitors and adjust pricing in real time. Fraud detection agents can monitor transactions, flag suspicious activity, and take action, all before a human ever gets involved.
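To make that pattern concrete, here is a minimal sketch of the loop a pricing agent might run: the human sets the strategy (a margin target), and the agent executes it against competitor data. Everything here is illustrative; the function names, the data feed, and the undercutting rule are assumptions for the sketch, not any vendor's API.

```python
def fetch_competitor_prices(sku):
    """Stand-in for a live market-data feed (hypothetical values)."""
    return {"rival_a": 19.99, "rival_b": 21.50}

def pricing_agent(sku, our_cost, target_margin=0.25):
    """Human sets the strategy (target margin, stay competitive);
    the agent executes it price-by-price."""
    floor = our_cost * (1 + target_margin)   # never price below the margin target
    cheapest_rival = min(fetch_competitor_prices(sku).values())
    proposed = cheapest_rival - 0.01         # undercut the cheapest rival slightly
    return round(max(floor, proposed), 2)    # but always respect the margin floor

print(pricing_agent("SKU-123", our_cost=12.00))  # undercuts the 19.99 rival
```

In a real deployment this loop would run continuously across thousands of SKUs, with the human reviewing outcomes (margin achieved, brand impact) rather than setting each price, which is exactly the delegation shift the webinar describes.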
In short: they’re not your assistant anymore. They’re your newest hire.
The Agent boss challenge
Here’s the problem: while the tech is advancing fast, most people aren’t ready to work this way.
In a live poll during the webinar, over half the audience believed that fewer than 20% of their colleagues would even know what Agentic AI is. And they’re right.
The majority of employees, even in data-driven businesses, aren’t equipped to lead or collaborate with AI agents. They’re stuck in phase one: using AI as a souped-up writing assistant.
The leap to phase three, where humans set the strategy and agents execute it, requires a radical rethink of roles, workflows, and skills.
This leads us into the biggest mistake many businesses are making. They're spending millions on AI platforms… and pennies on people.
Greg shared one example: a company spending £100m on Google tech, but just £50k on training their workforce to use it. And then leadership will wonder why ROI on AI is lagging.
Agentic AI doesn’t work without organisational readiness. That means:
- New operating models
- AI-augmented workflow design
- Clear governance and risk frameworks
- And most importantly, empowered employees who know how to lead, trust, and question AI
The SPACE model: Skills, Purpose, Autonomy, Community, Engagement
Stephanie introduced the SPACE model, a framework for empowering employees to become “agent bosses.” It covers:
- Skills development: Not learning to code, but understanding how agents work and where to delegate.
- Purpose alignment: Clarity on strategic goals is essential when employees move from doing to directing.
- Autonomy: Employees must feel trusted to lead agents and make decisions.
- Community: People learn from people. Build Communities of Practice around AI.
- Engagement: Rethink recognition and reward. Being an effective agent leader must count.
Why your people need common understanding and language
Data Literacy Academy’s AI Literacy Curve shows that even a modest increase in data and AI literacy significantly boosts confidence and adoption.
You’re not trying to turn your workforce into data scientists. But you do need them to:
- Understand what Agentic AI is (and isn’t)
- Know how to use it responsibly
- Trust it enough to delegate real work
- Feel confident experimenting with it
And that starts with language. But it also requires shared mental models, curiosity, confidence to question and challenge systems, and training rooted in real business use cases, not theory.
And the growing gap between leaders and laggards is staggering.
Companies that get stuck in legacy models could face up to 40% higher operational costs within a year.
Competitors with AI-augmented teams are already making decisions 50% faster.
And as GenAI becomes ubiquitous, employee engagement will suffer in organisations that fail to empower their people, with drops of up to 35%, according to Stephanie’s analysis.
So, where should you start?
Greg and Stephanie outlined a clear 3-step strategy:
- Launch a targeted AI literacy programme: Focus on practical use cases, governance, and common language.
- Redesign core workflows: Don’t layer Agentic AI on top. Rethink from the ground up.
- Create agent boss career pathways: Recognise and reward leadership in this new hybrid world.
And importantly: build in the metrics from day one. If you can’t measure the ROI, you’ll struggle to scale the solution or defend its impact.
A final note: Be the storyteller-in-chief
The call to action was clear: If you want this to work, you have to lead the story. Senior leaders won’t move without a compelling narrative about how AI helps the business win, not just how it works. And while a lot of executives are jumping on to the AI hype train, it's up to data and AI leaders to make that impact tangible.
So be the one who tells that story. Who finds the right use cases and brings the people with you. Because when done right, Agentic AI can be the next wave of technology that unlocks incredible productivity, value and innovation. But it needs to be done with care, forethought and investment in your people to make the most of it.
Greg Freeman: [0:00 - 0:42] Let's do this. So, the topic for today's webinar is a slightly different one, in the sense that very often at Data Literacy Academy we talk entirely about data literacy or AI literacy. AI literacy definitely plays its part in this conversation, but today I am absolutely delighted to be joined by Steph, who I will let introduce herself in a second, on how to empower your people to become agent bosses in the age of Agentic AI, which is probably a more advanced topic than we normally cover. So I'm quite excited for this one.
For anyone who doesn't know me, my name is Greg Freeman. I'm the CEO and founder of Data Literacy Academy. I've been very close to most of our large programmes and projects over the last few years. I typically take the lead on these webinars, but today it's really nice to be joined by Steph and share the stage again. So I will let Steph introduce herself.
Stephanie Gradwell: Hi everyone. I'm Steph, the managing partner at Pendle. Pendle is a boutique data and AI consultancy, and we focus on the development, deployment and assurance of AI solutions in a responsible way. This topic is actually really close to my heart, and something we've been doing a lot of work on recently. So hopefully you'll take away some key nuggets on how, if you did want to get started on Agentic solutions, you do that from more of an operational and people perspective rather than deep in the tech. Thank you very much.
Greg: And it's a great topic. I think it's incredibly front of mind for a lot of people; Agentic is probably the new kid on the mainstream block right now. I was looking the other day at the Gartner hype cycle around AI, and, not that I'm always the biggest Gartner fan, but they have their place in the community, and it's super interesting to see that generative AI is already on its way down into the trough of disillusionment, and the topics now moving towards the peak are agentic AI, responsible AI, and data readiness for AI. So it definitely feels timely in that sense, and I think there are going to be huge opportunities for large organisations to really make the most of it. So, first of all, as I said, we'll get pretty quickly into the survey. What percentage of your business colleagues do you believe would know what Agentic AI is? And by business colleagues, we largely want you to focus on the people who wouldn't consider themselves data professionals, though of course the data professional population is there too. Nought to 20%, 20 to 40%, 40 to 60%, 60 to 80, or 80 to 100?

And I found the answer we got when we presented this on stage really interesting. So let's see what the answers are. There we go. Awesome. Okay, so this is a more realistic answer; I think that's good. Over half of the people in the room feel that nought to 20% would know what Agentic AI is. I think that's very fair. It's interesting that one person feels that everyone, or nearly everyone, in their business would know what Agentic AI is. As I said when we presented this on stage, I hope that's because you're part of a data consultancy, or potentially you work with Steph. That might be the reason. But ultimately, if you're in a large organisation, the answers towards the top, the 56% of people who said nought to 20%, or the 17% who said under 60%, are really the more realistic option. I believe if you're in a large enterprise, the people who truly know what Agentic AI is will be in the nought to 20%. From all the conversations we have, from all the learners and employees of large organisations that we meet, things like Agentic AI are just not something they've really considered, and can very often blow their mind. So I would suggest if you're in a larger enterprise, it'll be in that nought to 20% bracket, but I really appreciate everyone feeding in; it's good to get a realistic view.
So, with that in mind, what I'll do is hand over to Steph to get us into the details of why this is so important as businesses want to start adopting.
Stephanie: You've obviously all joined this webinar for a reason, so if you don't know what Agentic AI is, I'll just give you a quick definition. It's an element of artificial intelligence that can autonomously make decisions. How it's different is that it actually sets goals and performs tasks with limited human supervision; there should always be a human in the loop in some way. But in addition to that, it continuously learns and adapts to its environment, so it's not a static model or AI solution that you build once. As Greg said, this is the new hot kid on the block, and enterprises are significantly investing in this space to improve efficiency. What they're doing with it is executing complex workflows, making data-driven decisions, and scaling really quickly without adding a significant amount of additional headcount, which used to be the formula: add more headcount, get more growth. That's being flipped on its head, and we'll talk about that a little bit. So really, the basis of today is that the newest team member you get into your business in the future won't be an additional human; it will potentially be an AI agent. And so, as employees, how do you work with that AI agent and become empowered to really understand and get the most out of it, without being scared, fearful, and feeling like it's taking over everything you want to do? It's definitely not that. It takes away pain points so you can focus more on the strategic, value-add work.
So just to give you an overview of the capability and how it glide-paths up: I think probably a lot of you on the call, and probably everyone in the country, are now at phase one, where it's a human and an AI assistant. That's where every employee is using an AI-assisted productivity tool. It might be an AI writing assistant such as ChatGPT or Copilot, or something that organises your emails for you or transcribes audio; something where you have to activate it and it does quite a simple task.

Then you move into phase two, and we're seeing some companies get there, but again it's not the general norm at the moment. This is where an AI agent actually works alongside a colleague, directed to assist in real time. An example would be a customer services assistant: if you are ringing up to ask "where's my order?" or "I'm really not happy about this product you've sent me", an AI agent would essentially be analysing the conversation, suggesting optimal replies that the customer service agent can go back to the customer with, and automatically pulling up the relevant information for that customer, such as their purchase history. What it means is that the human can really focus on the human interaction, while a lot of the legwork they would have done to understand why the customer might be calling is done for them. We're seeing some of this come into the market. If you've ever used the Comet technology that's come through Perplexity, or ChatGPT-5, where you can put in a prompt and essentially it'll go away and do your shopping order and things for you, that's essentially the same thing. It's acting autonomously, but it's not taking over everything.
And then lastly, a true agentic solution is where humans set the strategy but the agents execute the business processes. An example would be if you were working as an e-commerce director in a business and you set a pricing strategy. That's within your role remit, and you said: I want a 25% profit margin but still to remain competitive with our top three rivals in the market. What the AI agent would do is look at the pricing for the thousands of products that you list, but also what your competitors list, and the market as a whole. The agent would then continuously monitor the competition, how their prices are moving and what demand they are seeing, and then adjust prices in real time on the company's website to ensure it met those strategic goals. The key difference here is that the human is only the director, who essentially sets the strategy and then maybe at the end reviews performance to understand whether there's a broader consequence, such as brand reputation, of doing it like this, rather than actively going in, looking at the market and changing the pricing. So it's a completely different view of how operational process flows work. But it's not all doom and gloom; that's a real, true competitive advantage if you're over in phase three.

[11:40 - 12:22] The reality, though, is that many organisations are just stuck in phase one. They are using it as sophisticated productivity tools, but they haven't yet realised the potential of using AI agents and agentic solutions as autonomous digital colleagues.
Greg: It's super interesting, isn't it? Because that example you've just used there is such a shift in terms of people's acceptance of trust in a machine. The idea that people who've always done their job one way now direct, in essence draw up the playbook for the agents and then let the agents make the plays, and all those types of things.
[12:19 - 12:58] And I think it fits in really nicely with this view of the iceberg. Something I do think people are aware of is that a lot of the important stuff that happens within a business, especially from a data perspective and the way the data office and the AI office engage with the business, sits below the waterline; the tip of the iceberg is just the stuff that's seen on the front line. But one thing I think is going to be really difficult for people, as we move through this normalisation of that type of experience for employees, especially if you're running a team or a department and now you've got a team of agents basically doing your pricing for you instead of pricing analysts, is that at the moment everything we see is that there's so much investment going into buying the software, planning for buying the software, embedding the software, implementing the software from a technology perspective.

[13:33 - 14:15] But the bit we need to get a lot better at, and I know I'm biased, and you're probably quite biased too because operational design is a passion of yours, is that for what you've just described, we're going to have to have such an enhanced level of data and AI literacy, and a new way of operating within the business, a pure operating model shift, so that we can actually trust the AI agents to work within our current workflows and ways of working. I had a really interesting conversation not long ago with a CEO who said, we've really started to invest in this area. Not Agentic, but bringing the company on the journey. I said, wonderful, I've been waiting to hear that from you for a while. What are we talking about here? And he said, well, we spend a lot of money with Google, 100 million quid a year. It's a big old budget. And I think we've got to start to get the people piece to catch up. And I said, again, love to hear that, I agree. One of the things I often say on stage, when people let me talk, is that if you look at your budgets and you've spent 10x on the technology what you have on your people, it might be the reason this isn't working for you and you're struggling to get it embedded.

[14:48 - 15:27] And his answer to the £100 million spend with Google was a £50k budget for the people side. And I was just like, well, that's where we're going wrong fundamentally. There's going to be a massive obsession over AI tools, but the fundamental transformation surrounding them is really where the focus needs to be. And again, if you're sat watching this webinar, it's probably because you're more interested in the people and culture side, and that's why you follow our business. But it's that message we've got to share: for the type of amazing use case Steph just explained, which with time is going to become the new reality for a lot of large organisations, we have to do the below-the-iceberg stuff, otherwise it just isn't going to work. And the blind spot, that everybody's just going to want it and just going to trust it, I think is a real problem for people.
Stephanie: And I would just add, Greg, that if you are one of those companies that have invested in AI but aren't seeing a return yet, I would really look at the operating model and literacy, because the tech is brilliant; it's not normally the tech that's the issue. It's the broader organism around it that means you're not driving the benefit out the back of it. So I'd just say, if you are in that position, this call is absolutely brilliant for it.
Greg: 100%. I mean, we often talk about the ADKAR model of change; it's probably the best-known or most common change model these days. And that reinforcement bit, the final R in the ADKAR acronym, is really everything else you have to think about to actually enable the change to happen. Of course, technology access is one of those, but so is how you frame this with HR and the people team and all those types of things. This is going to be an environment change for the whole organisation. It isn't just an AI team silo; it's a whole business transformation. And to come back to the point we made there, I think there are just too many people who expect this area in here, and I don't know if you can actually see my arrow on my screen, to be fair, so I'll use it as a pointer.

[17:22 - 18:08] Everybody wants the visionary leaders, the people who are highly intelligent, highly empowered, and want to be working with this type of technology on a daily basis. But ultimately, whether you like it or not, and all the research suggests this, 80% of your organisation is more likely to be closer to the bottom left than to the top right. Which means you're going to end up in a situation where you're going technology-first, but that ignores the empowerment bit, the bit that brings people on the journey to want to use that intelligence or unlock those constraints. So I think what we've got to do is obviously identify those enthusiastic supporters; they're really, really important. But we've also got to be able to unlock the power of it with the wider body of people.

[18:44 - 19:26] So if you are sat here thinking everybody's going to want to be on that journey, as I've said a couple of times now, you are probably blinded by the fact that data and AI teams mostly speak to the best of the business. You may feel that you've got lots of enthusiastic supporters, and therefore that everybody is one. Actually, the people who speak to you every day as a data and AI team, even if you still think they're not very good, or they don't know enough, or they're frustrating and challenging to work with because they give you poor inputs, or whatever it might be, are the best of the bunch. It's really the rest of the organisation that you need to worry about. So, in terms of setting your managers up to deal with this new world of hitting use cases like Steph described, you're going to have to do a lot more work with them to make that happen.
Stephanie: Yeah. So, this essentially talks about how you empower people to become an agent boss. It's going to turn into a very different hierarchical structure. Instead of the multiple layers of hierarchy organisations currently have, the structure is probably going to get a lot flatter. What that means is greater autonomy for people to go and set their own strategy and run an area of the business, because a lot more people will be empowered to make those decisions. But that can be quite scary: having that level of responsibility, and feeling that if you're not doing the doing, you're not doing a good job. So how you are rewarded will shift.

[20:06 - 20:52] And this model, the SPACE model, is really well researched. It's not something specific to the data environment, because it's just about how you empower and engage colleagues. It identifies five really critical dimensions of empowering colleagues within the workplace. The first, which we've touched on, is obviously skills development. You've really got to expand your skills base to understand how agents are built. You don't need to build them, but you need to understand how they work and how they're built. Then delegation strategies: how are you going to delegate work across those agents, what do you want to retain as human, and what do you want to hand over to an agent? And AI workflow design, which is completely different to standard set process management, and we'll go through that at a really top-line level later on. Then purpose alignment: it's really crucial that as employees transition from task execution to strategic direction-setting, they're really clear on what their purpose is in the organisation, aligned back to the overall business strategy.

[21:32 - 22:16] Autonomy takes on a new meaning when employees manage both human team members and AI agents. And to be honest, this is one area where it hasn't been done before. This is all new, so people are learning as they go. That level of autonomy will essentially be set by the risk level and risk preference within your business: whether you want everything to be run by AI agents. If you look at the likes of Duolingo and Shopify, where it's actually got a bit out of control and they're not seeing the results they want, they're bringing more people back. It's a real balance around the risk profile your organisation wants to take. Then community building, which is going to encompass both human relationships and human-AI collaboration partners. And then engagement mechanisms: they need to evolve to recognise and reward effective agent leadership. So again, if there's anyone from HR on this call, you really need to be thinking about how you are rewarding colleagues for actually going on this journey with you and being an effective agent leader.
Greg: Pressed the wrong button. So, for everyone's context, the data and AI literacy curve is something Data Literacy Academy have created; it's a people version of the data maturity curve that we all know and sometimes love. Not guaranteed. For us, this is really just trying to represent the fact that it is possible for us to win this battle, and it is possible for us to win this race as an industry. But I think we are racing so fast, and we're probably not, according to the data, bringing people with us.

[23:48 - 24:32] So, based on that example of understanding enough to trust it, to then use it or allow it to run, from the e-commerce pricing example earlier: there is lots of evidence to suggest that if a person is data or AI literate, they're over 50% more likely to feel empowered by or to trust this type of work. That in itself is a really important facet of getting this to land and getting it used in businesses. But at the same time, over 50% of people say they've received little to no AI or data training at any time in their entire career. So if we're aiming to get more people to that level of empowerment, we have to do our bit to get them educated and help them feel comfortable with it.

[24:30 - 25:08] And it's not going to take all the education in the world, in my opinion. We're about 12 months into having the AI part of this curve now; we announced it in September last year and built it into the data literacy curve we already had. And we're only really talking about needing to get people to this level. I think what we're seeing is that the adoption and prevalence of generative AI has meant that all you've got to do to use generative AI well, in the right way, in a responsible way (and I think your phrasing at Pendle around responsible AI, and the focus there, is really sensible), is understand why it's going to be valuable, understand how to talk about it, and feel comfortable with how to use it. All of that can come before you even get into what would traditionally have been the more complex data stuff. So where people have trained everybody on the BI tool before, that's probably not even necessary to get generative AI and Agentic AI adopted.

[25:41 - 26:18] I think we have to give people a level of common language, and that governance piece. What we consider democratised soft skills are skills that are specific to data but aren't hard skills: governance, quality, management, that type of thing. You are going to need your people to have that level of understanding, education and trust if they're going to be the people who, in a lot of cases, even use generative AI, but definitely if they're going to engage with Agentic AI and feel comfortable managing, working with and trusting it. So we're not necessarily saying you have to be up here; you're going to have some really, really smart people up here building your AI agents. But in order to get to this middle bracket of maturity with your people, you are going to have to bring them somewhat on the journey, through common language, a common understanding of the value, and a common understanding of why it's safe to use within their roles. Make sure they're able to use it responsibly, because otherwise they'll feel like they're going to make some mistake that will cost the business a load of money or cost them their jobs. So for us, it's just about saying that we've got to be investing at least in this area of data and AI literacy, to give people the empowerment to be successful with it.
And then, to come back to Gartner: AI literacy is far more than proficiency in the technical tools. It includes even knowing what types of AI exist. I think one of the major problems we've got as a society right now (that's a grand sweeping statement that I maybe regret; there are probably bigger problems) is that if we're going to get AI to be truly valuable, we have to stop people seeing AI as just generative AI. People have to know more than that.

[27:31 - 28:09] The big, hefty problems, and Steph, to your point earlier, those phase three problems, whether they're in Agentic AI or machine learning or whatever it might be, are the things that are really going to have material value for the business. It's not going to be somebody being able to write an email faster or in the right tone of voice. So we've got to help people be aware of what types exist, and we've got to be working to identify strategic use cases, not just operational use cases. That's a great point from Gartner, because that's ultimately where the numbers come from, the metrics that pay the bills.

[28:06 - 28:45] Using AI safely and responsibly, and developing the technical skills, is definitely part of it too. So I think we've got to get our heads around that, and you've got some options. Of course, you've got your formal learning paths. We don't deliver apprenticeships as a company, but I think for certain groups, when it comes to AI and Agentic AI, the apprenticeship scheme can be a really good way of learning. I'd say that's probably more for your data professionals, who you want to go higher up the curve I just showed, or for people who are embedded at a local level, want to become data professionals, and want that really deep learning. So that formal learning, whether it's an apprenticeship or more the type of course we deliver, is one way of doing it.

[29:16 - 29:52] Then there's social learning. Steph mentioned, in the SPACE model, that community element and how we bring the community together around the journey of Agentic AI. If you haven't got communities of practice in place right now as part of your data and AI literacy programme, you're missing a massive trick. It's how you're going to reinforce the engagement, reinforce the learning, and apply it more accurately to your space and your environment. And then you've got your on-the-job, experiential learning. Whether it's done with a business like Pendle, where you can bring a partner like Steph in to help you with those experiments and guide you through your first sets of them, or with teams internally who are able to do that, if you are sat in that data and AI team you need to be out there in the business, helping people create those strategic pilots that are going to help them understand this space.

[29:51 - 30:25] Because it's going to be when it's directly applied to their world that they really get it, and they really start to understand the value and are able to evidence the value back to the people who hold the purse strings. So those are your three options really: formal learning, social learning, and on-the-job experiential learning. I think it's really for all the people in this room to set up programmes that impact all three. That's the only way you're really going to get a mass majority of people to engage with this space.
So I'm going to switch topic a little bit. You've now got the literacy foundations, but for true transformation you have to fundamentally reimagine how the work gets done. Historically, you've probably moved from left to right in terms of a process flow: there's a start and an end. With agentic solutions and hybrid intelligence, it's circular, so you essentially start to radically reimagine how things get done. The example here shows how it could work if you were going to build an agentic solution with lots of agents. The one I like to use to really bring it to life for people is credit card fraud; apologies if you've heard this before, but I think it's a really simple, easy way to understand how the solution works and where the human comes into the loop. A bank will set the strategy. Whoever is in charge of credit card fraud will say: I'm only willing to accept a tolerance level of credit card fraud under £5,000 for any account, because it's not worth us chasing below that; it would cost us more to recover the money than to just take the loss. Then, in the credit card fraud detection system, you would have a monitoring agent. That would essentially be looking at all your spend information: it might be the location you're in, which websites you're visiting, as much data as it could possibly gather about you, in line with GDPR protections of course. It would then pass that to an analyst agent, which asks: you've spent some money on your card, how likely is it that this is fraud? It runs through a number of machine learning models to come up with a probability score of it being fraud, based on what the agent has learned previously. It would then move to execution. If there's a low probability that it's fraud, it will let the transaction go through.
If it's a medium risk, it might send a text message to your phone and ask you to verify: is this you? Are you actually making this transaction? You can press yes or no. And if there's a really high risk that it's not you, it will block the payment and escalate to someone in the bank, an actual call handler, who will ask you to get in touch; you would then ring the bank to either release the payment or confirm whether someone is trying to access your card. And then the optimiser is almost like the reinforcement learning element: it takes the learnings from whatever happened and feeds them back into the circular process. So again, this doesn't end. It's constantly re-evaluating, re-running, changing those materiality rules within the analyst agent. If you designed a solution like this today in the left-to-right way, it just wouldn't work. It's got to be this human strategy in the centre, setting the guardrails and the overarching strategy, with really sensible steps around it: how do we monitor and capture, how do we analyse, how do we execute, and how do we optimise? This is actually the framework we use to build the majority of the AI solutions, or agentic solutions, for clients, because they pretty much follow the same thing. There are some nuances, but it's always essentially information gathering and monitoring, analysis, execution, and optimising, with the overall strategy. So the key message on this is: don't think about adopting agentic AI as just fixing an element of your current process.
You have to reimagine the whole process and how you want it to work, which again can be quite scary. So, as Greg said, do an MVP, test and iterate. It might never get to the level of accuracy you want right now; the technology is always going to improve, so it might be one you just put on hold for now. But don't plonk it on top of an existing process; it won't work.
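The circular workflow described above, with the human-set strategy in the centre and monitoring, analyst, execution, and optimiser agents around it, can be sketched in code. This is a minimal illustration: the thresholds, toy scoring rules, and function names are invented for the example, and a real fraud system would use trained models rather than these stub rules.

```python
# Toy sketch of the circular monitor -> analyse -> execute -> optimise loop.
# All thresholds and scores are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Strategy:
    """Human-owned guardrails, e.g. the fraud tolerance the bank sets."""
    fraud_tolerance_gbp: float = 5000.0
    medium_risk: float = 0.5   # above this, ask the customer to verify
    high_risk: float = 0.9     # above this, block and escalate to a human

def monitor(transaction: dict) -> dict:
    # Monitoring agent: gather spend, location, etc. (stubbed here)
    return transaction

def analyse(signals: dict, strategy: Strategy) -> float:
    # Analyst agent: in reality one or more ML models; here a toy score
    score = 0.0
    if signals["amount_gbp"] > strategy.fraud_tolerance_gbp:
        score += 0.6
    if signals.get("unusual_location"):
        score += 0.35
    return min(score, 1.0)

def execute(score: float, strategy: Strategy) -> str:
    # Execution agent: act according to the human-set risk bands
    if score >= strategy.high_risk:
        return "block_and_escalate"   # a human call handler takes over
    if score >= strategy.medium_risk:
        return "verify_by_text"       # customer confirms yes/no
    return "approve"

def optimise(score: float, outcome: str, strategy: Strategy) -> None:
    # Optimiser agent: feed outcomes back to tune the rules over time
    pass  # reinforcement / feedback logic would live here

strategy = Strategy()
tx = {"amount_gbp": 6200.0, "unusual_location": True}
score = analyse(monitor(tx), strategy)
action = execute(score, strategy)
optimise(score, action, strategy)  # the loop never ends; it re-evaluates
print(action)  # -> block_and_escalate
```

The point of the sketch is the shape, not the rules: the human only sets the `Strategy`, and the four agents cycle around it.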
Nice.
And then if we move on to the five pillars of how you get ready for agent-ready operations. The first is codifying the knowledge of your workers. What I mean by this is: what are the processes they follow, but more importantly, what is the real tacit knowledge that only they know? The small nuances in how things operate, or how the market works, are so important to getting the output correct in these agentic solutions.
The next thing, of course, is systems integration and architecture. Can the agents access multiple systems seamlessly? Do they have robust APIs that work? Do they have the right data quality standards? That's the next, more technical, element you'd look for. We just touched on it before, but what are the hybrid workflow designs you need? How much human interaction do you want versus agent execution? What are the handoff protocols between the strategist and the agent operator? What are your feedback loops, and how do they work for continuous process optimisation? And then, everybody loves governance, but what is your governance process around all of this? You're constantly monitoring these AI agents not just for performance, model drift, and things like that, but to check they're actually doing what you want them to do, all of the time. Then there's the risk management framework, stepping further into that governance space: what are your failsafe triggers and human oversight mechanisms?
What are your interpretability standards for agent decision-making? How transparent do you want the agents to be? We're obviously a responsible AI provider, so everything we do is transparent, and that enables true accountability for the individual, or the owner of the agentic AI solution, because they can really see and understand how it works. If you can see and understand something, that's when you can build trust and take accountability for it. And then the last thing, but definitely not least, is performance measurement evolution. Like anything in business, if it doesn't give a return on investment, whether that's profit or, if you're in the charity sector, social good, it's highly unlikely you'll be able to invest in future solutions. So what are your performance metrics going to be?
I would set those up right from the very start. What does good look like in your process today, and what do you want good to look like once you've implemented your agentic solutions? You then need to track how that productivity and the process are working. Is it actually taking us longer to deliver than it would have previously?
How productive are the teams being? That way you can really understand the improvements the agents are delivering, and again, monitor the quality and outcomes of the human-agent collaboration. That's really important for building trust and empowerment. And once you've got those communities, once someone's done it and they can see the benefits, understand the benefits, and they've got someone in their own organisation talking about it, the hype starts and the rest follow. So it's really important you've got that KPI management in place up front.

I think what's exciting about this is that pillar five, for me, is the bit the data industry has always been really bad at, ironically, given what the data industry does for a living: how do we set up the measurement and the process to prove the value? This should be much more measurable, much more evidenceable, and therefore it would be absolutely remiss if people were building these, starting their pilots, without actually having a plan for the ROI. Not to plug another webinar, but we did one, and you can get it on our website, on the ROI of data literacy, although what it really came down to was the ROI of data and building data products. I think that is going to be the thing that makes your experiments successful or not. As a commercial person, I really like the Pendle lens of "let us work with you to make sure we've got metrics in place, because this isn't going to work for either of us if it doesn't". That knowledge of metrics and measurement is going to be the differentiator, I'd say. So yeah, I really like that.
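The "metrics up front" point can be sketched very simply: capture a baseline before the pilot, measure the same metrics afterwards, and report the delta. The metric names and figures below are hypothetical, purely to show the shape of the comparison.

```python
# Hypothetical illustration of baselining before an agentic pilot and
# comparing afterwards. Metric names and numbers are invented examples.

def improvement(baseline: float, current: float) -> float:
    """Percentage change versus the baseline (positive = reduction)."""
    return round((baseline - current) / baseline * 100, 1)

# Captured before the pilot: "what does good look like today?"
baseline = {"avg_handling_minutes": 42.0, "cost_per_case_gbp": 18.0}
# The same metrics, measured once the agents have been running
after = {"avg_handling_minutes": 30.0, "cost_per_case_gbp": 12.6}

for metric in baseline:
    print(metric, improvement(baseline[metric], after[metric]), "% better")
```

Agreeing on exactly these numbers with the sponsoring stakeholder before the MVP starts is what makes the result evidenceable later.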
Yeah, definitely. Then for me, and I'm very conscious that I said this would take 25 minutes and I'm an outright liar, because without the panic of being in front of 300 people we're moving much slower than last time we did this. But I think there's a real misconception in data, and in business generally, that there's no cost of inaction. Well, actually, here there is a cost of inaction: the competitive advantage is absolutely massive if we get this right, and therefore doing nothing about it, not starting to pilot things, not starting to bring people on the journey, not starting to help the business through this lens, is almost unacceptable from a business perspective.

I've been doing a lot of looking and thinking at the moment about what I'd consider the main strategic decision around governance: are we taking an offensive or a defensive approach? The foundations have to be there, which means that if you are the most governance-defensive company in the world, this is the time to stop being that, a little bit. Absolutely do it in a responsible way, but you're not going to be able to innovate in these spaces if you can't let the guardrails out a little and let people start to experiment and test. So I think that's a big one.

But some data points for you. Companies that get stuck in legacy models, and that's probably a human-led model where there's no augmentation of humans going on, are going to be facing up to 40% higher operational costs by the end of next year. I think we see that as a business. I always say it's a bit of an ivory tower when you work for a company like ours, but we use data and AI every single day, en masse, and it means we have to hire fewer people. It just does work if you get it right. Competitors with agent-powered teams are going to be making decisions 50% faster, because human intervention as the guardrail and the direction is a lot faster once the agents get going than if you have to keep having a human in the loop.

And then this is a really interesting one, an unseen challenge at the moment. There are traditional firms who aren't taking this on, and I know some of the big firms are not embracing this at all, especially in professional services. We're very fortunate to work with some that are taking this very seriously, but some of their competitors are not, and employee engagement there is just going to drop off massively. These stats reckon 35%; I think it could be even more, as people realise that the future of their role is to have these skills, to be able to use these technologies, to operate in a different way. We've got to be rewarding employees at a very local level by empowering them, and that, for me, is the big opportunity here. So there's just no excuse for inaction at this point. Do what you can within the constraints you face, but push really hard for those constraints to be increasingly unlocked, because otherwise you're just going to fall behind from a competitive perspective.
So really, just to summarise, we see it as a three-step process, outside of the actual technical build, in terms of how you create that path to become an agent boss. As we've spoken through, the first is definitely to launch a comprehensive literacy programme, with assessments of the organisation across all levels and a gap analysis. Again, this isn't to create people who understand the technology inside out and can build it; it's about understanding how these agents work, how this potentially changes their roles and responsibilities in the future, and how they can become empowered to use the technology.
Next is obviously redesigning the core operational processes. I would say, if you're really interested in this, look at three critical business processes you have at the moment, really think about how an agent could help with each one, and start to map out that workflow, looking at the circular process we saw before. Then, if you want to build an MVP, put those performance metrics up front and get them agreed with your stakeholder, whoever is sponsoring your MVP project, so you're absolutely sure you can measure what comes out at the end and you've got that baseline of performance. And implement those five pillars of agent-ready operations to unlock the value. In terms of an MVP, you're not talking about hundreds of thousands of pounds anymore to get one of these up and running to test the viability of the solution. Cost really isn't the blocker for a lot of people anymore, just because of the technology that's now available; some of you might even be building things through vibe coding. So there really is no excuse not to give it a go before you go all in.

And then the last bit is around creating those agent boss career pathways. What's the formal recognition for effective agent leadership? Integrating agent management skills into job descriptions, developing your internal comms processes, and building advancement opportunities that reward AI leadership capabilities, because that will be a key skill of the future.
100%.
And then the call to action for people in this room: it's so cliché, but you have to be the change you want to see. If you're sat here thinking you want this to work in your business, because you see the potential ROI and the success it could bring to you, your data office, your AI office, whatever part of the organisation you work in, you've got to be leading the charge: creating the business case, proving the value, bringing people on the journey, telling the right story. Be compelling. Don't just assume it's going to tell its own story, because it doesn't make much sense to most people. You have to get your head around what the story is. Have those example use cases, like Steph did earlier, that bring it to life for the type of business you work in, that put it in the context of where decisions will be made 50% faster, or will be right more often than wrong from a pricing perspective, in that example. It's really about being the person who can learn enough and know enough to host that journey and drive that transformation.

So that is everything from Steph and me. I think there's a couple of questions that have come in to the chat, and I'll have a look at them. Oh no, that's the ROI webinar. Okay, we had some questions in advance, I think, that came in via email when people signed up. So, Jamaima or Sarah, have you got any of those questions that we'd already answered?
I'm hoping some of those people will be in the room, but if they're not and they're watching a recording later, then hopefully they'll still get their question answered.

So, Greg, we got a question from Laura, Laura Fernside. She says: in policing, we are getting our policies in place around data-driven technology, with lots of guidance and frameworks to adhere to, but it's proving to be a really good way of engaging with the wider business. What would you say the key priority knowledge points are for senior leaders to know about AI in order to drive innovation safely?
Great question. I'm going to take the first bit of that and then hand over to Steph for the responsibility side. For me, we've still got to remember that senior leaders don't really care about data or AI. They're probably excited by the concept of AI, but what they really care about, in the policing world, is how we're going to arrest more people and keep more people safe. What can it help us do? Clearly, in the public sector, because we should be trying to be as efficient with people's money as possible, what can we do through this type of technology that is cheaper to run and requires fewer people? Those are the stories we have to build, not just the glorious, sexy nature of the phrase "AI". Getting those stories right is going to be a key element, along with how we temper it and make sure it's being done in the right way, because the worst thing that could happen to a public service like a police force is that they got this wrong and it went publicly wrong. Steph, where can you help from here? What do we need to be communicating to leaders about responsibility and doing this the right way, whilst also achieving those value points?
Yeah. So I think it's about having a really good risk framework. When you're thinking about all these things, it's through the whole lifecycle of your AI development, right from data collection all the way through to model retirement. The first thing I would always communicate is bias in the data. What do you have within your data that is making it biased? You will never be able to get rid of all that bias, no matter what any consultant who comes in tells you; you can't always get rid of it. But there are things you can do to mitigate bias within your data, and that is a choice for your leaders: what level of bias mitigation do they want to put in? That really drives true accountability for the data you have and how you've manipulated that data to offset the balance.

The second is what I said earlier about transparency. If you want to drive trust, and again accountability, in the AI, the people using it on a day-to-day basis, not the whole organisation, have to understand how it works at a really top-line level, so that they really trust it and they can tell if it's not producing the right information.

The next, and I've touched on it loads, is accountability. I see it all the time that the AI goes wrong, like the example in the police, or at Amazon, where a hiring algorithm was just hiring men over women because of a biased data set. But who's accountable? Who made the decision to use that biased data set, and about how the model was configured? There has to be someone accountable at each step of that process; not to chastise them, but so you know who to go to to get it fixed, and why that decision was made.

Security, obviously, is standard, so I'm not going to go into it in depth, but with AI being what it is, and what hackers can now do with AI capabilities, security is a massive one, and you'll see a lot about quantum computing over the next couple of years to support with that.

Then data access: making sure you're only using data that has actually been agreed. What data can you use from your customers? What have you said to them about how you're going to use their data for AI? Again, it's building trust in your brand, building trust in how you're using their information to inform your products.

And then really understanding the risk: where are you on that risk profile? If you're building something internally, what's the level of risk versus putting it out there in the market, and what level of risk are you really willing to take?
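The bias point can be made concrete with a toy check in the spirit of the hiring-algorithm example: compare how often candidates from different groups are selected. The data below is invented, and the "four-fifths" ratio used is just one common rule of thumb for flagging disparate impact, not a complete fairness audit.

```python
# Toy disparate-impact check: selection rate per group, compared via the
# "four-fifths" rule of thumb. Data and threshold are illustrative only.
from collections import Counter

# (group, hired?) pairs -- a made-up outcome log for the example
decisions = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

selected = Counter(group for group, hired in decisions if hired)
totals = Counter(group for group, _ in decisions)
rates = {group: selected[group] / totals[group] for group in totals}

# Ratio of the lowest selection rate to the highest; < 0.8 is a warning sign
ratio = min(rates.values()) / max(rates.values())
print(rates)            # -> {'men': 0.75, 'women': 0.25}
print(round(ratio, 2))  # -> 0.33, well below 0.8: flag for human review
```

A check like this doesn't remove the bias, but it makes the mitigation decision, and who is accountable for it, explicit.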
Building on what you said there about the accountability piece: obviously the big topic of conversation for a lot of people is "we can't get anybody to be a data owner". So the idea of people being an agent boss and an agent owner is probably even more scary, and I think it's going to come down to those senior leaders who are going to take that accountability. Literacy is one thing: if they don't understand it, they won't trust it, and if they won't trust it, they won't let you use it. But I think it's going to take a lot of one-to-one time with senior leaders to really make sure they get it, with it communicated to them in a sensible way they can understand, so that they're willing to sign off on being accountable for it. So don't overlook the need for proper one-to-one engagement and coaching, because I think that's the only way they're going to be willing to be owners moving forward. I'd say that's a big one in achieving that ownership piece.
Yeah, and I would say as well that we're actually getting a lot of requests from non-executive directors who are really keen to understand this space. If you think about what non-executive directors are there to do, it's to manage the risk of the business and ensure it will survive and deliver for its shareholders, and they're actively having conversations with us about exactly what we've just spoken about. So there will be a push downwards soon, with these questions happening at board level, which is very positive. Don't be surprised if they start coming to ask all these different questions; it will come, because I think the non-executive director level is now really starting to take an interest in what this means for them as a business.

Quite natural, right? Lots of really smart people with more time on their hands than the average person, because they're a non-exec, and a load of responsibility. I'd also be quite worried about this space and want to get to know it. So yeah, that does make a lot of sense. Have we got any other quick questions? I know it's coming up to the hour, but is there anything else we want to ask just before we wrap up?
Yeah, I think we have time for one more question. So someone is asking: hi Greg and Steph, you shared the AI literacy curve and how teams and individuals can evolve through maturity, but how have you seen organisations benchmarking or measuring this?
Yeah, I think this is always a really tough question from a people perspective. The first thing everyone needs to understand, and I saw a post earlier on LinkedIn about how you measure this that I fundamentally disagreed with; it was someone else who works in this space, and it's just incorrect. Your big barrier here is a concept called the Dunning-Kruger effect. It's a theory from psychology: the idea that the less we know about a subject, the more we think we know about it. And that applies to everything around data and AI. Most people in the business, if you ask them, will think they're as data literate or AI literate as they need to be; some of them won't think they need to be at all, and that'll be enough for them, so they're as happy as Larry. Then you've got people who think that because they can now use ChatGPT or Copilot, they must be AI literate. So, and I say this as somebody who sells enterprise-wide baseline assessments of people, we do deliver organisation-wide data literacy and AI literacy assessments, but for me that level of subjectivity makes it quite difficult to trust the data. It's quite a biased data set, by the very nature of Dunning-Kruger. You're probably better off doing more targeted interventions in key areas, where you can actually baseline the true competency of people and spend more time doing it objectively, rather than trying to go org-wide and test the organisation subjectively. There are lots of different ways, and I'm happy to have that conversation with whoever asked the question, because it's a really important one. But you could also spend a lot of money trying to measure something that is never really going to give you the trustworthy insight you wanted. So do feel free to ping me on LinkedIn, and I'll have that conversation with you and give you more information. More than happy to do that.
Okay, so we are at time, and we really appreciate you joining today. If you've got any further questions, you can ping them over to us and we'll try to answer them. Neither Steph nor I would mind you connecting with us on LinkedIn, and we look forward to seeing you on the next webinar. My next webinar is actually tomorrow, with Synure, and we're talking about AI again, so if you'd like, you can join us there. Otherwise, feel free to join the next Data Literacy Academy webinar. Speak to you soon. Bye-bye.

Thanks very much.
Unlock the power of your data
Speak with us to learn how you can embed org-wide data literacy today.