
Delegate to AI: The Skill Every Pro Now Needs

👉Full Show notes
https://www.microsoftinnovationpodcast.com/751

Hosts: Mark Smith, Meg Smith

As AI agents become digital labour, humans need sharper collaboration and delegation skills to work alongside them. The episode maps how to brief agents with clear goals, a definition of done, checkpoints, and feedback loops, and when to keep humans in the loop based on risk. Expect interfaces like Microsoft Teams surfacing agent actions while BPM and process mining reshape workflows. The hosts weigh responsibility, repeatability, and trust, noting the variability of large language models and regulation such as the EU AI Act. Practical exercises and consumer examples show how to build these skills now.

Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz

🎙️ What you’ll learn

  • Define goals, constraints, and a clear definition of done for agent tasks.
  • Set feedback loops, checkpoints, and timeframes to prevent rework.
  • Calibrate human-in-the-loop oversight by risk tier and impact.
  • Use BPM and process mining to prepare and optimise workflows for agents.
  • Practise explicit, testable instructions to improve delegation and collaboration.

Highlights

  • “what are the human skills that we need to learn to work with AI better.”
  • “software-based labor that partners with humans on specific processes.”
  • “We can't be future-proofed anymore. The best we can hope for is future ready.”
  • “delegation being an important skill for people who are going to be working with and instructing agents to do tasks.”
  • “I think that the human to human collaboration is going to be such an important part.”
  • “What does the definition of done look like?”
  • “Parkinson's Law states that you will stretch out the completion of your tasks until they fill the time available to complete them.”
  • “how should responsibility be managed when these agents make mistakes?”
  • “communication happens at the listener's ear, not at the speaker's mouth.”

🧰 Mentioned

  • Business Process Management (BPM) https://en.wikipedia.org/wiki/Business_process_management
  • Bon Appétit YouTube channel https://www.youtube.com/channel/UCbpMy0Fg74eXXkvxJrtEn3w
  • Lisa Crosbie YouTube channel https://www.youtube.com/c/lisacrosbie
  • Parkinson's Law https://en.wikipedia.org/wiki/Parkinson%27s_law
  • The 4-Hour Workweek https://en.wikipedia.org/wiki/The_4-Hour_Workweek

Connect with the hosts

Mark Smith:  
Blog https://www.nz365guy.com
LinkedIn https://www.linkedin.com/in/nz365guy

Meg Smith:  
Blog https://www.megsmith.nz
LinkedIn https://www.linkedin.com/in/megsmithnz

Subscribe, rate, and share with someone who wants to be future ready. Drop your questions in the comments or the WhatsApp group, and we may feature them in an upcoming episode.

✅Keywords: collaboration, delegation, digital labour, ai agents, human in the loop, business process management, process mining, microsoft teams, copilot, entra id, parkinson's law, eu ai act

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:00 - Introduction to AI and Human Skills

02:50 - The Rise of Digital Labor

05:30 - Collaboration in the Age of AI

07:45 - The Importance of Delegation

10:39 - Human-AI Interaction and Trust

14:06 - Responsibility and Accountability in AI

16:38 - Evaluating AI Performance

19:24 - Consumer Experiences with AI

22:12 - Conclusion and Future Topics

Mark Smith (00:11)
Hey, welcome back to the AI Advantage. My name is Mark and I'm here with Meg, my lovely wife. Hey Meg.

Meg Smith (00:17)
Hey, how's it going? I do that every time.

Mark Smith (00:19)
So good, so good. Yeah, you do that every time. You know, you've seen me today already. We've set the kids off to daycare. We're on day two of the week. It's Tuesday for us because we had a public holiday yesterday. So doing this recording. And we're excited to talk about the AI Advantage: skills to thrive in the intelligence age. Right. And what we're trying to do, what we're trying to piece together

Meg Smith (00:25)
At least once or twice.

Mark Smith (00:44)
through this series and the work that we're doing at the moment is: what are the human skills that we need to learn to work with AI better? And so the more we lean into those very human skills, the more it will allow us to perhaps get the most from AI, work most effectively with AI. And today's an interesting topic because it's not like we have answers today, but it is around more questions that we have, and really we're open to the community,

you know, to your thoughts on this, and I'm going to do a lot more research on this because it's kind of like 12 subjects that I'm really focused on at the moment and this is one of them. In fact, both of us are, and we're doing a lot of research and trying to understand what are the skills that I need to learn, rather than knowledge I need to learn, to thrive in the age of AI. And so today is around collaboration and delegation

and how human and AI teams will work together. Something I've seen coming out of Microsoft over the last nine months or so is this lean into digital labor, and digital labor being, not people, but, you know, artificial intelligence that will become part of the labor force.

And I don't know if you've looked at things like Entra ID recently, but I heard announcements that, just like we have an authenticated credential to log into a system and use it in business, agents are gonna have the same thing. We're gonna know which agent it was, there are gonna be audit trails, et cetera, and we're going to understand what they do, how much they do. And...

And I think that's why there's this massive lean into this, particularly from a business perspective: the ability to make money and have resources that can make you money that don't take leave, don't get sick, and work 24-hour days. I think there's a compelling reason why companies are going to lean more and more into this concept of digital labor. So for me, that says, okay then, what do we do?
 
And all through this AI journey, I'm saying, so what do we do as humans? What are we going to do differently? How are we going to operate differently? When I think of a digital worker, I did some research, and the concept is it's a software agent that can take goals, can plan, and can call tools that it wants to use. In other words, other software applications, looking into databases and completing tasks, all with an audit trail,

and all of that adheres to, let's say, organizational policy and perhaps legal or accounting rigor, you name it; it can have that as a basis and grounding point. So it's software-based labor that partners with humans on specific processes. Distinct from chatbots and classic RPA, robotic process automation, it plans and decides, within the policy of the organization, the steps that it must complete and the objectives that it must achieve.

Meg Smith (03:47)
The timing of this episode is so ironic to me because the public holiday that we had yesterday is Labor Day, right? I know that there's one in Australia as well, and I think there's one in the UK as well. You know, the celebration or the recognition of the workers, right? The people workers, that's what the day is for. And now we're talking about digital labor and digital workers.

We also in New Zealand last week had the largest-scale strike action in one day in, I think, 40 years: more than a hundred thousand workers across education and healthcare. They went on strike because the deals on the table are not keeping up with the cost of living. So these are really real issues that people have. But I come back to this idea:

We can't be future-proofed anymore. The best we can hope for is future ready. So the skills here that we're talking about are collaboration and delegation. Those skills actually have already been present in the workplace for a long time. It was our friend Lisa Crosbie, who has to be one of the best educators when it comes to how you can use Copilot. Check out her YouTube channel. But I remember having a conversation with her earlier this year.

And she made the point about delegation being an important skill for people who are going to be working with and instructing agents to do tasks. And she said, how many of our workforce today have never experienced delegating something to someone else? And equally more important, and I'm thinking as a millennial, yeah, actually I've had peers who I have mentored, but I have not managed teams. And she was sort of saying that that group of,

professionals who have become skilled delegators are going to have a leg up when it comes to incorporating agents into their workforce and into their working teams. But I also, as you're talking Mark, I had this real like, don't you get like angels and demons vibes when it comes to how people talk about agents? There's this like amazing like marketing spin that like when I hear the word AI agent now.

I'm like, wait, which shoulder are we going to here? Is it the one that's like, hey, this is amazing, everything's so good, everyone should have lots of agents working for them doing this great work? Or is it the devil on the shoulder going, this is bad, you know, they're going to make decisions on your behalf, you're not going to know what's going to happen? And I feel like I whiplash back and forth between them. I don't know. Is that just me?

Mark Smith (06:17)
No, the marketing hype is crazy, right? And I think that the machine and the explosion of AI needs this kind of marketing to get people bought in so that investment flows. And if you look at the economies around the world, a lot of it is being buoyed along at the moment by investment in AI.

And when I say investment, I'm talking about infrastructure, you know, power stations, new data centers, GPUs, the actual foundation, let alone, of course, what we hear about happening with LLMs and the advancements in that space. What's very clear is that this thing is only going to get bigger at the moment.

People are saying, is there an AI bubble? I don't know that that is the case at all, but let's get back to collaborating. You know, what would that look like on a daily basis? Will it mean that we... let's say we take an interface like Microsoft Teams. It could be Slack for others.

Slack or Teams actually is probably the interface I'm thinking of, and then Signal is another one that's, you know, rising in prominence at the moment. Will we just have an interface? You know, let's take your role right now. Will it be a case of you really sitting in front of a screen that is popping up, let's say, cards like in Teams, you know how you get an information card pop up,

and perhaps we'll have queues of those that we need to respond to, where an agent has done something and it needs that kind of human in the loop to click before moving on to the next thing. You're just going to have the bit that needs that human input. And of course my question then is, how long till the validation of correctness is at 99.999% and they go, well, we don't need that to be fed to a human anymore,

we've got that one under control. I think, you know, is it going to be that? Or is it then basically that we're going to be the master conductor, routing scenarios that fit outside the norm, because all the routing of approved scenarios inside an organization I think will work automatically. And so when I think about this, and it's interesting, a couple of weeks ago I did a podcast on business process

management, or BPM. And I think business process management is really critical in this because it is how your company carries out its processes. And in more traditional companies, that is highly documented. There are ISO standards that your organization complies with to show it has rigor around that, to show that customers are supported in a rigorous way, meaning that the way we do it

is documented, we have flags in place to check that it's been done, that the definition of done has been achieved. So I'm wondering, will our roles involve more of us being involved in business process optimization when we're feeding in or delegating to a new digital worker: looking at a process and, let's say, needing it to be updated. And then I can see, in time, when we look into process mining and these tools that have been around for a while, and if we look at the advancements of them,

You know, I can imagine there's gonna be an AI whose role inside the organization is process optimization, right? That's gonna be looking at, how can I improve the process? And then perhaps, if it gets enough validating factors, upgrading a process in real time. So you get these processes becoming super efficient, and then I go, so what do we do as humans and what's our role? And I'm wondering, you know, and I'm just spitballing here off the top of my head, if it's going to involve us doing a lot more human to human communication and collaboration.

Meg Smith (10:10)
Well, so when I think about collaboration, the first thing that comes to mind, and this is not AI-human collaboration, this is just collaboration as I'm used to it in my world and my life, right, between people. The number one thing that I have found is most effective when collaborating with people is getting really clear on where this Venn diagram of what you want and what they want overlaps. And if you can understand where that overlap is and

hone in on that, that's where you get really great collaboration because everybody knows what's in it for them and they know why they're there and what they're trying to achieve. So if you take that principle and you add in an AI agent, and I'm thinking, I'm just thinking of one scenario where we have a fortnightly call with people that are doing our 90 day mentoring challenge. And in that call, we often

You know, that is exactly as you just said, it's human to human collaboration. That's kind of what we keep hearing is the magic of the program, right? People have the opportunity to connect with others who are facing similar challenges to them. Now in the future, we could have an agent in that call whose job is to capture frequently asked questions and be constantly updating our knowledge base on what are the things that people are asking and what's the latest. And that

goes to things like, you know, a classic one we talk about a lot is the T-shaped consultant and how you as a consultant should try to be broad in one, sorry, broad in a lot of key technologies and deep in one. And actually one of our alumni, Houdang, he's done a really cool graphic on that. And straight away in the comments, one of our other alumni, Sharon, she was like, actually, Mark's talked about the consultant as well. And we're hearing this idea that actually,

You need to be deep in a couple of things, like, you know, one technology like Power Apps and Copilot, because these things are all important. Now that's quite specific in the context of our work there. But I think this idea that you could have an agent that's part of the conversation, doing something that is valuable to everyone, then allows us to start to go, hey, if there's something useful here, how can we make sure that

the people in that call feel as comfortable to share as they would if they knew they were just sharing with other humans? If they don't feel that they understand how that information and their stories and their experiences are going to be used, if it was me, I might find myself shutting up, right? Like, I'm not sure if I want to share as openly as I would, you know. So that's part of it too, it's trust.

Mark Smith (12:42)
Yeah. Yeah. It's interesting then moving to the topic of delegation. And as you said before, there are a lot of people who, in their career, have never had a management-type role, and they've never needed to delegate to somebody else. And delegation is not about telling somebody to do something, go and do this, you know, go and take the trash out, or it kind of is, but it's not delegation in a business context, right? In a business context, you need to be clear about: what is the goal?

What does the definition of done look like? Right? And that needs to be part of the delegation communication. This is what I will consider a successful outcome if this task that I'm allocating to you is done correctly. Two, what's the feedback mechanism for that delegation? And I suppose what's the checkpoint?

for it. You know, I always found when you take on interns or grads into a business, you have to be much more hands-on as a manager, as opposed to somebody further along in their career. In other words, this might be the first time they're doing a task, so you explicitly explain it and then you might get them to explain it back to you, so we've got a clear understanding.

And then, if anything goes wrong, not to flounder around for hours trying to work it out. Like, I had a policy in my Power Platform and Dynamics teams that as a functional consultant or developer you had two hours to work a problem out yourself. If you couldn't in that two hours, you had to broadcast it to everybody in the team for a possible solution, so we can bring minds together

and collaboratively solve it, because you often will have other people who have come from different career backgrounds, et cetera, who will have an answer, and we'll be able to solve it. So you've got to have a framework, you know, to operate in. And then you have to have a timeframe, right? Because people will take any time allocated based on the timeframe given. In other words, you know, when you're at university, you're set an assignment.

And when did it get done? It got done in the last 24 hours or 48 hours, right? It was not done.

Meg Smith (14:50)
I can't

Mark Smith (14:54)
Carry on.

Meg Smith (14:54)
tell you how many assignments I submitted at like 11:59pm. And actually my most embarrassing one ever was once I worked out... I was doing distance learning, so one of the universities in New Zealand, the one that I went to, meant that I could do it from anywhere. But this was when I could fax my assignment right up until the deadline.

If I had to post my assignment, my physical copy of my document, away, I needed to be done a day or two before the deadline, right? But once I worked out that I could go down to my local post shop and use the guy's fax machine to actually send my assignment, I had so much more time up my sleeve. I know we work so differently, Mark. I know it bothers you that I'm like a last-minute pressure kind of person, but yeah, I'd use all the time given to me.

Mark Smith (15:38)
There's a concept for this, which is called Parkinson's Law. Parkinson's Law states that you will stretch out the completion of your tasks until they fill the time available to complete them. Or, explained more simply, if you had 20 hours available for a task that could be completed in 10 hours, it's quite possible it'll take you 20 hours to complete it. Right, and it's been proven. And I first came across Parkinson's principle, I think, with

Timothy Ferriss, who had it in The 4-Hour Workweek. He found that, you know, whatever time you allocate to something, it'll take about that much time, because of the way human nature runs. And so when we come back to delegation, we're gonna be very specific around these things. And these are skills that a lot of people haven't learned in their career. And so I suggest that you go, okay, how do I start developing my delegation skills? How do you start flexing that muscle? How do you start developing

the skill and the art of delegation and how do you validate with yourself that you get better at it? I'd love to hear your feedback on that. And also on collaboration. How do you think collaboration is going to change? Because I think that, you know, with the agents, it will be quite different than with humans. And I think that the human to human collaboration is going to be such an important part.

of what we do and who we are in business moving forward, not just a collaboration with tech.
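
To make the delegation brief Mark describes concrete, here is a minimal sketch in Python (illustrative only; the structure, field names, and example task are our own assumptions, not from the episode or any Microsoft product) of how a task handed to a digital worker could capture the goal, definition of done, checkpoints, feedback channel, timebox, and the two-hour escalation rule.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentBrief:
    """A delegation brief for a digital worker (hypothetical structure)."""
    goal: str                          # what a successful outcome looks like
    definition_of_done: List[str]      # explicit, testable completion criteria
    checkpoints: List[str] = field(default_factory=list)  # where a human reviews progress
    feedback_channel: str = "Teams"    # where questions and results are surfaced
    deadline_hours: float = 8          # timebox, so Parkinson's Law doesn't stretch the task
    escalate_after_hours: float = 2    # the "two-hour rule": ask for help instead of floundering

    def is_well_formed(self) -> bool:
        """A brief is only delegable if the outcome and the 'done' checks are explicit."""
        return bool(self.goal.strip()) and len(self.definition_of_done) > 0


# Example: briefing an agent to draft FAQ updates from a mentoring call
brief = AgentBrief(
    goal="Summarise new FAQs from this fortnight's mentoring call",
    definition_of_done=[
        "Every question asked on the call appears once, deduplicated",
        "Each answer cites the speaker and timestamp",
        "Draft posted to the knowledge base as 'pending review'",
    ],
    checkpoints=["After transcript ingestion", "Before publishing"],
)
assert brief.is_well_formed()
```

The design choice mirrors the conversation: the brief forces the delegator to spell out the outcome and the checks before any work starts, which is exactly the discipline the hosts say many people have never had to practise.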

Meg Smith (17:03)
My favourite piece of advice you've given on communication is that communication happens at the listener's ear, not at the speaker's mouth. And it really puts the onus on the communicator to be clear. There's a really great series on the Bon Appétit YouTube channel. I think there was some controversy about how they were treating their staff, but there was a series before that that was amazing. They had one of their chefs, Carla,

who was this amazing recipe developer. And the way that the video worked was she would be facing one way with a workstation of all the same equipment and all the same ingredients. And a celebrity guest would be facing a camera the other way at a workstation with all the same things, and, only using vocal instructions, she had to teach that person, or get them to follow through the process, to ultimately create the same dish. And it was

amazing to watch as a communication exercise. And this is how clear you have to be in your delegation when you are giving natural language instructions to an agent. This is the kind of thing that you can practice with your friends or with your colleagues: just how good are you at communicating explicitly what you want done to someone who doesn't have the same experience as you?

Mark Smith (18:21)
I love it. Okay, let's go and take some messages from the folks that have submitted via WhatsApp. We've got this one here from Danny Cahill. If AI agents are embedded members of our team performing not just repetitive automation, but higher-order decision support, how should responsibility be managed when these agents make mistakes? Humans can be trained, disciplined, or even dismissed for errors, but AI agents can only be retrained or re-prompted.

So if an AI agent takes an action that produces harm, who's responsible? The designer, the operator, or the organization that deployed it? At the same time, if we require humans to review every AI-generated action to assume responsibility, we lose much of the efficiency AI was meant to deliver. So where is the balance between oversight and autonomy? Now I think this is a really cool...

question from Danny, because I think, one, the EU AI Act has answered a lot of who is responsible. There's actually legislation now around who is responsible in this, and we're seeing more and more legislation rolled out in various geographies around the world around who is responsible, because definitely organizations are going to be held responsible for these actions, and of course that will pass on to the humans, et cetera, around it.

I don't think that AI will ever be responsible, because I think that would create too much legal risk. I mean, how would you penalize it? You know, we're going to reduce your power supply? So I think that will be difficult, but I think people will definitely be responsible. The second part of the question is around, if humans are in the loop so much, how are we going to gain the efficiencies? Well, I think humans in the loop are only going to be there to the point

where the percentage accuracy of the AI is so high that humans will be removed from the loop. And I don't think we're there yet, but I definitely see that coming: when the margin of error from the AI becomes so small that it becomes a calculated risk for the organization to take. And so if you think of three levels of risk profile, no risk, medium risk,

Meg Smith (20:17)
talked to

Mark Smith (20:34)
high risk, and let's say high risk involves legal risk to your organization, you're going to handle that differently than you are the medium and the low risk types where AI is used in the mix.
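
As a rough illustration of the risk-tiered oversight Mark sketches (a hypothetical policy, not a product feature; the tier names, decisions, and threshold logic are assumptions), an organisation could route agent actions to a human only when the assessed risk crosses a threshold, with every decision written to an audit trail.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1     # e.g. internal FAQ updates
    MEDIUM = 2  # e.g. customer-facing replies
    HIGH = 3    # e.g. refunds, contracts, anything with legal exposure

def route_action(action: str, risk: Risk, audit_log: list) -> str:
    """Decide whether an agent action auto-completes or waits for a human.

    Every action is logged either way, so there is an audit trail of
    what the agent did and who, if anyone, approved it.
    """
    if risk is Risk.HIGH:
        decision = "queue_for_human_approval"   # human in the loop stays mandatory
    elif risk is Risk.MEDIUM:
        decision = "notify_human_can_override"  # surfaced as a card, e.g. in Teams
    else:
        decision = "auto_complete"
    audit_log.append({"action": action, "risk": risk.name, "decision": decision})
    return decision

log = []
print(route_action("Issue refund over $500", Risk.HIGH, log))    # queue_for_human_approval
print(route_action("Update internal FAQ entry", Risk.LOW, log))  # auto_complete
```

The point of the sketch is the balance Danny asks about: oversight concentrates on the high-risk tier, while low-risk actions keep the efficiency gains but remain auditable.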

Meg Smith (20:47)
Yeah, actually, I think our next question or comment, from Alex, is also part of the same discussion, so maybe bring that one up. I really like, too, out of Danny's one, this idea of how do we evaluate performance of people versus agents? That's a great question. And what biases do we have in terms of errors that we kind of think are okay for AI agents to make, but that we judge people really harshly for?

Or the opposite, right? That we think humans make those errors all the time and we throw out an AI agent's value because we think it's horrible when it does something that humans do too, when it's trained on what we do. Okay, so this is the question from Alex McLaughlin. My concern is about the consistency and repeatability of AI actions. Since AI can provide different answers to the same prompt at different times, how can we build trust in an autonomous large language model based agent?

Currently it seems there is no way to fully understand the decision-making process of a large language model. In contrast, working with human team members involves ensuring a shared understanding of the project, which is more symbolic and less about matching patterns and data. That's a really good point, right? Cause you're always saying that, like, at the point that we can be sure that the agent is going to accurately give the right answer,

then we will see humans in the loop less. But in the current model, or in the current way that we're building agents based on large language models, you are going to get a different answer every time. And there are things you can do to, you know, train it and ground it. But we also even know that over time there's, I can't think of the right term for it now, but you know, there is,

the accuracy drops and they need to be retrained. It's like, maybe that's why we're getting this angel-demon experience at the moment about agents, because there's an extra layer of development that needs to happen to get that kind of accuracy.

Mark Smith (22:43)
Yeah. Also, I think that LLMs won't be used for everything, for AI broadly handling every situation. But that's not saying AI won't be. So I think that things like machine learning, pattern recognition,

Meg Smith (23:00)
computer vision.

Mark Smith (23:01)
which are about four others outside of LLMs, I think they will still coexist. And so therefore, like, if you take a look in Copilot Studio, for example, you can still do all the workflow, the if-this-then-that automation, et cetera. Through the agent you can recreate that, and you can still use things like machine learning as part of it. But then maybe it's just the anomalies that you dip out to the LLM for, where,

just like with humans, the human might make a snap decision based on all the data they have with them. So I don't think a lot of business process will all be LLM-centric. I think it will be AI-centric, but it'll lean more on that traditional pattern recognition, repeatability, all the different types of testing you get with machine learning, and the other forms of AI would apply in that situation.
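
A sketch of the hybrid pattern Mark is describing (illustrative pseudologic only; this is not Copilot Studio's actual API, and the rules, classifier, and threshold are assumptions): deterministic workflow rules and a conventional machine-learning classifier handle the routine cases, and only low-confidence anomalies "dip out" to an LLM.

```python
from typing import Callable, Dict, Tuple

def handle_request(text: str,
                   rules: Dict[str, str],
                   classifier: Callable[[str], Tuple[str, float]],
                   llm_fallback: Callable[[str], str],
                   confidence_threshold: float = 0.9) -> str:
    """Route routine work deterministically; send only anomalies to an LLM."""
    # 1. Deterministic workflow rules (classic if-this-then-that automation).
    for keyword, outcome in rules.items():
        if keyword in text.lower():
            return outcome
    # 2. Conventional ML classifier: repeatable, testable pattern recognition.
    label, confidence = classifier(text)
    if confidence >= confidence_threshold:
        return label
    # 3. Anomaly: fall back to the LLM for a judgment call.
    return llm_fallback(text)

# Stubbed components, for illustration only
rules = {"reset my password": "send_password_reset"}
classifier = lambda text: ("log_support_ticket", 0.95 if "error" in text else 0.4)
llm_fallback = lambda text: f"escalate_to_llm({text!r})"

print(handle_request("I got an error code 42", rules, classifier, llm_fallback))
print(handle_request("Something weird happened with my invoice", rules, classifier, llm_fallback))
```

The first request resolves through the classifier without ever touching an LLM; only the second, low-confidence case escalates, which is the repeatability trade-off Alex's question is getting at.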

Meg Smith (23:52)
So where I want to wrap for today, actually, is a challenge: as a consumer, observe where you're coming in contact with chatbots that may or may not be powered by large language models. They might be powered by AI agents. And observe where you like that experience and where you find that experience frustrating. I had an example recently when I was trying to get a refund, or trying to understand my options, within New Zealand.

I feel like we also shared a different story about that. But I ended up having a conversation through WhatsApp with an AI agent of theirs. And then the handoff to the actual person was wonderful. And it meant that I was able to quite quickly navigate that system. But it was because I knew I was working with AI in the first bit; I wasn't responding to a real human. I had also used my AI, my ChatGPT, to understand the process, the possible refund structures available to me, and my rights.

So that meant that by the time I spoke to a person, in five minutes or in a few minutes, I was able to get the answer that I wanted, that they could give me. So as a consumer, where are you coming across this? Where is it working and where is it a bad experience? Cause that can help us to sort of ground our understanding of the use of these things. Now next week, we're going to be talking about solving problems, so complex problem solving. How can AI help us with that?

And yeah, so if you've got questions about that or examples, join our WhatsApp group. We'd love to have you. There are some brilliant and encouraging discussions happening over there. And also, if you were listening to this episode, you can watch it if you prefer to see our, I don't know, I was going to say our lovely faces. Join us over on YouTube for that, or on Spotify if that's where you prefer. But other than that, we hope you have just a fantastic week

and, you know, turn towards the angel more often than the demon, I suppose, would be my final call to action. Take care, everyone.


Mark Smith

Mark Smith is known online as nz365guy and has a unique talent for merging technical acumen with business strategy. Mark has been a Microsoft Certified Trainer for 15 years and has been awarded a Microsoft MVP for the past 12.

Throughout his 20+ year career, he has been deeply involved with Microsoft technologies, particularly the Power Platform, advocating for its transformative capabilities.

Mark created the 90 Day Mentoring Challenge to help people reach their full potential with Dynamics 365 & the Power Platform. Running since 2018, the challenge has impacted the lives of over 900 people from 67 countries.


Meg Smith

I’m a digital strategist, author, and purpose-driven entrepreneur. After spending a decade at Google, I left to co-found Cloverbase, an AI adoption and skills company that creates AI literacy and tech enablement programmes. Our flagship programme, the 90 Day Mentoring Challenge, helps people reach their full potential and has already impacted more than 1,400 people from more than 70 countries.

My career experience spans roles in media and a life-changing sabbatical that included walking the Camino de Santiago, where I gathered inspiration for my first book, Lost Heart Found. Now I write about my personal sustainability journey on the blog HiTech Hippies. I’ve also co-authored a book for Microsoft Press about Copilot Adoption.

I serve on the boards of Fertility New Zealand and Localised, adopting a learn-it-all approach to technology and strategy in aid of balancing family, community service, and entrepreneurship.

Drawing on my design thinking and change management skills, I now develop courses and learning programmes to help people use AI in their day to day work, always centering the human impact of technological innovation.