Copilot Chaos: A Simple Map of Microsoft's AI

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

Dani Kahil breaks down the growing complexity of Copilot and AI agents in the Microsoft ecosystem and how practitioners can make sense of it. The conversation focuses on practical mental models, minimum viable agents, and real-world use cases, including document-heavy processes in higher education. The core insight is that successful AI adoption depends less on tools and more on clear roles, scoped responsibilities, feedback loops, and realistic expectations of non-deterministic systems.

👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/813

🎙️ What you’ll learn

  • How to distinguish between Copilot experiences, products, and build tools across Microsoft platforms
  • Why diagrams and visual models help reduce AI and Copilot confusion for teams and leaders
  • How to define a minimum viable agent to prevent scope creep
  • Why treating agents like junior co-workers improves outcomes
  • How feedback loops and incremental task expansion make agents useful in production

Highlights

  • “It took me a lot of time to kind of process the information.”
  • “I always like visuals and kind of diagrams.”
  • “There are so many different versions of the different copilots.”
  • “These are completely separate products.”
  • “I started looking at them as roles and job functions.”
  • “Treat your agent like a co-worker.”
  • “In two minutes, you can build an agent, and it’s useless.”
  • “Start with a very simple instruction at the beginning.”
  • “It will never be 100%. That’s the nature of generative AI.”

✅Keywords
copilot, ai agents, power platform, microsoft 365, copilot studio, agent builder, azure foundry, power apps, automation, generative ai, minimum viable agent, ai adoption

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:02 - Why Everyone Is Confused About Copilot (And Why That’s the Real Problem)

02:45 - The Diagram That Finally Makes Sense of Microsoft’s AI Landscape

06:40 - The Hidden Risk of Using the Wrong Copilot at Work

11:00 - Stop Thinking About Agents as Automation. Start Thinking About Roles.

14:00 - How to Build an Agent Like You’d Onboard a New Hire

18:24 - A Real‑World Agent Use Case That Actually Delivers Results

23:04 - The Minimum Viable Agent: How to Avoid Scope Creep and AI Failure

00:00:01 Mark Smith
Welcome to the Power Platform Show. Thanks for joining me today. I hope today's guest inspires and educates you on the possibilities of the Microsoft Power Platform. Now, let's get on with the show. Welcome back to the Power Platform Show. Today, I'm joined by Dani Kahil, the legend. He's been on the show many times, and he's joining me from Mermaid Beach in Queensland, Australia, where the sun never goes down. Dani, welcome to the show.

00:00:37 Dani Kahil
Thank you, Mark. Thank you for having me, the third or fourth time. I've stopped counting at this point, but thank you. Yeah, it's always a pleasure to be back.

00:00:45 Mark Smith
It's always good to chat with you. What's top of mind for you right now as we kick off 2026? Well, we're already into month three of 2026, March now. What's top of mind for you in 2026?

00:01:00 Dani Kahil
Yeah, look, a lot going on since the beginning of the year. Last year was a bit slow in work activities, so I took a good rest, spent time with family here, as you said, on the Gold Coast, Mermaid Beach. Lots of fun activities, water activities. We went hiking and took the boat out with the kids. So that was amazing. And then the year started pretty strong, with a lot of work happening in the Power Platform, but AI as well, like AI and the Power Platform. So that was pretty good. And now my head is slowly starting to get ready for MVP Summit, right? So I'm flying out in two weeks, presenting at the Canadian Summit.

00:01:45 Mark Smith
Yeah, always a big highlight of the year, right?

00:01:48 Dani Kahil
Yeah, exactly. Seeing the other colleagues I interact with online throughout the year. And I'm also presenting at the Canadian Summit with Hamish Shield. So that will be great. We start in Canada, then we take the bus. Did you do that last year? Yeah, it was amazing. Yeah, it's great. They come and pick you up, and we're all on the bus.

00:02:13 Mark Smith
I was on the bus last year.

00:02:14 Dani Kahil
Yeah, all drive to Seattle. They drop us at the hotel. Like, it's great. Yeah.

00:02:21 Mark Smith
Very nice. Very nice. Give me an overview of the AI landscape from your perspective, in the context of the Power Platform and of M365 Copilot. Where do you see it coming together? Where do you see it fragmented? How are you looking at all those moving parts working together in the work you do for customers?

00:02:45 Dani Kahil
Yeah. So look, over the past two years, it has been an evolving beast. Let's call it that way, right? Lots of moving parts, a lot of different names. And Microsoft is only one of the players; there are the other players as well. And all this noise on social media, on LinkedIn. So it was very hard for me to make sense of it all. I like to have a bit of a high-level view of the technology landscape where I operate. I operate mostly in the Power Platform and Dynamics 365. So what is Copilot around these technologies? So that when I discuss with my clients, my customers, my colleagues, I know a bit about what's happening in that world with AI and Copilot. It took me a lot of time to process the information. I always like visuals and diagrams, and when I do those diagrams is when I learn. I'm a visual learner, so doing a diagram with as many details as possible helps me learn and quickly remember where things fit in that ecosystem. So I started doing a diagram. It took me quite some time, one of the diagrams that took me the longest to create, I think. It took me months. It started with a draft, and then you iterate. And when you iterate, Microsoft announces new things and so forth. So it's a constantly moving beast, right?
So I created a diagram where I started by looking at what Copilot actually is. What are the experiences that a normal user will have using what we call Copilot? I started with that top layer. The experiences are: well, they can chat with a bot, chat with an agent. That's the Copilot they chat with. They can click a button; in Outlook, there's a Copilot button to summarize, you know, an e-mail you want to send. So there's that user-triggered button, another experience. You have your autonomous agents, which do something in the background for you and do the whole piece of work for you. And another type of experience I see quite a lot is AI insights: you finish your Teams meeting and it gives you a nice summary, or you're in Power Apps and it summarizes the highlights of your timeline and so forth. So those are the types of experiences you can get from your copilots and AI agents within the Microsoft ecosystem. So I started creating that layer.
Then I looked at the tools that are available. And there's a distinction here that can really throw people off: you have your Windows Microsoft Copilot, and then you have your M365 Copilot. A lot of people don't realize the difference; I myself didn't until someone pointed it out. It was during a call where I was explaining how I was using M365 Copilot to brainstorm ideas, upload documents, and chat with an agent. Someone pinged me after the call and said, Dani, make sure you use the M365 Copilot to do your work, and don't use the Copilot on your PC, on your laptop. That's your non-work Copilot, and it will share data with the OpenAI models. It's not secured, right? Until someone told me this, I hadn't really realized that there are so many different versions of the different copilots. There's the normal Copilot for the personal user, and then you have your M365 Copilot for the professional user. So again, a difference there that I laid out on the diagram: the Windows, personal Copilot versus the M365 Copilot, and the features they provide. Building that ecosystem diagram, I was also hearing on social media about the specific role-based copilots: Copilot for Sales, Copilot for Customer Service. What are these? Are they part of M365 Copilot or not? So I started looking at that, trying to understand where they come from and what they do.
And I found out that these are completely separate products. They have separate licenses. They work separately from the M365 Copilot. So that's another type of product, right? And then you have your copilots living within the different apps. In the Microsoft ecosystem, you have a Copilot in Word, a Copilot in Azure, a Copilot in Power Apps, in Power Automate, in Power Pages. That's why it took me so long just to map out all the different tools that we call Copilots. And after that, I thought, well, let's go one layer deeper and try to understand how you then build, tweak, or configure those copilots. So Microsoft gives you a whole series of copilots: M365 Copilot, your role-based copilots, your copilots within the specific apps. But Microsoft is also talking about how you can create your own copilots and your own agents. And again, there are different tools to build them. You have your M365 Copilot agent builder, which is your no-code agent builder. Then you go to Copilot Studio, which is your low-code agent builder. And then you can even go to Azure AI Foundry. So mapping that out means that when I discuss topics about Copilot and AI, I understand a bit where those features and tools sit in that ecosystem. I use the diagram quite a lot, just before I jump on specific AI calls with my clients, to remind myself where the tools sit and what they offer. So that's where I see the landscape. It's constantly evolving, so like any of these diagrams I create, I iterate over time. I shared the diagram with the community on LinkedIn; it got quite a bit of interest, and I created a video as well explaining how I understand the diagram and the tools. So yeah, this is how I see it.
It's a constantly evolving beast, as we said. But yeah, quite interesting to see where it's going.

00:09:40 Mark Smith
Let's get the links in the show notes for this show to your video and your diagram, because sometimes I wonder if Microsoft doesn't create a rod for their own back. This started way back. There used to be a product Microsoft had. I don't know if you remember. It was called Lync. Do you remember?

00:10:02 Dani Kahil
Yeah, before Teams. Before Teams. Keep going. Before Skype.

00:10:05 Dani Kahil
Before Skype.

00:10:06 Mark Smith
Before Skype, right?

00:10:07 Dani Kahil
Yes. Yeah.

00:10:09 Mark Smith
And so they buy Skype, a consumer-based product, and then what did they do to Lync? What did they rename it? Skype for Business. Right? Created a whole bunch of confusion. It was interesting: I got notified, maybe at the tail end of last year, that Skype was being closed down as we know it. And then you get a new Windows 11 install, and what has it got on it? It's got Teams, but oh no, it doesn't take a commercial, sorry, your business e-mail address. You need an account. Like, I have an account, I just gave you my M365. Oh no, because you've got Teams consumer and Teams for business. Confusion. And then we have a million copilots. And the fact that you've had to draw a detailed diagram frustrates me so much, because you're just constantly explaining to customers: no, sorry, you're on Chat, you're on M365 Copilot Chat, you're not actually on the full M365 Copilot.

00:11:06 Mark Smith
Oh, no, sorry, you're on Copilot consumer. You know, and I tell you what, something that has just ground my gears in the last couple of weeks: on my phone, maybe two months ago, I removed the consumer version of Copilot, because I'm just like, it's confusing, right? I'm just using M365. And then I get these notifications: hey, tell your colleague a joke with Copilot. You know that you can do a drawing? And I'm just like, none of this is business-related, and you're advertising stupid use cases. And you wonder why. I just presented to the Australian user group on Copilot, and when I was doing my research, the global adoption of Copilot is 3.3%.

00:11:58 Dani Kahil
Yeah, I saw this on LinkedIn. I think you posted it on LinkedIn as well.

00:12:01 Mark Smith
I didn't post it. Oh, it wasn't me who posted it. I might have shared it or had a comment on it. Yeah.

00:12:09 Dani Kahil
Oh, yeah.

00:12:09 Mark Smith
And I'm like, you've created that beast, you've created that confusion, because people are just absolutely confused. Someone had to tell you, hey, Dani, you're using the wrong Copilot. We shouldn't have to give that level of instruction to everybody, because we've just named everything the same thing. And the fact that there are all these individual licenses for, you know, Copilot for Sales and stuff, it's like, why aren't we just licensing tokens? That's it. Just everything; forget the UI layer. As you were describing that diagram, I was like, yeah, that's absolutely what's needed. I could see myself standing in front of CEOs with your diagram up just to explain the bad marketing message, just to get them clarity before we actually look at, hey, how do we solve some of your business challenges? I find that interesting. And also, what I found interesting in what you said: you didn't mention the word agents. And that's another whole can of worms now that we have to look at in this context. What are you doing in the agent space?

00:13:21 Dani Kahil
Yeah. So I'm doing a lot of work in the AI space. Let's call it that way, right? Whether you call it an agent or not. For me, the word agent is so blurry as well. People use it all over the place, I find.

00:13:41 Mark Smith
You need to do a chart. You need to do another chart and work it out.

00:13:46 Dani Kahil
Probably. But there is a scale. Like, what do you call an agent? When do you start calling it an agent? When do you start calling it a copilot? When is it an automation that has a bit of AI in it?

00:14:02 Mark Smith
Yeah. This is so important. This is why you need to do a diagram on it. You know what? It's become clear to me in the last three weeks what I am defining as an agent. One of the things I do when I build an agent, and I've built close to 15 in the last couple of weeks, is that I give it a full job role description. Who do you report to? What are your OKRs? What does good look like? What are you not allowed to do? And this is what I expect from you. This is your job role definition, and it's like two or three pages long. I'm starting to think about agents much more as a role, not just a workload. I think what's happened is that we've been sold agents as doing a single thing, like, as you say, a bit of automation, a bit of data enrichment; a little step in the cog of a business process. Whereas I've started looking at them as roles and job functions. That's allowed me to put skin on the bones of an agent much better and get much better outcomes from the agent than when I was just trying to go, hey, what's this business process I want to apply AI to? I take it back a step and go, what would a person be wanting to do in this situation? And then I build my agent around that.
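The "job role description" framing Mark describes could be sketched as a structured brief that gets flattened into an agent's system instructions. This is only an illustrative sketch, not any real Copilot Studio or Agent Builder API; every field name and the `build_system_prompt` helper are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentJobDescription:
    """A 'job role' for an agent, mirroring how you'd brief a new hire."""
    title: str
    reports_to: str            # who reviews this agent's output
    okrs: list[str]            # what "good" looks like
    allowed_tools: list[str]   # boundaries: what it may use
    forbidden: list[str]       # what it must never do
    expectations: str          # the detailed role description

def build_system_prompt(jd: AgentJobDescription) -> str:
    """Flatten the job description into system instructions for the agent."""
    return "\n".join([
        f"You are the {jd.title}. You report to {jd.reports_to}.",
        "Your objectives: " + "; ".join(jd.okrs),
        "Tools you may use: " + ", ".join(jd.allowed_tools),
        "You must never: " + "; ".join(jd.forbidden),
        jd.expectations,
    ])

# Hypothetical example role, echoing the university use case later in the episode.
jd = AgentJobDescription(
    title="Scholarship Application Triage Agent",
    reports_to="the admissions team lead",
    okrs=["Extract applicant data accurately", "Flag incomplete applications"],
    allowed_tools=["document_reader", "record_updater"],
    forbidden=["approve or reject an application on your own"],
    expectations="Start with one document type; escalate anything ambiguous.",
)
prompt = build_system_prompt(jd)
```

In practice this kind of brief would be pasted into (or generated for) whichever builder you use; the point is that the role, reporting line, and boundaries are written down before any tooling is chosen.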

00:15:31 Dani Kahil
I think the key phrase you said is: what would a person do in that role? How I see an agent as well is almost: treat your agent like a co-worker. It's a co-worker you have to give, as you said, a role, the detailed instructions, what they can do, the tools they can use, the boundaries within which they operate, what they're not supposed to do. Another thing we should add: like any junior co-worker you onboard, agents need supervision, feedback, and evaluation, right? Those are all the components, and it starts to be pretty broad. You start to see that an agent is not a simple little automation in your workflow. You have to spend time defining, evaluating, testing, hearing the feedback from your users who are using and collaborating with your agent. You need to spend time defining all this. And unfortunately, the frustration I sometimes have with Microsoft is how they sell those tools: in two minutes, you can build an agent, right? And it has been this way for years. In two minutes, you can build an app. Now, in two minutes, you can build an agent. And it's just not the reality. I mean, it is, but it's useless. It's a useless agent.

00:17:09 Mark Smith
It's great as long as you've got lots of smoke and lots of mirrors, right? The two-minute build. Because what they're saying, reading between the lines, is: you don't have to worry about ALM, you don't have to worry about release rigor, because you can just build it in two minutes. And that's not the reality of business. Even if you could build it in two minutes, it'll probably take two weeks to release, because you're going to do security validation, you're going to do all your standard types of tests so it's not going to break anything, it's not going to kick a can down the road in your infrastructure, and you're going to go through a release cycle, et cetera. But it is a common conference demo, right?

00:17:57 Dani Kahil
Yep, exactly. So this is where, look, the work we're doing: we work a lot with universities and higher education in Australia, right? There's a use case we adopted last year, when we started really testing those agents, around a lot of processing of documents, especially in applications from students. Students make all kinds of applications, right? They apply to study, they apply to get credits when they move from one unit to another, they apply for scholarships, they apply for government-supported payments that the uni offers. All kinds of different applications. And guess what? They have to fill in details and submit documents, all kinds of documents, right? So what we found very quickly was a lot of normal users spending a lot of time reading those documents, in all shapes and forms, all kinds of data, spending time understanding and reasoning through the documents, and retyping that into an Excel sheet or a Power App. A great use case we found is having very dedicated, very tailored, narrow agents, where an AI agent is very specific: reading, understanding, and reasoning over one type of document. That is one agent, right? So you have an agent that extracts the data and classifies the data. And once we mastered this, once this agent was built and we'd tested it out and it worked well, we added one small extra task to the agent. It's almost like onboarding a co-worker, right? You start small; you give them a small piece of work. Once they've mastered that, you add another small piece of work on top of it. So the next piece of work was: here is a matrix to determine the authenticity of the document; match all this data against that matrix to determine how genuine the document is, to make sure the student didn't forge a document or whatever, right?
So that was the extra layer, the extra task, that we added. And the final, third task we asked the agent to do was copying the data, creating or updating records for a human, so that instead of retyping what is in the document, everything is already in fields in the Power App, right? And that was the extent of the agent, to be fair. And we added a feedback loop in the UI. Whenever the agent made a mistake, a wrong name, or provided the wrong recommendation or whatever, the user was able to provide feedback to say the AI agent made this mistake, the AI agent's output was incorrect, with a box so they could give us the details, right? And over a period of time, over really six months, we collected all the feedback from the unis. From the very beginning, we set the expectation upfront with the unis. We said, look, this is still experimental; the AI agents are there, but we need your feedback all the way, right? And as we set the expectation this way, we really asked them to provide feedback through that mechanism, that description box. Over time, we managed to really fine-tune those AI agents, but fine-tuning really means the prompting and the instructions for them. Because that's what you do, right? You fine-tune the instructions, you add more details, you remove some of the instructions, like with a human. The human makes a mistake, and then you fine-tune the instructions. You tell them: here is what I asked you to do, but maybe do it a little bit differently here, or you add a little bit more detail. Once they've gone through a first series of tasks, you can give feedback and a recommendation and fine-tune their instructions. Look, that's what we did. It worked pretty well. So yeah, we're looking at exploring other areas within the unis this year where we can use AI agents in that space.
Yeah.
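The staged pipeline Dani walks through, extraction first, then an authenticity check, then record creation, with a UI feedback box, might be sketched roughly like this. The agent calls are stubbed with fixed values, and all function names and fields are illustrative, not a real Copilot Studio or Power Platform API:

```python
# Rough sketch of the document-processing flow: extract -> authenticity
# check -> pre-filled records, plus a feedback channel whose contents are
# later used to refine the agents' *instructions* (prompts), not the model.

def extract_fields(document: str) -> dict:
    # Stub for the narrow extraction agent (one agent per document type).
    return {"name": "Jane Doe", "doc_type": "academic_transcript"}

def authenticity_score(fields: dict) -> float:
    # Stub for the second task added later: score the document against a
    # genuineness matrix to catch forged documents.
    return 0.97

def process_application(document: str) -> dict:
    # Third task: turn the agent's output into pre-filled record fields so
    # a human reviews them in the Power App instead of retyping everything.
    fields = extract_fields(document)
    score = authenticity_score(fields)
    return {**fields, "authenticity_score": score, "needs_review": score < 0.90}

feedback_log: list[dict] = []

def record_feedback(record: dict, comment: str) -> None:
    # The UI feedback box: collected over months, then fed back into the
    # agents' instructions during fine-tuning of the prompts.
    feedback_log.append({"record": record, "comment": comment})

record = process_application("scanned_transcript.pdf")
record_feedback(record, "Name extracted correctly.")
```

The key design choice mirrored here is that each stage stays a separate, narrow responsibility, and the plumbing between them is ordinary automation rather than agent-to-agent orchestration.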

00:22:10 Mark Smith
It's such a good practical use case, and obviously, I know you've spread this out across multiple universities now because it's getting results, right? People are going to buy results when you can deliver them. One of the things I've heard you talk about is the minimum viable agent, whether it's Agent Builder, Copilot Studio, or Foundry. How do you prevent scope creep with agents, given their very non-deterministic nature, and keep things on point?

00:22:38 Dani Kahil
Yeah. So a little bit back to what I just said, right? Start with a very simple instruction at the beginning. A very simple instruction, a very simple task to accomplish first. Break down your job description into smaller chunks to get started with. That's my minimum viable agent, right? Then you have to evaluate how it works in real-life situations, evaluate how the agent works, and provide feedback, where the feedback takes the form of fine-tuning the instructions. And it will never be 100%. That's the nature of generative AI. It's never 100%. Otherwise, you go to automation. This is the expectation we're setting with our clients, at least. It will never be 100% correct. It can be close, but there is always a chance it makes a mistake. So once it has mastered this to a level of 90, 95% accuracy, and you're pretty confident with what you have, you add another task if possible, another smaller task for the agent. Once you're pretty happy with what the agent does, which is a set of instructions and maybe a tool or two as actions it can take, I would really limit it to that. That's my minimum viable agent. If I need something else, then I build another agent. And I start having a series of different agents working. And look, I haven't really looked at multi-agent orchestration yet, to be fair, but that's probably an area we want to take a look at this year. For now, we're calling those agents from automation: the automation calls the different agents to do those different pieces of work, right? What I'm hearing is there's value in having a centralized, master agent that can orchestrate all these agents. I haven't managed to make that work in a real-life scenario yet.
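The minimum-viable-agent discipline Dani describes, master one task to roughly 90 to 95% accuracy before adding the next, can be expressed as a simple evaluation gate. This is a hypothetical sketch: the `run_agent` stub and the evaluation cases are invented for illustration, and a real setup would call an actual agent endpoint:

```python
def run_agent(task: str, case: dict) -> str:
    # Stub standing in for a real agent call; it echoes the expected label
    # so the sketch stays self-contained and runnable.
    return case["expected"]

def accuracy(task: str, eval_cases: list[dict]) -> float:
    """Fraction of evaluation cases the agent gets right on this task."""
    hits = sum(run_agent(task, c) == c["expected"] for c in eval_cases)
    return hits / len(eval_cases)

def ready_to_expand(task: str, eval_cases: list[dict],
                    threshold: float = 0.90) -> bool:
    # Generative AI is never 100%: gate on a realistic threshold instead,
    # and only then add the next small task to the agent's scope.
    return accuracy(task, eval_cases) >= threshold

# Hypothetical labelled cases for the first task in the agent's "job description".
cases = [
    {"input": "doc1.pdf", "expected": "scholarship"},
    {"input": "doc2.pdf", "expected": "credit_transfer"},
]
expand = ready_to_expand("classify_application", cases)
```

The same gate can be rerun after each instruction change, which is what makes the feedback-driven prompt tuning measurable rather than anecdotal.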

00:24:55 Mark Smith
Nice. What are you most excited about in 2026 from a, you know, career tech landscape perspective? What, you know, what are your thoughts?

00:25:08 Dani Kahil
Yeah. Look, exploring. I love this new era. I love learning and experimenting, too. So I'm very excited that this is happening now. I know it can be confusing and a bit scary to some people. Sometimes I get a bit worried too about where this is all going, when you hear on social media about AI and, you know, the governance of AI and what's happening. But overall, I'm a very optimistic person by nature. I prefer to believe in positives rather than negatives. So I'm just excited about learning where this is all going. And in a sense, because we've had a few years now of AI agents starting to be part of our work, and of the whole world in general, I think we now better understand where their capabilities are and where their flaws are. It was a little bit blurry for me last year to really understand where exactly I want to use an agent, am I using an agent in the right use case, and so forth. I understand this better this year. So I'll really try to push the boundaries of what I did last year and experiment a bit more with agents and AI capabilities this year, to help my clients, but to help with my own work as well. I know, Mark, you shared with me that you yourself are using a lot of AI agents internally to help with your work. I'm experimenting as well: how can AI help me deliver my projects? It's actually one of the topics Hamish and I will present at the Canadian Summit: how to use AI and AI agents to really help with the delivery of projects, requirements gathering, and so forth. So I'm really looking at going a little bit deeper into those topics this year.

00:27:27 Mark Smith
I like it. Dani, it's always great to have you on the show and talk to you. We got some awesome insights.

00:27:33 Dani Kahil
Thank you, Mark.

00:27:34 Mark Smith
All the best for your MVP summit coming up.

00:27:37 Dani Kahil 
Thank you. Excellent.

00:27:40 Mark Smith
Hey, thanks for listening. I'm your host, Business Applications MVP Mark Smith, otherwise known as the NZ365Guy. If there's a guest you'd like to see on the show, please message me on LinkedIn. If you want to be a supporter of the show, please check out buymeacoffee.com forward slash NZ365Guy. Stay safe out there and shoot for the stars.