

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
This episode explores how AI, agents, and prompting are reshaping how software and business solutions are built. Mark Smith and Keith Atherton discuss the shift beyond traditional low-code towards natural language, agent-driven development, where context, outcomes, and governance matter more than interfaces. They examine why developer fundamentals still matter, how generative AI accelerates delivery, and what new makers should focus on as Microsoft investment, tooling, and certifications pivot towards agentic and AI-first approaches.
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/818
🎙️ What you’ll learn
- How agent-based development changes the way applications and systems are designed
- Why context engineering and outcomes engineering matter more than short prompts
- When low-code tools help and when direct code or CLI is faster
- How experienced developers gain leverage in an agent-driven world
- Where new makers should focus their learning as platforms and certifications evolve
✅ Highlights
- “Beyond that hype, now organizations and customers, I can see them getting more mature and ready to actually adopt.”
- “We’re designing for API efficiency for AIs, not just human interfaces anymore.”
- “You wouldn’t have to put anything in your system that isn’t explicitly needed.”
- “If you can build something in a few minutes, why spend hours dragging and dropping?”
- “Context engineering is the next step beyond prompt engineering.”
- “My prompts are not one or two sentences, they’re hundreds of lines of data.”
- “It’s almost cumbersome today to go through visual steps when CLI is just speed.”
- “It could be seen as a bit of a superpower having that developer background.”
- “You need to know what good looks like to hold agents to account.”
🧰 Mentioned
- Microsoft Power Platform: https://www.microsoft.com/power-platform
- Copilot Studio: https://www.microsoft.com/microsoft-365-copilot/microsoft-copilot-studio
- ChatGPT: https://chatgpt.com
- Visual Studio Code: https://code.visualstudio.com
- Power Automate: https://www.microsoft.com/power-platform/products/power-automate
- Dataverse: https://www.microsoft.com/power-platform/dataverse
✅Keywords
ai agents, power platform, low code, generative ai, prompt engineering, context engineering, outcomes engineering, dataverse, power automate, software development, cli, microsoft ai
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
If you want to get in touch with me, you can message me here on Linkedin.
Thanks for listening 🚀 - Mark Smith
01:06 - AI Is Past the Hype - Now ROI Actually Matters
02:05 - The End of “Forms Over Data”: Designing for Agents, Not Humans
03:10 - Prompting a Full CRM Into Existence - With Nothing Extra
08:12 - “Low‑Code Is Dead” And What Replaced It Is Faster Than Ever
09:58 - From Prompt Engineering to Outcomes Engineering
24:23 - Your New Advantage: Acting as a Director of AI Teams
32:47 - The Real Constraint Isn't Technology - It's You
00:00:01 Mark Smith
Welcome to the Power Platform Show. Thanks for joining me today. I hope today's guest inspires and educates you on the possibilities of the Microsoft Power Platform. Now, let's get on with the show. Welcome back to the Power Platform Show. Today we're joined by a special guest connecting from Edinburgh in the United Kingdom, where he is moving back to the UK. Well, back to England. All the links for this podcast will be in the show notes. As always, if you want to go deeper, thanks for listening and let's get started. Keith, welcome.
00:00:45 Keith Atherton
Hi Mark, great to be back. Good to see you again. How's things?
00:00:49 Mark Smith
Going well, going well. How's things with you? How's life treating you?
00:00:52 Keith Atherton
Yeah, very good. Very busy, some interesting projects at work, trying to keep up to date with all of these AI changes going on. It seems that every week there's something brand new to learn about. It's great.
00:01:06 Mark Smith
Yeah, as in, I don't know, 2026 seems to be in acceleration mode when it comes to AI. It seems like in the first two months of the year, so much momentum has picked up and changed. What are you seeing?
00:01:22 Keith Atherton
Yeah, I think so too. I think there's a lot of hype to start with. When I say to start with, obviously AI has been around for a long time, but let's say since Copilot Studio really took off; it's one of those early Copilots, you know, strong use of AI for development and for coding. I think as things have accelerated, then we get ChatGPT, and things have just evolved. And beyond that hype, now organizations and customers, I can see them getting more mature; their readiness is there, they're ready to actually adopt. We're actually seeing actual AI projects now, things being put into use, and looking into that return on investment. So yeah, it's exciting times, because we're getting to play with the toys, but actually see them make a difference as well.
00:02:05 Mark Smith
Yeah, the, I don't know, it was a couple of years ago, I think, when AI first started to be on everybody's lips, you know, in Microsoft. And Satya was on a podcast that kind of shocked everybody because, I think, he inferred that SaaS was dying, and SaaS was going to be replaced with AI. And for years we've talked about user interfaces and those experiences, but how is a system redesigned when there's no need for a human to interact with an interface? And what I'm talking about here is, you take everything from APIs to just general software design, I feel like we're moving to a paradigm where we're no longer necessarily just designing for humans. We're designing for API efficiency for AIs, right? And one of the things I notice with the cluster of AIs I have is sometimes they all want to execute multi-threaded at the same time. And then, of course, you get a choke on the AI because, hey, this looks like, you know, a massive spike in behavior, because agents can do things a little faster than us. And so I see a lot of re-engineering of these things. But for the first time in my life last week, I was building a group of agents to do a business function, and there was a need for a CRM. And I've had 23, 24 years now of experience with CRMs. I was able to prompt into reality a CRM, with my knowledge of CRMs, that did only what was required for this task. In other words, it only built what was necessary and nothing else. I'm talking about a full CRM from scratch. And my agents could interface with it, and then when an individual needed it, whatever device they were using for their interface, it would give them the data they needed to do the next step of their job. But the maintenance, the update of the entire system, is happening through enrichment APIs, through various services running in that environment. Checks and balances are in place to make sure record matching, et cetera, is done.
But for the first time, I realize you just don't need to have a traditional forms over data, which is something James Phillips said for years. You don't need it really anymore.
00:04:39 Keith Atherton
Yeah, it's really interesting times, isn't it? Is the audience a human? Is it another agent and you're doing some kind of agent-to-agent? Has it got some direct connection with an API or even an MCP server? There's so many kinds of protocols and ways of communicating, which, as you say, you know, maybe only the humans need this kind of visual abstraction layer that we're seeing in many cases as well. But yeah, it's interesting, very interesting.
00:05:06 Mark Smith
You know, one of the interfaces I was using, the agent kept handing back tables that wouldn't format correctly. So I was able to just say to it, no, that's not the best format for me, you know, it truncates on the screen, and it goes away and fixes it. And, you know, I feel it's like we're going into this era of creativity like never before with agents, in that you can really prompt into being so many things that were perhaps hindrances in the past in the way you worked, and things like that. We're in interesting times. What are you seeing? You know, one of the things, what was it, last year in Vegas, we saw "low code is dead". And I find it interesting because we've had pro-code, we've had low-code, and then there was always the touted no-code. And is it harder to actually use low-code in an agentic world than not? In that a lot of time is burnt on configuring interfaces that now you could command into existence, and the actual code is written cleanly. It's not like Dynamics and the Power Platform, the Power Platform and its predecessor, Dynamics. Microsoft always engineered them to handle every business scenario, so therefore there's always going to be a lot of legacy debt in there, a lot of stuff that we will never use, that will never be needed. Like the old yomi fields and stuff like this that you would see.
00:06:51 Keith Atherton
I thought I heard that for a while. Yeah.
00:06:53 Mark Smith
I never used that in 23, 24 years of ever building a system, a legacy from 20-something years ago in it. Now, you wouldn't have to put anything in your system that isn't explicitly needed. So even the footprint, the efficiencies, must be so much better when you can create the tech just to do what it needs to do and nothing more.
00:07:14 Keith Atherton
Totally agree. I think you're right. And I've seen this from back in the pro-code days, where I was a bit more hands-on with the pro-code: a feeling that there's some systems or some tools, like entity relationship frameworks and things, where there's so much bloat in there. There's the bloat just in case you might need it, or here's several layers of abstraction to get to the bit you need, and you've kind of got all this unwanted, unneeded stuff, which we maybe see with Power Apps and other things, that impacts the load time initially as well. You've got the progress bar, you're waiting for it to load. But if you did something with a code app these days, or vibe-coded an app using some other framework, you can see many of them just load instantly. And that's the kind of difference you can see. As you say, there's maybe this "load in case you need it", this extra baggage. Is it really needed? In many cases it's not, you know.
00:08:04 Mark Smith
So with that, low-code is dead, which I think Charles Lamanna said it on stage or something similar.
00:08:12 Keith Atherton
He did on the keynote, yeah. And I guess low-code, as we know it, had like a little caveat, didn't he? As we know it, is dead, yeah.
00:08:20 Mark Smith
So when you hear that, what do you think and what are you saying? Do you agree or disagree?
00:08:25 Keith Atherton
Yeah, I do agree. And I think time has sort of helped realize that a bit more, certainly over the last few months. So as you say, when it was mentioned at the Power Platform Community Conference in Las Vegas last October, there was certainly a lot more interest in using generative AI to help the making and the development of solutions. And we can see that that's just really accelerated. It's gone wild, to the point of: you pick this model for doing your plan and this model for your build, and you could use it within Visual Studio Code. You could use this here. You could use the Copilots, as you mentioned there, to kind of describe what you want that page for the app, or the solution as a whole, to look like. So I think the maturity of those tools is really coming along. And it just feels like, again, every week, sometimes every day, there's a new bit of news of, oh, watch out for this model because it can do harmful things. Watch out for this model because it doesn't have the same, maybe, ethical standards as other models. So there's all these kinds of tools, and there's so much to keep track of. But yeah, I do think the way we can build with low code is changing. It may even be that you describe it with natural language, either by typing or just through a microphone like we're doing now. You know, if you're able to build something just by describing in a few sentences what you need, and that's just a simple one-shot that I'm explaining there, is that going to save you hours of dragging and dropping, which used to be the traditional way we would treat low code? So yeah, I think it is changing.
00:09:58 Mark Smith
Yeah, I think that, you know, prompting is much more nowadays, I think, than just describing something. I don't know if you've seen, but the next step on from prompt engineering, how do they describe it? It's called context engineering, which is the next step beyond prompting. And it's not that it replaces prompting, but it's where you go to a next level of how your prompts work in the system. And there's actually a level beyond that, which is outcomes engineering, which is a new term that's becoming prevalent just in the last two months. Anthropic has done a bit on this. And it's really, you know, when I look at my prompts these days, they're generally not one or two sentences. I'm talking about prompts that are 600 lines of data, because I am using reverse prompting a lot, where I say to the AI, hey, I want to achieve this, but I don't want you to act on anything. In other words, we're in the plan phase at the moment about what we're going to craft. And I'll go through a series of questions. What are my gaps? Where am I thinking about this wrong? Then I'll go: for what we're discussing, go out and research best practice as of February or March 2026, because I don't want your original training data to influence this, because this space is changing so rapidly. What do the latest docs say in this area? And what I'm getting are these massively detailed prompts, but boy, do they create some amazing things, because of giving it all that context about what I'm doing. Who's it for? Why is it important? Are there any time constraints? That kind of thing. Now you're packing that into your prompt, and you're really, in a way, putting guardrails on your prompt so that it doesn't take off in all different directions, but stays on point about what you're wanting to do. And I'm wondering, you know, you take something like Power Automate, and Power Automate has all these visual steps, triggers, APIs, et cetera.
And I feel it's almost cumbersome today to go through that process, when, if I take something as simple as my home automation system, where I use Home Assistant to run everything, I can either do everything through the interface or I can switch to YAML and just inject the entire YAML requirements. So what I do is go, hey, I go to Copilot, I give it the full YAML directories, and then I give it the full Home Assistant configuration. And then the other thing I have to do is just give it the full list of all my IDs for all my devices. And then I can just talk it into whatever I want, right? And even CLI: I've spent more time in the CLI in the last three weeks than I have in my entire life. I'm loving CLI because it's just speed. You can get so much done so quickly when you don't have the constraints of navigating to wherever that point of change needs to be. And so I've spent a lot of time in PowerShell, and a lot of time in Bash on the Linux side, in what I'm doing. And I'm so pleased that 30 years ago I used to teach a one-week course on Linux, because I'm finding all that knowledge is, like, coming back and being used. As a software developer, you know, you've been a developer throughout your career. Do you find it's almost an unfair advantage now to have that experience as a developer, and then a new super tool to allow you to develop at speeds unthought of before?
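The hundreds-of-lines prompts Mark describes can be sketched as a small builder that assembles goal, audience, constraints, and guardrails into one structured prompt. The section names and the plan-only wording below are illustrative assumptions, not any particular tool's format.

```python
# Sketch of "context engineering": assembling a long, structured prompt
# from explicit sections instead of a one-line request. Section names
# and wording are illustrative, not a specific product's format.

def build_context_prompt(goal, audience, constraints, guardrails, plan_only=True):
    sections = [
        f"## Goal\n{goal}",
        f"## Who it's for and why it matters\n{audience}",
        "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
        "## Guardrails\n" + "\n".join(f"- {g}" for g in guardrails),
    ]
    if plan_only:
        # "Reverse prompting": ask for gaps and a plan before any action.
        sections.append(
            "## Mode\nDo not act yet. We are in the planning phase: "
            "list my gaps, question my assumptions, and check current "
            "best practice before proposing a build."
        )
    return "\n\n".join(sections)

prompt = build_context_prompt(
    goal="Build a minimal CRM for a single intake workflow",
    audience="Internal agents plus one human reviewer",
    constraints=["Only the entities this task needs", "No legacy fields"],
    guardrails=["Stay on the stated scope", "Flag anything out of scope"],
)
```

The "Mode" section is the reverse-prompting step: the model is told to critique the plan and surface gaps before anything gets built.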
00:13:48 Keith Atherton
Yeah, that's a great way of putting it. I mean, it could be seen as a bit of a superpower as well, because when you're familiar with that. I guess it's as if you tinkered with cars or something, and you really knew how traditional petrol-driven cars work with the mechanics: you can lift the bonnet, you can do the things, you can do it yourself, you can do it directly. Or if you had some help from a trained mechanic, you could lift it and, you know, verify the work. Yep, that is what I've just asked for. Yep, that's looking good. So I kind of do like that point of view. I can still be the human in the loop to give the review. So I do find it's a useful skill to have, and it's just by sheer fluke of my background that it's useful there. The funny thing is, we had a chat before as well, that when I first went to low code, and I was kind of new to the platform after years of being a developer, I did find a similar thing. Let's say Dataverse. Okay, so I've been used to writing maybe a handful of lines of SQL script in SQL Server. Here's my table with the columns, the data types, the lengths, and everything I need. Here's my metadata, because it's my other language that I speak, and it rolls off the tongue that I can do it. And then I went to the database IDE in the Maker portal, and it's very intuitive and easy to use. But as you say, something that would have taken me less than one minute to write, it's now going to be 30-plus clicks and it's going to be several minutes. So there were some tasks I found would take me longer, but it's now an abstracted interface that anyone, technical or not, can use. So it's democratized it, which is great. But I'm like, oh, just give me the script. I just want to write the script. You know, I'll do that in a fraction of the time. And then, you know, it's a similar thing with logic apps. Just give me the code view.
And, you know, as you say, now you've got these tools: open up the code view, throw it in there, ask for what you want, import it in, paste it in, and you're good to go. So it's really useful to have that skill set.
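Keith's "handful of lines of script" point can be illustrated against the documented Dataverse Web API, which creates a table via a POST of entity metadata to the EntityDefinitions endpoint. This sketch only builds the JSON payload; the schema name and labels are made up, a real request also needs an Attributes list with a primary name column plus an authenticated HTTP client, and the exact metadata shape should be verified against current Microsoft docs.

```python
# Building the metadata payload for creating a Dataverse table via the
# Web API (POST {org}/api/data/v9.2/EntityDefinitions). Illustrative
# sketch only: no request is sent, and a real create also requires an
# Attributes list containing the primary name column.

def label(text, lang_code=1033):
    # Dataverse labels are localized; 1033 is English (United States).
    return {
        "@odata.type": "Microsoft.Dynamics.CRM.Label",
        "LocalizedLabels": [{
            "@odata.type": "Microsoft.Dynamics.CRM.LocalizedLabel",
            "Label": text,
            "LanguageCode": lang_code,
        }],
    }

def table_payload(schema_name, display_name, plural_name):
    return {
        "@odata.type": "Microsoft.Dynamics.CRM.EntityMetadata",
        "SchemaName": schema_name,
        "DisplayName": label(display_name),
        "DisplayCollectionName": label(plural_name),
        "OwnershipType": "UserOwned",
        "HasNotes": False,
        "HasActivities": False,
    }

# Hypothetical table, roughly the "one minute of script" Keith describes.
payload = table_payload("new_BankAccount", "Bank Account", "Bank Accounts")
```

A few lines of script like this, versus dozens of clicks in the Maker portal, is exactly the trade-off being discussed; the portal democratizes the task, the script is faster for someone who already speaks the metadata.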
00:15:46 Mark Smith
Canvas apps, as we know them, and the process of developing them, has that just totally transformed now? Once again, the speed, the accuracy with which you can build a canvas app, would you want to ever be in the, you know, drag-and-droppy type of experience?
00:16:06 Keith Atherton
It's a really good question, because I've been watching some videos on the Power Squared channel. They're doing some really cool stuff where they've been diving into code apps. So, say, it will be a Power App, but you've gone and built it using Visual Studio Code. You kind of spin a few things up. As you mentioned, there's the pac CLI, and there's other CLI tools you can use now to create things. And, say, if you can just describe what you want, it generates it and creates it really quick. You do have to think, well, if I can do that in a few minutes, why would I have spent a few hours, or maybe days, dragging and dropping through the interface? So again, back to Charles Lamanna's statement there: the nature of the process of low code as we knew it is dead, because there's a new way of low-coding it. Or maybe it's just a new way of building. Maybe that's a better phrase. It's an interesting time for sure. There's a lot of changes happening.
00:17:02 Mark Smith
Let's talk about those changes, the broader changes. in my mind, I think of tools that have been around for years that enabled consultants and makers to build solutions. And I'm thinking of XRM Toolbox. Are tools like that becoming less of a priority when you can, once again, create whatever you need, solve whatever problem, run analysis any way you need, shift data all through prompting. What are your thoughts about, you know, are there tools that you're finding you're using less now?
00:17:42 Keith Atherton
That's a good one as well, because, as you say, XRM Toolbox is fantastic. There's so many great contributors from the community. Like, say, FetchXML Builder, wonderful tool. You know, we know the wonderful people involved working on that. But if you could use a tool to just describe what you want, instead of opening up the toolbox and signing in, there may be ways you could use certain tools where you could maybe interpret or kind of translate your requirements and build it just within generative AI somewhere. And then, I'm trying to think if there's other uses as well. And, you know, I do like things like the mockery data muncher. That's great because that's now connecting to the systems. It's connecting to the API to kind of mock up the randomized data, and then connecting to Dataverse to insert the data. So it's got the integration built in. So there's that convenience of the work it's doing, as well as the connections it's got. Of course, you could do it with generative AI, but that's just ready to go. So maybe there's examples where, hey, I just want natural language, something really quick, and something like this that's already hooked up and integrated, I'm just good to go here. So you might find a hybrid approach going forward, you know, using each for the right reasons.
00:18:57 Mark Smith
What's your advice to new makers coming to market now? As in, it's changing so much. Like, where would you advise them to spend their time? Because I suppose how we would advise five years ago, I wouldn't be advising now how people learn the tech or engage with the tech or build solutions on the tech. How would you advise?
00:19:19 Keith Atherton
Yeah, there's so many things, and again, it's changing so quickly that I think it's about keeping your finger on the pulse, seeing how these changes are coming out and what's changing. So you had a good example early on; I used to have quite short prompts as well, but when you develop them further, even using things like the Copilot prompt coach, there's many tools that can help you develop those prompts yourself. Things like that are really useful to see, well, what has it included in the prompt when it's actually elaborated more on what I'm looking for? Things like that are useful. How to use these tools to help you with building. Another thing that I found useful as well: there was some news very, very recently. In fact, I think it was today, the blog post came out from Microsoft that many of those PL exams are now officially due for retirement, and they're going to be replaced by some AB exams, like Intelligent App Builder and so on. So again, that kind of focus on generative AI, whether it be generative pages, code apps, or plan designer, many other ways of "describe it and it builds it for you", is probably going to be the default way to go, because it's so quick to market to get what you need. And then you can tweak and review and do what you need to do after that, or have a conversation with the prompts to refine that solution. So I think if that's where Microsoft investment, exams, and certifications are going, that's probably the wind of change that we need to follow, to an extent, as well.
00:20:44 Mark Smith
Yeah. It's definitely, you know, I've always felt through my career, I've always observed where Microsoft was investing and would always, you know, move my career in that direction, because two things would happen. One, you get trained on the new stuff because it's new for everybody, so there's a lot more "how do we bring everybody's skill set up?". And then the second thing is that there's the long tail of money, and it is quite a long tail. Like, I found changes that were implemented in the US earlier in my career would take four years before they really became mainstream in my country, New Zealand. And so I see this opportunity, I think, for folks to really go deep now into AI. In 2026, I don't think you can sit on the sidelines anymore. If you're in our space, you need to go deep, right? And you need to develop those skills that really allow you to move the dial. I think more than ever, you need to have a thorough understanding of software development life cycles, because if you're using agents to do it, you want to make sure you can hold them to account, that you are getting the work that you need done. And just the other day, I built some software via a team of eight software development agents that I have. So I've set up a software team. There's an architect which has a deep architecture grounding, and there's the developer. And by the way, I actually have eight different LLM providers tapped in behind this, so I choose which model to use per function. I then have the QA team. They do regression testing, smoke testing, all the different types of test phases. There's an agent that oversees the entire LLM process and holds everybody to account: check-ins, repos, et cetera. I then have a release agent. But actually, none of them speak to each other. They all go through an orchestrator.
But it's kind of like, if I hadn't had the experience of working with software teams for years, I wouldn't know what good looks like, and therefore be able to ask for that. Heck, I've put in a red-teaming agent, and it deployed PyRIT on my server, which is a free AI red-teaming tool that Microsoft made available to the public. And I'm just doing all that as a default for creating a form on a website to capture some data. But I'm putting that rigor in, and the quality, that's happening. And that's why I say I think you have an unfair advantage, Keith, as somebody who has a developer background, because you know what good looks like. So that human-in-the-loop piece really comes into play, but you can run at a speed that was just unheard of before.
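The agent team Mark describes, where roles never talk to each other directly and everything flows through an orchestrator, can be sketched like this. The role names mirror the ones he mentions; the lambda handlers are stand-ins for real LLM calls, so the whole shape is illustrative rather than any specific framework's API.

```python
# Sketch of a role-based agent team behind a single orchestrator.
# Agents are registered per role and never call each other directly;
# every hand-off goes through dispatch() and is logged for audit.

class Orchestrator:
    def __init__(self):
        self.agents = {}   # role -> handler (stand-in for an LLM-backed agent)
        self.log = []      # audit trail of every hand-off

    def register(self, role, handler):
        self.agents[role] = handler

    def dispatch(self, role, task):
        if role not in self.agents:
            raise KeyError(f"no agent registered for role: {role}")
        result = self.agents[role](task)
        self.log.append((role, task, result))
        return result

    def run_pipeline(self, task, roles):
        # Pass the evolving artifact through each role in order.
        artifact = task
        for role in roles:
            artifact = self.dispatch(role, artifact)
        return artifact

orch = Orchestrator()
orch.register("architect", lambda t: f"design({t})")
orch.register("developer", lambda t: f"code({t})")
orch.register("qa",        lambda t: f"tested({t})")
orch.register("release",   lambda t: f"shipped({t})")

out = orch.run_pipeline("web form", ["architect", "developer", "qa", "release"])
# out == "shipped(tested(code(design(web form))))"
```

The audit log is what lets a human "hold the agents to account": every role's input and output is recorded at the single choke point the work flows through.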
00:23:49 Keith Atherton
Yeah, it really does. And I love that example as well, because you mentioned almost an approach of using agents to replicate an organization, a traditional organization with those different roles and responsibilities. And people with those roles and responsibilities, let's say me, an older, more experienced pro-coder by background, I might be like the Claude Opus of your, you know, your agents there. But you're going to have much smarter models, like researchers. They're going to be doing that business analysis work because they're going to be much better at it than I would be, or than that particular model. So they've got the right tools to do the right job for themselves. I mean, we're probably going to get to the point of naming them and giving them personalities and listening in to the conversations.
00:24:36 Mark Smith
Well, see, that's the thing: part of engineering them, and this is what I'm noticing with using outside-Microsoft tooling, is that soul.md is a core part of each agent, which is where you give them their "who they are" identity. I haven't got to naming them yet, apart from dev lead and architect lead. But the other thing I've done is built into the cycle some real good business analysis up front of anything. So when I go and do the requirements gathering, I once again have it automated to switch into that mode. I give it all my knowledge, and then it comes back to me and goes, hang on a second, we're going to use the double diamond approach, which is really: do we understand what the problem is? And we're going to go through our divergent thinking phase, we'll converge, and then we get really crisp on the problem you're trying to solve. You know, the five whys from Toyota. Is this really the right problem that you're trying to solve? Because what it does is stop you building stuff that's not needed, right? Is there a product already in existence? Is it available at a price point where it's not worth us building it? All these types of validations you can do up front, before you ever hand over to the dev part of the team to actually do anything. And so I feel like you spend less time backing out things that you don't need, because you've actually done a proper, grounded, best-practice requirements gathering up front. And things like scope, budget, quality, you can now programmatically put those into the mix, so that the things human nature kind of lets slip, where projects blow out in length and scope creep and stuff like that, are covered. I mean, you can prompt in scope creep management. How are you going to do it? Like that accountability I had the other day: it refused to do something on ethical grounds, because it had worked out that what I was doing would breach the T's and C's of a company, because it went and read them.
And this is where it would breach them, and it said, do you really want to do that? Do you want that ethically to be on your conscience? This is what the AI said back to me. Because I'd put it into the soul of it, right? That these are the kinds of grounding I wanted in there. And yeah, I think we're in interesting times. What does 2026 look like for you? Where do you think you'll be at the end of this year in your career? What do you think will change or stay the same?
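Baking a shared governance preamble into every agent's system prompt, as Mark describes, amounts to simple prompt assembly. Everything below is a hedged sketch: the file contents, the role names, and the layout are assumptions, not any specific framework's soul.md format.

```python
# Sketch of a shared governance/values preamble prepended to every
# agent's system prompt. The governance text and role identities are
# illustrative placeholders.

GOVERNANCE = """\
Vision: ship only what the problem requires.
Values: honesty, user safety, respect for third-party terms.
Refuse work that conflicts with these values and explain why."""

def build_system_prompt(role_identity, governance=GOVERNANCE):
    # Governance comes first so every agent pre-runs the same "true north"
    # before its role-specific instructions.
    return f"{governance}\n\n# Role\n{role_identity}"

agents = {
    "dev_lead": build_system_prompt("You are the dev lead. Review all code."),
    "architect_lead": build_system_prompt("You are the architect lead."),
}

# Every agent carries the same guardrails verbatim, baked in.
assert all(p.startswith(GOVERNANCE) for p in agents.values())
```

Because the preamble is assembled into the prompt rather than left to each agent's author, an agent can be asked to decline work that breaches the stated values, which is the behavior Mark describes with the terms-and-conditions refusal.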
00:27:19 Keith Atherton
Yeah, that's a good one. I mean, I love the example you mentioned there of it actually reading the T's and C's. If there's 80 pages of T's and C's, how many humans actually do that? But now we have the tools that can do that, will do it, and can summarize it, or check for conflicts and raise them, or think of workarounds for them. You know, so there's so many things that could be useful, where it could pay that attention to detail that feasibly many humans might not have the time or ability for. So I think, going on that kind of theme, going forward, some things that weren't possible will be possible. Some things that we do now will be quicker. And I think, yeah, just as we mentioned before, staying up to date, seeing where the zeitgeist is, where are we going next? What is possible? The frontier, I'll use that term, I know it's used for different things in Microsoft, Frontier, but right at the edge: what is it going to be used for? And it may be that because we can scale up with all these multiple agents, like your example there for the website, for the software, it could be that we almost become like directors of films, where you've got a team of 500 people with specialized units doing specialized things. But sadly, for things like films, if you could do it with a few tools for a fraction of the cost, maybe things might look different in a year or two as well along those lines. But I think that's it. Maybe we'll just become like, I'm not going to use, Mark, stop me from using the term "agent boss", because I don't want to use it. But if it really is, you're a director of a scalable number of agents who can perform these functions, or you can give them direction and say, well, you're a middle manager agent, but I need you to form a team of software developers. And then, as you mentioned before, the double diamond design thinking and the Microsoft Partner Catalyst training, which is great.
If you can feed all that material in, it's going to use that by default. It's going to make apps and solutions secure by default, accessible by default. It's not going to be a conscious process or bolted on at the end. So I think, yeah, just directors shepherding herds of agents doing the work for us.
00:29:31 Mark Smith
A couple of things that you said there about giving them names and stuff. I think that the flip in my thinking came early in, I think, January this year, might have been even the end of December, because I was doing, over the holiday period, I do projects that I just do because I enjoy them, not because I need to get them done, you know. And it became clear to me that we have thought of agents on a too narrow a plane, that they do one task or they're one piece of the business process. And I've started thinking them a lot more around the concept of a role. They have a role inside my organization. There's things that are in my job description that I do. And then there's things that I pick up by being part of this team and, you know, that kind of grounding that's further beyond, you know, the culture of the business. Like I now have, a governance framework which sets the vision and culture of my organization, and all my agents have that in their system prompt. So it means they can't step outside of the kind of ethics, the things that we value in the organization. It actually is in every single system prompt that they pre-run before whatever they're going to do. And that ability to keep true north true north, I think, is pretty amazing.
00:30:54 Keith Atherton
That is really cool, actually. I love the sound of that. It's baked in, isn't it? It's baked into your teams, you can't change it. It's baked in, you're going to use it, and you can always enhance it with things like skills. You know, I've been using some of those recently. Here's my markdown of this skill, follow this pattern, do it this way. Yeah, I love that.
00:31:12 Mark Smith
For the first time in my life, my GitHub... the little green squares that light up on your GitHub profile.
00:31:25 Keith Atherton
Yeah.
00:31:26 Mark Smith
I've got dark green squares.
00:31:29 Keith Atherton
Congrats. So you've got like the bathroom tile effect now, haven't you?
00:31:32 Mark Smith
Yeah. You'll see it starting in February, because there's just so much going on. GitHub's just come into its own for me. It's like a super tool.
00:31:44 Keith Atherton
It's incredible, isn't it? Yeah, it's incredible how fast we can work and, again, what's possible. Take complex coding problems I dealt with maybe three years ago: I'd have to sit and ponder, sketch it out in pseudocode or diagram it on paper, and work through it. And that's part of the fun sometimes, the process of coding and troubleshooting and solving the problems. But now, for good and bad, you can just ask a gen AI and it can help us with it.
00:32:13 Mark Smith
Yeah, we're in amazing times. And really, I feel like I don't have enough time in the day for all the ideas I want to create and the solutions I want to build, there's just so many. In the past, I always had to ask somebody to build something for me, and now that barrier is gone. So it comes down to the hours in my day and the speed I can go at, because there's so much to do and I feel like I'm accomplishing so much. And so I'm glad of the history of the Power Platform and low code. I think Microsoft's biggest goldmine on the Power Platform is Dataverse.
00:32:55 Keith Atherton
Data is where it's at, isn't it? I mean, that's the hub. That's the intelligence. That's what can be used to drive so many things or be reported on. And I think the fact that it's more than just a data store as well, with all the logic and validation and everything. Yeah, there's a lot we can do with it.
00:33:12 Mark Smith
Do you think we'll see Microsoft reinvent licensing in 2026? And what I mean by that is that I feel the only mechanism for licensing nowadays is at a token level. You know, how many tokens do I have for what I can do? One of the conversations I had the other day is that there's so many Copilots and licenses, it's just getting silly. I think there's going to be resistance from organizations to an endless number of SKUs, and people are going to want simplicity. And I think tokens are the simplicity. It comes down to, you know, that is the measurement gate. And for me, it can't happen fast enough. I'm wondering if tokens are going to become the new currency of the world.
00:34:04 Keith Atherton
It could well be. Yeah, it's almost like a sci-fi film, isn't it? I did read this morning about rumblings of a new SKU, a Microsoft 365 E7 license. Many of us are familiar with the E5s and E3s that many enterprise organizations have. There was talk of an E7, which is going to be kind of an agentic-based SKU, which, I don't know, might include the use of those credits. And as you mentioned there, when you use different models and agents, many of us might be familiar, if you're using GitHub Copilot and elsewhere, you might have a multiplier on the model that you use. So are we going to go down that route? Or is it going to be consumption-based, like many Azure services are at the moment? You know, you call this 20 times in a month versus 2,000 times, you pay a different bill. So yeah, I wonder. You're onto something there. That does sound like the way forward, with tokens.
00:35:01 Mark Smith
Yeah. I've definitely hit my token limits on many of my different external provider plans. I've lost count of the times I've gone into extra time, so I drop another $10 for another 30 minutes type thing. And it's crazy, because I'm in the flow, I don't want to stop, even though the counter is going to reset at 4 p.m. or 7 p.m., whatever it is. I'm just finding that the speed I'm operating at, and that context window growing so large with the prompting you do, means you burn through them. And that's why tokens are just the gold mine for me. How many tokens do I get?
00:35:45 Keith Atherton
I should maybe ask my agents a bit more about which model to use based on the multipliers, and which is the most appropriate to be cost-effective right now.
00:35:53 Mark Smith
That's one of the things I built in. I built a model router, and one of the things it does up front, whatever task I'm assigning, is choose the most cost-effective model that's still capable, you know, smart enough for the task at hand. And I've built in a running dashboard so I can see every hour my consumption rate in real time and what models it's pulling from. And then I have a nightly job that runs and checks whether any new models have been released by my 8 different providers. I've got 9 providers now connected to the system, and one of them's a provider gateway, so it gives me access to like everything on Hugging Face, you know. So let's say you're doing something unique with voice. It'll give you the most cost-effective voice LLM, and boom, you can plug into it. But it's building all those frameworks in for your own systems: monitoring costs, monitoring model optimization, security posture. You know, I have a security posture check now that runs every 12 hours, and it's analyzing, based on what's happening out in the market, whether my attack surface is all hardened down. You'd have had to employ somebody to do that kind of work in the past, right? And that would only be for bigger organizations that could afford somebody just doing that. Now it's running in real time. And it's just, yeah, we're in a crazy new world, I feel. In 2025, we sold a lot of the art of the possible, and nothing really worked. And now in 2026, things are working at a speed that's just, you know, melt your face type thing.
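The "choose the most cost-effective model that's still capable enough" routing Mark describes could be sketched as a simple capability-threshold lookup. This is an illustrative sketch only; the model names, prices, and capability scores below are invented, and a real router would pull live pricing from each provider:

```python
# Illustrative model router: pick the cheapest model whose capability score
# meets the task's requirement. All models and numbers are made up.

MODELS = [
    {"name": "small-fast",   "cost_per_1k_tokens": 0.05, "capability": 2},
    {"name": "mid-general",  "cost_per_1k_tokens": 0.50, "capability": 5},
    {"name": "big-frontier", "cost_per_1k_tokens": 3.00, "capability": 9},
]

def route(required_capability: int) -> dict:
    """Return the cheapest model that can handle the task."""
    candidates = [m for m in MODELS if m["capability"] >= required_capability]
    if not candidates:
        raise ValueError("no available model meets the required capability")
    return min(candidates, key=lambda m: m["cost_per_1k_tokens"])

print(route(1)["name"])  # → small-fast  (easy task, cheapest wins)
print(route(4)["name"])  # → mid-general (needs more capability)
```

The nightly "new models released?" job he mentions would then just append entries to this table, and the dashboard would log which entry `route` returned for each task.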
00:37:43 Keith Atherton
It really is. I've been listening to the NVIDIA podcast for years, and I remember even just a couple of years ago, maybe slightly further back, someone was working on something to generate stories or short stories, and it was all gobbledygook; it didn't make sense, it just sounded like gibberish. And these days you wouldn't think twice; you could use a free tool to bang out a novel or something if you really wanted to. It's totally changed. And the way you've described it, again, it's almost going back to replicating an organization, how it works, almost like choosing the right agents. It's doing recruitment, picking the right contingent workers for that role, for that job. And then when they're done, they're done, and on to the next. So yeah, I love that approach. Really cool.
00:38:29 Mark Smith
I love it. Keith, we're well over time. Thanks so much for coming on. I always love chatting with you.
00:38:34 Keith Atherton
Likewise. My pleasure. Thanks, Mark. Keep doing what you're doing. I love this podcast.
00:38:39 Mark Smith
Hey, thanks for listening. I'm your host, Business Applications MVP Mark Smith, otherwise known as the nz365guy. If there's a guest you'd like to see on the show, please message me on LinkedIn. If you want to be a supporter of the show, please check out buymeacoffee.com forward slash nz365guy. Stay safe out there and shoot for the stars.