The AI-Security Tradeoff Every Leader Must Solve

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM 

The episode explores how Chris Stegh sees organisations balancing AI adoption with data security, governance and practical risk management. It covers the real barriers to scaling AI, why perfect data hygiene is unrealistic, and how leaders can use tools like Copilot, Purview and agentic AI to create safe, high‑value use cases while improving long‑term resilience.

🎙️ Full Show Notes
https://www.microsoftinnovationpodcast.com/782  

👉 What you’ll learn      

  • How AI adoption is shaped by data security, quality and internal risk 
  • How to evaluate AI ROI beyond individual productivity gains 
  • How agentic AI supports secure, targeted use cases 
  • How Zero Trust thinking guides safer AI rollout 
  • How enterprise and SMB adoption patterns differ 

✅ Highlights     

  • “Data security is top of everyone’s mind.” 
  • “There hasn’t been a front page news article about data overexposure.” 
  • “AI is a long journey and we are just getting started.” 
  • “They’ll try to clamp down on Shadow AI.” 
  • “No one wants to clean up their data.” 
  • “Agents can give the good, real content front page.” 
  • “Everyone wants to evaluate payback period.” 
  • “Look at the end to end business process.” 
  • “Small and mediums are going to lag in those spaces.” 
  • “Some customers said we don’t see the use.” 

✅ Keywords    
ai adoption, data security, governance, copilot, purview, shadow ai, agentic ai, zero trust, productivity, smb, enterprise, risk management 

Support the show

If you want to get in touch with me, you can message me here on Linkedin.

Thanks for listening 🚀 - Mark Smith

00:00 - Why AI Adoption Starts With Data Security

02:06 - From LCS to Copilot: Chris Stegh’s Microsoft Journey

03:47 - The Real Risk: Internal Overexposure, Not Public Breach

05:58 - Why “Perfect Data Hygiene” Is a Myth

08:25 - Agentic AI as the Fastest Path to Safe, High‑Value Use Cases

11:10 - Red Teaming and the Hidden Security Gaps Leaders Miss

12:39 - The CFO Question: Measuring AI ROI Beyond Productivity

00:00:06 Mark Smith
Welcome to the MVP Show. My intention is that you listen to the stories of these MVP guests and are inspired to become an MVP and bring value to the world through your skills. If you have not checked it out already, I do a YouTube series called How to Become an MVP. The link is in the show notes. With that, let's get on with the show. Welcome to the MVP Show. Today's guest is from Chicago. Chris, welcome to the show.

00:00:40 Chris Stegh
Thank you, Mark. How are you today?

00:00:42 Mark Smith
Really good. Looking forward to this and hearing your story. I always like to start with food, family, and fun. What do they mean to you?

00:00:51 Chris Stegh 
Well, it is about the dinner hour, so you're really putting me on the spot here. Being from Chicago, our famed dishes include things like Chicago-style hot dogs and deep-dish pizza, and of course I try to encourage any visitor to our town to have a little bit of both. It's ironic, though, family-wise: my daughter likes ketchup on her hot dogs, which is kind of an anti-Chicago thing. We're only allowed to have mustard and celery salt and hot peppers and tomatoes. So it's a little bit of a dichotomy that I have there. We like to eat, of course, but we're also fans of Chicago sports. My son and I go to Cubs games, concerts, Bears games, and American hockey; the Chicago Blackhawks are my passion. So a little bit of this, a little bit of that, but the family is moving and shaking. We've got a college junior and a sophomore in high school who's either driving me crazy or I'm driving her around town.

00:01:57 Mark Smith
Nice, nice. Family, so important. Tell me about what you do in the tech space. What's your focus?

00:02:06 Chris Stegh
Well, for the last 18 or so years, I've been moving and shaking with just about whatever latest and greatest Microsoft's coming out with. It started with Live Communications Server, LCS, back in the mid-2000s. I worked for a firm called the E Group, Enabling Technologies, and luckily, as the CTO there, I kind of got to be our chief evangelist and roadmapper. So I've been out front with Microsoft services: after LCS, of course, came OCS, Lync, Skype for Business, and then Teams. Then, about 2015, we started to get the idea that security was going to be a thing, so we started to do Microsoft Intune and then the Defender stack in its various names and incarnations. And then, of course, as everyone did in early 2023, I caught the potential of what ChatGPT and large language models were able to do and got on an early access program with Microsoft for Copilot. So these days, as a Copilot MVP, I live at the intersection of data security and cyber hygiene and the potential of AI, which is a double-edged sword, as you know: you get a lot of upside from AI, but a lot of potential risks and downsides. So I try to help CISOs and CIOs balance that risk, but also make sure that whoever is running their adoption program realizes AI is a long journey and we are just getting started.

00:03:32 Mark Smith
How much is the adoption of AI being affected by the perceived risk around data quality and data security within the organization?

00:03:47 Chris Stegh
Quite a bit. That's a good question. I do surveys whenever I get the chance on the priority of data security versus other types of cybersecurity: identity management, device management, endpoint management, and the like. And it's a landslide victory in 2025. Data security is top of everyone's mind. However, I'm also aware that there hasn't been a front page news article or BleepingComputer article about any sort of data overexposure in Gen AI, except for one early on, where I think someone uploaded something from Samsung and the internet found out about it. So I always try to coach CISOs to be practical. Yes, it's a broad surface area, and it's likely to happen, but as it stands against other sorts of incidents like business email compromise and ransomware, it's a small overall financial risk. So where we see kind of the perfect storm of end-user adoption is where they'll allow Copilot and other AI tools to be used by certain segments of their organization. They'll try to clamp down on Shadow AI, which is the public models and the free tools. But at the same time, they'll try to get things like Microsoft Purview implemented, which helps stabilize the sensitive information, and lately there's quite a lot of interest in SharePoint Advanced Management and content assessments, so that you can really see the overexposed links in the organization. Those can then be handled in parallel: you can either lock out the SharePoint sites that you don't want in the surface area of the discovery and Copilot semantic index, or you can start to take the approach of looking at every file, what's in the file, and using Purview to protect it. Those are longer journeys. And so, as I said, there's an adoption curve where people are starting to use it anyway, but also, in parallel, taking those pragmatic long-term steps.

00:05:58 Mark Smith
When we think of the security perspective, often organizations hold back. I'll give you an example. One CIO said, "We're not starting a Copilot rollout, or an AI rollout, because we know our data estate is in a shocking state of affairs, and it's going to take us about two years to get it to a point we'll be comfortable with." And he explicitly wasn't worried about data being exposed online, to the internet, or to any public models. He was worried about data being exposed internally, from one employee to another who didn't have the role privilege to access that information, and that AI would light it all up and allow everybody to see everything.

00:06:53 Chris Stegh
Yeah, and that's similar to the risks I hear a lot, and why I try to get the CISO and CIO to understand that, yes, that's likely going to happen, but what is the overall catastrophic effect on the organization? I don't dispute that risk, and I do think that parallel motion is important. But I think those that just wait for X years to clean up their data estate are setting false hopes, because no one wants to clean up their data. And IT can't do it. If they could do it, they would do it in an archive or some flash backup copy and move it all offline. But it's the business that has to decide what's relevant and what's the source of truth, and there is no such thing; there are probably multiple. So I think that's a good Zero Trust state of mind. But I also think it's impractical when it comes time to actually getting value from AI. I don't buy into the hype you might hear about how we need to move very, very fast because we're losing ground left and right; I think that's overhyped. But I also think that if you're expecting a world of perfection, then perfect becomes the enemy of good.

00:08:04 Mark Smith
You mentioned things like Purview there and SharePoint Advanced Management. What other Microsoft tools are you using, just from a security lens, when you're looking at getting a customer's data estate into a state that's usable by AI?

00:08:25 Chris Stegh
That's a great question. I'll go in maybe a different direction than the Microsoft security tools, and that's to use agentic AI. The fact of the matter is, if you've got good, known content that you want to expose to a certain app or use case, then using an agent to do so will allow good, viable access to that legitimate data source, but it will effectively ignore everything else, to be frank. So instead of worrying about every nook and cranny in the site and folder structure in SharePoint, you could release an agent, maybe a retrieval agent for HR policies, or, for a regulated industry, one that gives people access to the latest and greatest regulatory documents in a chatbot manner, with good content, without overexposing everything else. And so there's a crawl, walk, run approach there. People think agents are a walk, or maybe even a run, technology. But if there's a short-term use case, they're quite simple to set up, especially a simple retrieval agent with SharePoint agents and the like, and very secure. Meanwhile, as I said, get your other, harder work done on the retention and DLP rules around the other pieces of the estate. So that's one. And then, just back to Zero Trust in general, I think permissions and identity management are key. The real risk of Gen AI, I think, is someone being pwned and having their account stolen, and then the attacker has such easy, unfettered access to whatever that user could see in SharePoint. That's the exfiltration risk that might make the front page news, not some person who sees an offer letter or an HR file that happened to be overexposed. So I think you can handle most of that general-purpose risk with an AI policy, an AI acceptable use document, and some red teaming. There's nothing wrong with a red teamer going in and saying, okay, let's poke around for all the things that may be there that don't hit the sensitive information label or that might be hidden somewhere. There are a lot, potentially, but those are probably low risk overall. In the meantime, agents can give the good, real content front page for those that need it.

00:10:53 Mark Smith
You mentioned red teaming there. As a Microsoft partner, are you seeing that as part and parcel of supporting customers around AI, that you need to build a red teaming skill within your organization?

00:11:10 Chris Stegh
I advise it. I love that feeling of the thrill of the chase myself. But no, we don't see it as a services opportunity, nor are the customers going, oh yes, tell me more. It's always a case of saying, if you're really worried about this kind of overexposure and you've taken some steps to clamp down, here's another best practice: just go around and look. A good example of what they'll find in a venture like that is to search for information about salaries. It's not likely that a big database or spreadsheet of all the executive salaries exists on an unprotected SharePoint site. But it is likely that somewhere in your Microsoft Teams environment, underneath which, of course, is hidden a SharePoint site, there's an offer letter or some benefits information, particular to an individual, that may well have been put someplace that's not locked down, from years and years ago. So it's that kind of due diligence where a red teamer would possibly turn up something interesting, but it's not at all top of mind. It's more of a unique frame of reference that I have as a security guy.

00:12:22 Mark Smith
What are the typical conversations you're having with the C-suite in organizations, anybody in a CXO-type role? What are the patterns you're seeing repeated, particularly in the first half of 2025?

00:12:39 Chris Stegh
Top of mind is what we've been talking about, this trade-off between security and productivity. Secondarily, it's ROI. And it's interesting that most everyone wants some hard metrics. I've talked to some C-suite members who said our CFO will only do AI if it, quote-unquote, bends the cost curve. I've talked to others who have, by virtue of their leader, in many cases the CEO, buying in and saying, we're just going to do it, let's do it right, let's do it. I've seen organizations as large as 2,000 users get fully funded for everybody to use a licensed Copilot service; I think that's 60,000 U.S. a month, so not a small investment. So it really varies. But to a man, everyone wants to evaluate payback period. And so we do surveys such as: how long does it take you to do X before you used Copilot, and how long does it take you to do X after Copilot? With some fudge factor, if you will, there's a time savings there. They all want to talk about that to a certain extent. What I also try to coach them on is to look at the end-to-end business process more than just the individual time savings. If you're in an industry that has a repeatable process, a good example I use is the mortgage industry: how many cycles and days it takes from the day you start a mortgage to the day you close it. If you can look at an end-to-end business process, give everyone in that business process Copilot, let them bang away at the prompts of their choice, and then start looking at how long that end-to-end business process now takes, then you've got a new point of reference for the time savings and the potential business impact. Because the first thing a CFO, or any economist, would say is: okay, if you save 10% of your time, what do you do with that 10% of your time? No one can really say. I've gotten anecdotal evidence from case study customers who said, well, they're just doing more of what they want to do: mentor, be with the customers, or apply their brains to more high-level tasks. You get that kind of stuff. But for a CFO, I think if you look at the end-to-end business process and the time shrink that you can get from AI, then you really can say, okay, mortgages tend to cost us, I don't know, $2,000 to process, and I've saved $500 per mortgage because I've saved 25% of the time. There's a different metric there that I think is valuable to think about.

00:15:14 Mark Smith
On that pushback of what people are doing with the time, because I've heard folks say, hey, I don't want to provide my staff more time around the water cooler, so to speak, right? Like, where's my return on that? But what I took from what you're saying is, if you're taking a $2,000 cost process and reducing it to $1,500, you're already making money there. Do you need to make it somewhere else, like are they getting a higher throughput somewhere else?

00:15:47 Chris Stegh
Yeah, I think that, again, depends on the CFO's and maybe the CEO's outlook. I think if you're in a business that values the employee and the stakeholders, then you don't quite look at it with that lens. A good case in point: we have a nonprofit customer, a multinational non-governmental organization. I can publicly reference it; it's Children International. They're one of the firms that sponsors indigent children in third-world countries, with benefactors in the G7 communities who sponsor a child, interact with the child, and send letters back and forth. A great example of what they did with AI is they sped up the process of this adult-child relationship, where an adult would write a letter to the child and the child would write back. Well, they're in different languages. So back in the day, they had to use a translator and someone scribing the letter; it would then get sent via electronic transmission to the middleman, and there would be weeks of time between a child sending a letter to an adult and the reply coming back. Now they use AI. They upload the child's penned letter to an OCR system in Azure, translate it, and send an email to the adult, and the adult gets it in seconds rather than weeks. That not only sped up the adult-child relationship and improved the mentorship and the frequency of the interactions; it also got people out of the loop. And so what the CIO at Children International tells me is that's really given the employees a newfound ability to do more with the children, to think about what they can now do for these stakeholders and these relationships. So I think if you're in the right culture, then this nickel-and-diming of what they'll do next is irrelevant; you trust that they're doing the things that you want them to. I think some organizations, Mark, might need maybe this Google mentality. I mean, what some are prognosticating, that we'll figure out in years if you listen to the AI labs, or decades if you're more conservative, is that there's going to be maybe a four-day work week or a three-day work week for many people. And so we either have to figure out universal basic income and things like that to help these people survive in their smaller work world, or we have to give them bigger, better, higher-level things to do. And I hope it's the latter, because I don't think our governments are going to figure out the former any time soon.

00:18:33 Mark Smith
What are your views, with your exposure to this whole AI space since it's taken off? Where do you see us at the end of 2026, into 2027? What do you see as the progress or speed, particularly around business adoption and change, because it's a fundamental shift in the way business is done? What do you see happening in that kind of two-year timeframe?

00:19:06 Chris Stegh 
I think it's going to be very dependent on the size of the business and the amount of resources they're committing to it. I work in the small and medium business space. I do surveys of our customers, most frequently when we do webinars about Gen AI, Copilot, and the like, and I just released a white paper that articulated that for these early adopters, their culture is good and their technologies are simple: enable Copilot, put in some agents. But what really lags are data hygiene, to your points earlier, and the trustworthiness of the content and the governance of it. I think small and mediums are going to lag in those spaces; I think they'll always lag the larger enterprise. And so my point, to the question of where we'll be in two to three years, is that large enterprises will have fast adoption and lead the way. I read recently that Walmart and OpenAI are already working on a way for you to transact via the ChatGPT interface to buy stuff from Walmart. So they're out there. The big banks and the like have millions to spend and, frankly, are putting way more AI research into their own internal departments than my company, which is an AI services systems integrator. Having said that, I think the small and mediums are going to be lagging. They're not going to have the clout, the buyers, or, excuse me, the buying intent. And so I think there's going to be a large differentiation. I think the larger companies who are investing are going to continue to be the rich getting richer. I think the small and mediums are at risk, because you hear about this frontier firm and other things Microsoft is touting, and I think the small and mediums are the ones that are going to get overtaken by the upstarts. They don't have the resources, nor the marketing, nor the clout, nor the base of business that the Walmarts of the world have. The Walmarts are going to be fine because they're already way ahead. But I think the small and mediums can be far behind in this space, because I've run into some customers who just said, we don't see the use; nobody's asked us, so we're not going to do anything. And I think unless they have an individual, self-starting, kind of curious leader, and there are some, like the one I mentioned who's invested for everybody, those organizations will find themselves flat-footed at some point.

00:21:37 Mark Smith
Yeah. Chris, it's been really good talking to you. You've got some great insights into what's going on in the market. Thank you so much for coming on the show.

00:21:45 Chris Stegh
You bet, Mark. It's been a pleasure. Thank you.

00:21:51 Mark Smith 
Hey, thanks for listening. I'm your host, Business Application MVP, Mark Smith, otherwise known as the nz365guy. If you like the show and want to be a supporter, check out buymeacoffee.com forward slash nz365guy. Thanks again and see you next time.