

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
This episode breaks down why AI governance must evolve alongside agentic AI, drawing on the insights of Matthias Darblade. The conversation explores the EU AI Act, continuous compliance, and why the biggest business value often sits in high‑risk AI use cases. For organisations adopting agents, governance becomes a live system, not a one‑time checkbox, balancing innovation, responsibility, and trust at scale.
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/823
🎙️ What you’ll learn
- Why agentic AI turns governance into a continuous process
- How the EU AI Act affects organisations inside and outside Europe
- Where the real business value lies in high‑risk AI systems
- Why static compliance models fail for AI and agents
- How organisations can govern AI without slowing innovation
✅ Highlights
- “With this black box that needs to be governed and controlled.”
- “It gives you a certain sense of being safe regarding the regulators.”
- “It starts to become a lot, even for a company as big as Airbus.”
- “It’s definitely a live system.”
- “For AI, it will be a bit more complex.”
- “High risk is where most of the value lies for companies.”
- “You’re leaving a lot of money on the table.”
- “It needs to be running and monitoring in real time.”
- “You need to be respecting the law and proceed in a certain way.”
🧰 Mentioned
- ISO/IEC 42001: https://www.iso.org/standard/81230.html
- AI governance: https://www.oecd.org/ai/governance/
- Agentic AI: https://www.ibm.com/topics/agentic-ai
- Copilot: https://www.microsoft.com/microsoft-copilot
- OpenAI: https://openai.com/
- Gemini: https://ai.google/gemini/
✅Keywords
ai governance, eu ai act, agentic ai, ai compliance, high risk ai, ai regulation, autonomous agents, responsible ai, continuous monitoring, enterprise ai, ai oversight, ai strategy
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
If you want to get in touch with me, you can message me here on Linkedin.
Thanks for listening 🚀 - Mark Smith
01:53 - Why Agentic AI Forces a New Governance Model
03:17 - The EU AI Act and ISO 42001 in Practice
05:24 - When Compliance Becomes an Innovation Bottleneck
07:33 - Governance Is No Longer One‑and‑Done
10:24 - Why the Biggest AI Value Lives in High‑Risk Systems
12:06 - Why Humans Cannot Keep Up with Model Change
19:16 - Governing AI Without Slowing Innovation
00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Hello, and welcome to the AI Unfiltered show. Today, I'm joined by Matthias, who's joining me from Monaco, that flash country next to France. We're going to talk about a lot of things today, particularly around the EU AI Act and the impact of governance in AI. Matthias, welcome to the show.
00:01:02 Matthias Darblade
Thank you. Thank you for having me.
00:01:05 Mark Smith
I always love starting with food, family, and fun. As my opening question, so we can get to know you, like, what are you into when it comes to food and family and fun?
00:01:18 Matthias Darblade
Oh, food. I just came back from a trip, so meats all the way: grilled, barbecue, all of that kind of stuff. Family: I have two kids, and the two kids are four, so a lot of work and fun. I don't know, before the kids, maybe playing golf from time to time, but currently it's mostly work and family. The fun will come later on, I guess.
00:01:45 Mark Smith
Yeah, that makes sense. Tell me about the area of AI you're specializing in and why it's important.
00:01:53 Matthias Darblade
So we mostly specialize in AI governance, particularly for agentic AI. Over the past years, we quickly saw the importance of agents, the responsibility they gain, and the amount of tasks you can delegate to them. If you think about it, before, you had all of those workflows that were pretty complex, built to automate tasks. And slowly the boxes in your workflow start to reduce, to the point where some of the automation is just one single box in your new workflow: instructions to an agent, tools connected to the agent, and then the execution happening inside of it. So you remove the cognitive load of creating all of this, but you end up with a black box that needs to be governed and controlled. And that's exactly where we realized there was a gap, and an opportunity to build new technology.
00:02:59 Mark Smith
Okay, so I suppose Europe, when it comes to AI, is known for two major contributions that I think of in this space: the EU AI Act, and then there's ISO 42001. What's your experience with both of these?
00:03:17 Matthias Darblade
We actually started participating in the standardization work, which means that alongside the Act, Europe is creating these norms, which give you implementation details around the AI Act. And if you follow those implementations, you are deemed compliant until they have proof that you are not, which gives you a certain sense of being safe regarding the regulators. But at the same time, there are a lot of delays around that. Just this week, we were waiting for a better definition of what's high risk and what's low risk, and it didn't come. And looking at a couple of the previous texts, there's some missing information that would be needed, which puts this tight schedule at risk, or definitely makes it a risky schedule. So from our perspective, we're watching this very closely, because one of the things we want to provide to all of our customers is a solution around all of those requirements. But at the same time, it's constantly changing, and there are deadlines coming very fast in August 2026, which might get changed, but so far they have not. So we'll see. There's still a lot of unknown on this.
00:04:51 Mark Smith
It's interesting that you say the deadlines might change, because that's been said quite a few times about the EU AI Act, and yet they seem not to change them and keep saying, no, these dates are set, right? Even though I feel sometimes the market wants them to be a bit more flexible, because there isn't, you know, full ratification of things like the risk classifications. But what's the biggest concern you're getting from customers regarding their position under the EU AI Act?
00:05:24 Matthias Darblade
So the biggest and loudest complaint we heard was actually from the French CEO of Airbus, who went, not testifying to Congress, but to a European equivalent, and complained that when you want to build a piece of software and implement AI in it, you have the Cloud Act, you have the Data Act, you have the EU AI Act, and it becomes a lot of work, particularly for the governance part of the company. You go to the company's lawyers, and they start receiving requests about the implementations the teams want to do. Those binders start to pile up, and they don't have time to answer the questions and at the same time keep up with the new regulation. It starts to become a lot, even for a company as big as Airbus. And most of the rest of the market is in a similar situation: it's tricky to innovate, particularly in big companies.
00:06:33 Mark Smith
So what's your advice to them? I would have thought that a lot of companies address governance in a semi-static way: they do an assessment, then move on until something flags that they need a reassessment. In the age we're in, with more and more intelligent systems potentially becoming autonomous to some degree, do you see a future where governance could ever be one and done? Or is it going to be a constant evolution: refining and updating your governing principles, particularly around AI, and reassessing whether, one, your staff are capable and trained on the technology, and, two, how it affects your customers? How are you advising organizations to handle governance around AI?
00:07:33 Matthias Darblade
Yeah, it's a good point. If you look at the Act, and also the California bill around AI, there's a strong push for doing pre-deployment assessment, ensuring that your model meets the minimum baseline and has the guardrails that are needed. And there's also a strong post-deployment monitoring assessment that needs to be done from time to time. We don't have a definition of exactly how often it needs to happen, but the documentation needs to be kept up to date. Assessments need redoing, and people need to know about AI: the humans providing oversight need to be trained to understand the systems. So it's definitely a live system. And if you look at the governance pushes we've had over the past 20 years, even on the data side more than AI, the trend is toward continuous compliance: ensuring that the infrastructure behind your compliance is sound. We're not looking at your PDF documentation, we're looking at your actual processes. But the difference is that a data pipeline is basically deterministic. You set up the pipeline, you do some monitoring for data quality obviously, but you don't have stochastic components in it. Agents and LLMs give you different results each time. And on top of that, there are all the components you have in traditional AI systems that make it more complex to ensure the quality of the outputs, because it depends not only on the inputs, but also on the distribution of those inputs and their quality. So that's why a lot of compliance software on the market, the traditional GRC tools, typically handle compliance as a form where you check boxes about your controls and implementation.
But for AI, it will be a bit more complex: it's something actually running and monitoring in real time, ready to block an action when something goes wrong. Especially if you want to implement this in a high-risk system, and we can talk about this, but for me, high risk is where most of the value lies for companies. So it's definitely not something where you can say, oh, it's high risk, I don't want to do that, because then you're leaving a lot of money on the table too.
00:10:16 Mark Smith
So that's an interesting statement you've said that most of the opportunity sits in the high-risk related. Can you give me some examples?
00:10:24 Matthias Darblade
Yeah. Typically, if you look at the implementation of AI today, the processes are things like Copilot for users: you prepare some drafts of emails to customers, maybe you save half the time an analyst spends preparing documentation. But you're not automating workflows; you're not replacing the human in 50% of the volume of work they get. Whereas if you manage to automate the AML check and review that people are doing inside a bank, so that instead of spending 5 to 10 minutes of a human's time you spend 5 to 10 minutes of the most powerful AI reviewing all the detail about the person, then you have a process that's more exhaustive than your current process. And it's also way cheaper. Five to ten minutes of a human might cost you between $20 and $100, depending on where you're based. But for the model it's cents: even for the best model from OpenAI, processing one document or one step will cost you between 5 and 10 cents with the latest thinking model, all the more powerful ones, because it's not that many tokens. So the value added is dramatic. We're talking about a 100-to-1 ratio, roughly.
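As a rough illustration of the cost gap described above, here is a hypothetical back-of-envelope calculation. The hourly rate and per-document model cost are assumptions drawn loosely from the figures mentioned in the conversation, not measured prices.

```python
# Hypothetical comparison of a manual AML document review versus an
# LLM-based one. All numbers are illustrative assumptions.

def review_cost_ratio(
    human_minutes: float = 10.0,          # time a human spends per document
    human_hourly_rate: float = 60.0,      # assumed mid-range rate in USD
    model_cost_per_review: float = 0.07,  # ~5-10 cents per document
) -> float:
    """Return how many times cheaper the model review is per document."""
    human_cost = human_minutes / 60.0 * human_hourly_rate
    return human_cost / model_cost_per_review

# With 10 minutes at $60/hour versus 10 cents per document, the ratio
# lands exactly at 100:1, in line with the figure quoted in the episode.
print(review_cost_ratio(10.0, 60.0, 0.10))
```

Varying the inputs shows how sensitive the ratio is: a cheaper reviewer or a pricier model call shrinks it quickly, which is why the economics favor high-volume, high-risk processes.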
00:11:50 Mark Smith
Yeah. So in this constant living governance system, I would expect that it's quite likely that agents are then going to be needed to maintain that system because humans won't be able to keep up with the evolution. I mean, just today,
00:12:06 Mark Smith
I think Gemini released a new model, only a few weeks after releasing another. And let's say you're an organization that uses models from Google: you're going to need to re-validate everything. What happens if model releases come down to weekly, or even more frequent? They almost are now. The overhead to an organization, if they didn't handle it in an agentic way, would be insurmountable, wouldn't it?
00:12:34 Matthias Darblade
Yes, it will, definitely. And there's a strong risk of being out-innovated if you don't do that. The interesting part, and that has been my experience building AI and automation software over the past years, is that you test out a process and try to automate it with one model. Then it doesn't exactly work, or you have to do prompt engineering to make it work. And there are some issues: historically, AI was very bad at reading checkboxes. If you had a checkbox in a file, it was tricky to extract that information exactly, so you had specialized vendors doing just that, and you had to combine those vendors with multiple AIs. We literally used two AI systems, Gemini and OpenAI, to get the results. But now, with the latest models, you don't need to do that, and the infrastructure is way simpler. And you can apply that thinking to most of your implementations: if what you're doing with the current generation of models doesn't work, it's very probable that the next generation will be able to do it, for a fraction of the cost you're looking at. So it's not only the existing processes where you need to stay on top of every release; you also need to go back to your previous failed experiments every time there's a new model, and run them again to see if they work now.
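The practice described here, keeping failed automation experiments around and re-running them on each new model release, can be sketched as a small regression harness. The `Experiment` shape and the `evaluate` callback are illustrative stand-ins, not any vendor's real API.

```python
# Sketch: re-test previously failed automation experiments whenever a
# new model version ships. Names and the evaluate() hook are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Experiment:
    name: str
    prompt: str
    passed_on: Set[str]  # model versions that already met the quality bar


def rerun_on_new_model(
    experiments: List[Experiment],
    model_version: str,
    evaluate: Callable[[str, str], bool],  # (prompt, model) -> meets bar?
) -> List[str]:
    """Return the names of experiments newly unlocked by this model."""
    newly_passing = []
    for exp in experiments:
        if exp.passed_on:
            continue  # already works on some earlier model
        if evaluate(exp.prompt, model_version):
            exp.passed_on.add(model_version)
            newly_passing.append(exp.name)
    return newly_passing
```

In practice `evaluate` would call the new model against a stored test set and score the output; the key design point is that failed experiments are data to be replayed, not dead ends.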
00:13:58 Mark Smith
Yeah. When companies come to you and they're like, hey, I'm concerned about our level of compliance with the EU AI Act, is there a risk that it's going to slow us down from an innovation perspective? How do you go about justifying the EU AI Act to customers and why it's important? Let's say we put the fines for breach to one side. How do you explain its purpose and how it covers customer expectations? The average person out in the market doesn't even understand that a lot of the systems they already work with have had a lot of AI in them for many years, even pre-generative AI. But it's now on their radar. People are concerned about how their data is going to be used by the companies they work with. I was speaking at a conference in Vegas recently, and one of my colleagues said, if clients knew the way you treated their data, they probably wouldn't be your clients. So with all that backstory, how do you land the EU AI Act and its importance to business?
00:15:13 Matthias Darblade
It's actually very timely, with all of the debate over the weekend between Anthropic and OpenAI and the US Department of War.
00:15:21 Mark Smith
Yes.
00:15:22 Matthias Darblade
So you see the big backlash regarding those. Like, I think Anthropic wants to be #1. On the Play Store.
00:15:35 Mark Smith
The App Store.
00:15:36 Matthias Darblade
Yeah, the App Store. Who was it, Lady Gaga, who tweeted a lot about the Max account for Anthropic? Not Lady Gaga, I don't remember.
00:15:46 Mark Smith
I don't know who, but they doubled their subscriber base in the last couple of weeks, haven't they? And then I think OpenAI has dropped; some of the stats say something like 750,000 subscribers have closed their accounts.
00:16:01 Matthias Darblade
It's huge. These are companies built and working at the edge of AI, which means being at the edge of your capital usage and cash flow management; because you want to be the best, you need to be tight on your cash flow management. And then you have this sudden drop. It's not directly related to your question, but it's something to seriously consider, and there's a huge risk of a big backlash. It could even be existential if you extrapolate a bit: you could imagine OpenAI going bankrupt because of a sudden very big drop and the cash flow issue. But yeah, within this backlash, and the public's knowledge, understanding, and curiosity about how AI is being used, there's the EU AI Act. I don't remember if there's anything around military usage of AI inside the Act, because I'm mostly focused on business. But if you run through the Act, you have some prohibited usages that are kind of obvious, like mass surveillance and social scoring of persons, as you have in China, for example. That's prohibited. Apart from that, it's also about ensuring that you don't deceive people: you don't say it's a human when it's AI. Yes. You don't ship black-box systems into decisions that could have an impact on someone's life without the proper control. I wouldn't want my credit score handled by an AI system that no one understands and no one has oversight of. So it's a pretty reasonable set of asks. It's a lot of burden on top of companies, that's true, but it's also their job. You need to respect the law and proceed in a way that's responsible. You're getting a lot of efficiency in your business by using AI, so you need to respect the rules, because these models are very powerful, like those science fiction examples you think about.
If I'd shown you OpenAI on your phone 10 years ago, that would definitely have been science fiction, truly something of the future. So we're kind of living in that now, and at the same time, that justifies all of the protections and requirements.
00:18:42 Mark Smith
Yeah. How prepared or unprepared do you feel businesses are with regard to the EU AI Act? Do you think there's a big disparity between compliance and non-compliance? You know, people went through the whole process with GDPR. I was just talking to somebody earlier today in America who was processing EU citizen data in the US and wasn't doing anything about it from a GDPR perspective, going, well, how does it apply to us? We're not in the EU. There's a lack of understanding that the EU AI Act reaches, just like GDPR does, way beyond Europe in its implications, right? So when you think about compliance, do you think Europe is well on the way to being covered, but a lot of companies doing business with European businesses and European citizens are not addressing this as seriously as they should be?
00:19:44 Matthias Darblade
Yeah, funnily enough, we've received a couple of inbounds from Japanese companies for this exact reason. They might have factories inside Europe, and if they implement AI, or use it for management of their employees, they will have to respect the Act even though they're a Japanese company. But I think there's a lot of freaking out, I'd say. People have been concerned about it. But if you look at the start of GDPR, I think the estimate in the US, by the FTC I believe, was that the cost of GDPR compliance for a small business would be around $1 million or north of that. But if you look at it 10 years later, it's actually roughly between $10,000 and $100,000 in actual cost to become compliant, far less than estimated. So we'll see a lot of overestimation, I think, of the actual costs. But there are some valid concerns: the delay of the standardization and the delay of the guidance on implementing the Act definitely impact businesses. You end up with this broad guideline that you need to follow, without explicit implementation details on what you actually need to do.
00:21:21 Mark Smith
Tell me a bit about what your company does then with clients in regards to all this.
00:21:27 Matthias Darblade
So we do runtime governance for AI: a control plane for the agents you deploy. We sit between the person calling your AI and your AI system, and we control everything your AI does. We check that the input is correct based on your policies, that the output of the model is correct, and that the tool usage within the model is also within your requirements. The way you build up these requirements is a mix of normal LLM-judged categories and policy-as-code type categories for deterministic checks. Our objective is to be the platform you can use to implement those high-risk systems, and specifically one that works for banks, for all of the regulated industries, for pharma. In banks, for example, we're currently working on the anti-money-laundering side, where you usually have alerts that are created automatically for any suspicious activity. Those alerts traditionally have around 95% false positive rates, so you always have a human going through them and validating them. We're pushing regulators across Europe to ask: if we put in an agent that works better than a human, would you be able to validate this implementation of automatically closing some of those alerts? And if not, what is the reason? And if we add all of the features you're asking for, would you allow it? It's a discussion we currently have with regulators, who would explicitly validate a third-party vendor without promoting our company to banks. We try to push those, and the same goes for pharma companies, with their pharmacovigilance process. I don't know if you're familiar, but usually when you put a drug on the market, you get feedback from medical doctors everywhere, telling you: I gave this to my patients and observed these side effects.
And then as a pharma company, I'm responsible for collecting all of those reports and updating the side effects listed on the drug's notice, or even removing it from the market. These are all very high-risk, very manually intensive systems, and humans have historically not been perfect at those tasks either. So can we go further than humans do, through AI, just because we can spend the equivalent of hours of human reasoning for a fraction of the cost, and do those implementations in the most critical functions of society? It seems very interesting, because it benefits the banks, it benefits society by improving those critical processes, and it pushes the regulators. The regulators are actually pushing banks toward implementation, because they want the banks to improve their operations and stop saying, I don't have the workforce to do the implementation you're asking for. Now the regulator can say: you have AI, figure it out and implement it.
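The control plane Matthias describes, deterministic policy-as-code checks on inputs plus an LLM-style check on outputs, with the ability to block an action, can be sketched as a thin wrapper around a model call. Everything here is illustrative: the rule names, the IBAN pattern, and the stubbed classifier stand in for a real vendor's policy engine and LLM judge.

```python
# Minimal sketch of a runtime governance control plane: it sits between
# the caller and the model, applies deterministic rules to the input and
# a (stubbed) LLM-style classifier to the output, and blocks on failure.
import re
from typing import Callable, Dict, List, Tuple


def make_control_plane(
    model: Callable[[str], str],
    deterministic_rules: List[Tuple[str, Callable[[str], bool]]],
    output_classifier: Callable[[str], bool],  # stand-in for an LLM judge
) -> Callable[[str], Dict]:
    def governed_call(user_input: str) -> Dict:
        # 1. Policy-as-code: deterministic checks on the input.
        for name, rule in deterministic_rules:
            if not rule(user_input):
                return {"blocked": True, "reason": f"input failed rule: {name}"}
        # 2. Only then does the request reach the model.
        output = model(user_input)
        # 3. LLM-category check on the output before it leaves the plane.
        if not output_classifier(output):
            return {"blocked": True, "reason": "output failed policy check"}
        return {"blocked": False, "output": output}

    return governed_call


# Example rule: block inputs containing an IBAN-like pattern (hypothetical
# data-leak policy) before they ever reach the model.
no_iban = (
    "no_iban",
    lambda s: not re.search(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b", s),
)
```

A production system would log every decision for audit and run the checks asynchronously where latency allows, but the shape, intercept, check, block or forward, is the same.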
00:24:57 Mark Smith
Yeah, My last question as we wrap up. What's been the outcome? Have you got any success stories with the companies that you've worked through without naming names? But what have been some of your kind of case studies, reference points that you've been able to deliver on thus far?
00:25:15 Matthias Darblade
Well, we're early in the process. We finished the platform in November and started looking for customers in December of last year, so we're talking with a lot of prospects. Both of us founders come from a banking and insurance background, but we've been very surprised by the amount of inbound we've had from pharmaceutical companies across Europe, so we had to figure out a lot of those processes. So far we don't have a customer success story, but what we're doing is bringing the regulators together and figuring out the actual use cases and implementations that can be done.
00:25:57 Mark Smith
Yeah.
00:25:58 Matthias Darblade
And we see different degrees of maturity in the market on AI usage, but there are a lot of companies, even ones that never jumped onto the machine learning wave, that are looking at 2025, 2026 and implementing agents just because of the noise, because of the progress, the fact that it's everywhere. And the fact is, it's kind of magical. If you take an executive who never implemented AI in their company and have them use OpenAI for a bit, they'll quickly realize that the performance they could gain is pretty significant. It can be in software development, it can be in task automation. So yeah, it's a pretty interesting time to be working on AI.
00:26:50 Mark Smith
Matthias, it's been great talking to you. Thank you so much for coming on the show.
00:26:53 Matthias Darblade
Thank you. Thank you so much for having me.
00:26:56 Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thank you for tuning in. I'll see you next time where we'll continue to uncover AI's true potential one conversation at a time.




