Why AI Compliance Alone Won’t Save Your Business
Santosh Kaveti

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

AI adoption is accelerating, but compliance alone won’t protect your business. In this episode, Santosh Kaveti explores practical strategies for managing AI risk, securing data, and scaling responsibly while unlocking new revenue streams.

👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/771

🎙️ What you’ll learn 

  • How to move beyond compliance to build security-first AI systems 
  • Four stages of AI adoption and where your organisation should be now 
  • Why data quality and governance are critical for AI success 
  • How to manage cultural change and workforce fears around AI 
  • Practical steps to create agents and leverage multi-agent ecosystems

👉 Chapters

  • 00:07 Cutting Through AI Hype: The Real-World Impact
  • 03:42 AI Compliance: Why Security Must Come First
  • 06:46 The Four Stages of Enterprise AI Adoption
  • 11:42 Data as the Foundation: From IT Silos to Business Transformation
  • 17:26 AI Governance: Cross-Functional Buy-In and Organizational Reinvention
  • 21:48 The Human Opportunity: Adapting to AI’s Cultural Shift
  • 31:07 The Future: Multi-Agent Ecosystems and World Models

Highlights 

  • “Compliance is a result of really good security hygiene.” 
  • “Organisations will lose more by not embracing AI than worrying about what it might do.” 
  • “Most companies are still illiterate when it comes to wrapping their head around AI risks.” 
  • “Regulation is not moving as fast as it should. And it won’t.” 
  • “Security has to be the foundation and basis for everything that happens in the AI world.” 
  • “Most organisations should be in the low-code, no-code phase by now.” 
  • “You cannot bolt on security after the fact in AI.” 
  • “Everybody in every organisation should have created at least one agent for themselves.” 
  • “AI is not going to work just because you have an AI team deployed.” 
  • “You have to become AI native and then you have a huge opportunity in front of you.” 

✅Keywords 
ai compliance, security hygiene, data governance, multi-agent systems, low-code platforms, ai risk management, responsible ai, enterprise ai, agentic frameworks, synthetic data, ai adoption, ai literacy

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Welcome, everyone, to AI Unfiltered. Today's guest comes from the US, and he's the CEO of ProAsh.

00:00:49 Mark Smith
He's always helping enterprises govern their AI infrastructure, looking at compliance and innovation. Full links are available in the show notes for this episode. Santosh, welcome.

00:01:01 Santosh Kaveti
Thank you, Mark. Thanks for having me. One thing, and this happens to us all the time: even though we're spelled literally as P-R-O-A-R-C-H, we're pronounced "pro-arc."

00:01:15 Mark Smith
Ah, pro-arc. Okay, excellent. Good to know. Good to know. We'll make sure we get the correct links in the show notes. Thank you.

00:01:21 Mark Smith 
As well for this episode. I always like to start with food, family, and fun. What do they mean to you?

00:01:27 Santosh Kaveti
Oh, I'm a foodie, so food means a lot. I love food. I would travel for food; I've gone to places just for the food. Food means a lot. Family is everything. I'm happily married with two kids. They're young, one in high school and one in middle school, so it's fun. And fun? I travel a lot. I love traveling, so I travel according to my own timeline. I've traveled to 43 countries, apparently, and there's a lot more to go. About 280 cities, apparently. So that's my record. But I love to travel. That's my fun.

00:02:07 Mark Smith
I love it that you know those numbers, because I'm the same. I've got an exact record of all the countries I've been to, and I'm at a similar number to you. I always track how many air miles I'm doing a year, across multiple airlines, it doesn't matter. I love data, even just personal data; I find it interesting to know that. Tell me about ProArch. What do you do? What is the organization about?

00:02:34 Santosh Kaveti
So we're a technology services company based out of Atlanta. And we really are committed to making sure that our customers' investments in data and AI are successful. That's the bottom line. I guess the unique thing about us is we bring infrastructure expertise, security expertise, and compliance expertise. So we can cover everything from advising to getting it done, and be there through the entire journey.

00:03:06 Mark Smith
Yeah. Tell me about AI compliance. My observation in the market is that, in some cases, companies have FOMO, the fear of missing out on what AI will bring to the table for their organizations. I won't use the explicit vernacular, but there's also the fear of ****** up, right, that a lot of companies have. They see that this could potentially create a high degree of risk for them, particularly in this era of AI compliance. What are you seeing in that AI compliance space?

00:03:42 Santosh Kaveti
Oh, that's a topic of discussion internally for us and with our customers every day. You know, we have to understand that, yes, AI can magnify the risks, definitely exacerbate them, both bad and good. That's what it's supposed to do, and that's what it really is good at doing. So at the end of the day, you need to be very mindful of your risks. Before we get to compliance, I know some organizations, some industries, are driven by compliance. They live and die by compliance. But the approach we recommend is this: compliance is a result of really good security hygiene. That's the way you should look at it. You can't organize everything by compliance. You organize everything by what you're supposed to do to understand your security risks and have controls in place to make sure you mitigate those risks. Now, when it comes to AI, look, at the end of the day, organizations will lose more by not embracing AI than by worrying about what it might do, mainly because of how fast the technology is evolving. Eight months ago, we were barely hearing about agentic frameworks. Now there are applications of agentic frameworks, multi-agent frameworks, available. So they'll miss out. It's not a matter of if but when, and that needs to be fast too, because you're missing out on a lot. The fear really comes from not knowing your risks. Most companies, unfortunately, even enterprise customers, are still illiterate when it comes to wrapping their heads around AI risks: how to manage those risks, what to look at in those risks. Now, I have to admit, AI risks are also evolving. As the technology evolves, there are new risks, new attack surfaces; new ways of hacking are emerging. So this is a continuous process. This evolution will continue. And we also have to recognize that regulation needs to catch up. Regulation is not moving as fast as it should. And it won't.
That's the bottom line. It can't. And therefore, if you're just trying to meet the regulation, compliance isn't really going to protect you. In the AI-native world, complying is not enough, because of how fast the technology moves and how much damage it could do. Yes, if you screw things up and don't understand your risk, it could create a huge, huge level of damage. Now, the way I see it, Mark, is there are four stages of AI applications. I'm going to stick to GenAI, because for most people, AI these days is GenAI, even though that's not really the case, you know?

00:06:45 Mark Smith
Yes.

00:06:46 Santosh Kaveti
The first is you are using off-the-shelf products, mainly human AI assistants like Copilot, ChatGPT, Claude, whatever. Then you move into your low-code, no-code platforms. You're trying to get certain tasks done, like using Copilot Studio, some agentic work. Then the third phase, I would call it, okay, you're now beginning to create multiple agents. You're thinking about MCP servers. You're thinking about orchestration of some sort, RAG of some sort. Then I would say the last one is enterprise grade, where responsible AI matters to you, scalability matters to you, doing it securely matters to you. When I say responsible AI frameworks: now you can evaluate metrics; there are so many benchmarks and metrics available based on what matters to you. You can look at those, you can establish quality gates, you can actually design for accuracy or creativity, however you want to do it, but do it in a way you can secure, both proactively and reactively. But security has to be the foundation. So in these four phases, right, every phase, wherever you are, typically everyone starts somewhere. Most organizations, honestly, by now should be at low-code, no-code. At least that's where they should be. But unfortunately, they're still in the phase of, okay, I'm still going to chase individual productivity gains by using Copilot or ChatGPT. Most organizations, irrespective of their size, should at least be in that low-code, no-code phase right now. And then you eventually get to that advanced, enterprise-grade stage. But in every stage, you have risks that you have to understand, and there are controls available to mitigate those risks. And you have to bake security into your processes. Unlike before, where bolting on security after the fact, as a reactive measure, worked to a certain extent, that's not going to work in the AI world.
You can't possibly deploy advanced enterprise-grade AI and then say, okay, how am I going to secure this? No, that's not how this is going to work. You have to bake security into everything that you do at every level. From the data level, of course; data is the prerequisite. You start there, all the way up to the risks that AI itself is going to pose. And there are many, many risks that AI itself is going to pose, and they're evolving. So security has to be the foundation and the basis for everything that happens in the AI world. Otherwise, when something happens, it's going to be much worse than before.

00:09:29 Mark Smith
Yeah, I couldn't agree with you more on the importance of that layer and the way people think about addressing AI. You mentioned data there, and I have been involved in conversations where the IT departments don't want to implement AI or make it broadly available inside their organization, something like, as you say, a chatbot of some sort, like a Copilot. They don't want to implement it because they know that their data estate is in shambles. It's accumulated. If we go back 10, 15 years, Microsoft used to talk about the four megatrends that were going to affect business, and one of those four was big data, right? And so organizations have spent almost the last 10 years producing a heck of a lot of data. Not really usable, but they produced a lot of it. Nothing ever got archived off. It just got, let's save it. Now you've got a scenario where the security posture of a lot of these organizations is what I call a walled garden, right? As long as no one can get in, everything is safe. But if somebody gets in, you've got the keys to the kingdom, right? You have access to everything. And I think that security by obscurity no longer works in an AI world inside an organization. Just because your staff didn't know how to find the information doesn't mean it was secure. And so when you have your conversations with the customers you're working with,

00:11:04 Mark Smith
How do you handle that? And I'm talking more from an executive level rather than the tech-stack detail. How do you have those conversations around the data they have, the data they don't have that they need to get, and even where synthetic data fits into the picture? How do they come to realize that data is critically important for anything the business is going to be able to leverage, whether for innovation or as a differentiator, those potential data sets, especially if they've got IP tied up in them and in what they go to market with? How do you have those data conversations?

00:11:42 Santosh Kaveti
Wow, there's a lot to unpack there. Really good question. So you started with IT. Look, I want to make a comment there. One good thing that AI did, as we began to live in the AI-native world more and more, is it really brought that security conversation into business units. Yeah. I've realized that business units now are worried about security. They want to talk about data security, because they're realizing, oh my God, I can use AI, but I'm only as good as my data is. And I now need to really, really worry about my data: the quality of data, the quantity of data, the relevancy of data. The governance of data matters. That's a really good thing, because otherwise these conversations were had with IT or security teams, and they used to manage all of them. Another comment I'll make: we see this all the time, where there is an internal conflict. You have your IT teams and your business teams, and most organizations came up with this brilliant idea, saying, I'm going to have an AI team deployed here somewhere in the middle, and we're going to call it whatever, you know. And now you have your security team, and they obviously cannot get along. They cannot agree. They cannot really come together and say, what is the problem we're trying to solve? We run into that quite often. So we end up becoming the middleman managing all the stakeholders, to say, let's get this down to a simple problem that we're trying to solve. Most often, it's a lack of, again, understanding data risks. What risks does my data pose to AI? Whether it is data retention, data privacy, data security, data poisoning, all of these things matter. It starts there again. That education is super important. AI is not going to work just because you have an AI team deployed or a data science team deployed. AI has very little to do with data science in practical applications.
It's more to do with your day-to-day business users becoming AI savvy. And that education, that literacy, is not that great right now. There's a lot of work to do there. But it starts with understanding your data risks. And that basically means you have to understand first: what is the problem that you're trying to solve? What I've seen a lot of times is that's a simple question, but it's very tough to answer for most business functions. And it goes back to what you said. They've accumulated tons and tons of data. They think that's gold, but they don't know what to solve. So getting those top five business use cases down is a good start. From there, it actually is not that bad. Because once you get people to, okay, these are the five things we're trying to solve, now let's look at your data quality. I know once you go into data strategy and data governance, there's a lot you could do. But we keep it simple. Let's try to define who the users are, who the owners are, who is responsible for the quality of the data, and the lineage of the data. Why does it matter, really? That's where we start for the most part. I hope that makes sense.

00:15:14 Mark Smith 
It does, it does. But it leads to another question, and I try not to be too hard on IT departments, because 30 years of my career have been spent inside them and being part of them. But one of the things with AI is that it's technology, right? So, in some sense, it gravitates to the IT department in an organization. But AI is going to be a reinvention of how any organization ultimately runs. In other words, it's too big a thing to leave a subset of the organization with supreme authority over it. And so I have found that more and more conversations around what's going to happen in AI have moved right up to the executive suite and/or the board inside an organization, because they see it as that strategic. What have you found around those conversations? Where are you typically seeing the buy-in? And I'll give you an example. I spoke to a person at a company who oversaw a lot of AI tech being rolled out. Just for context, it's an Indian company, based in India, and it does about 40 billion a year in turnover. So it's not small, right? It's obviously made up of a lot of sub-companies. What they found is that the board got the CEOs on board, and then it just flowed through the organization.

00:16:49 Mark Smith
In other words, some people say, well, you've got to start at the ground and work up, whereas they were like, no, we need to get every CEO using it, becoming literate themselves, as you talked about literacy before, using it themselves, getting the EAs on board, so that it became how they did their day-to-day. And it created this massive flow-on effect inside that organization. I come back to the question: which executive layer are you getting, if you like, the biggest buy-in traction from when dealing with customers?

00:17:26 Santosh Kaveti
Wow, that's a really good question again. So neither a purely top-down approach nor a purely bottom-up approach is going to work. This is a big deal for organizations: they're having to reinvent how they operate internally, how they make decisions internally, because of AI. And I think that's a good thing, not a bad thing. They're having to completely rethink how they make their decisions. Where it has worked really well for us is when AI is used as not just an efficiency tool. Okay, I'm trying to save some hours here, show me the ROI. I'm not going to sit here and say those things are not important; they're very important. But when organizations look at AI and go, what else can we do for our customers? Is there a new service that I can create? We ourselves probably created 10 new revenue streams leveraging AI. That's the beauty of AI. Okay, that's one thing. Two, it's truly multifunctional. One unit, one department really cannot control this. It's too much for any one department or role. So AI governance requires a cross-functional team. Three, for us at this point, I'm going to be very bold and say this: everybody in every organization should be at least to the point where they've created at least one agent for themselves. And if you haven't gotten that far today, you are already behind.

00:19:11 Mark Smith
Yeah, interesting.

00:19:13 Santosh Kaveti
In my opinion, you're already behind. Whether it's board member, whether it is CEO, it doesn't matter. If you haven't created an agent for yourself, you're missing out on a lot, and you're behind, is what I would say.

00:19:25 Mark Smith
Yeah, that's very interesting, because I think a lot of people would find they're nowhere near that, right? They have not even moved beyond what, let's say, we call one-shot prompting to conversational engagement with AI. Then it brings me to culture, organizational culture, and dealing with fear. I heard this from somebody at Microsoft: when Microsoft Teams went through a massive adoption and utilization spike because of COVID, everyone's world changed dramatically, but you wouldn't have had a single employee go, I'm worried about using Teams because it's going to take my job. Yet the media is constantly reiterating that fear. You know, I saw another stat yesterday from the former CEO of Google, Eric Schmidt, saying that by mid-2026, almost 99% of all code will be AI-written. It's getting that good at what it's doing. And so there is a fear in the world: you're asking me to adopt this technology in my business, but am I training my replacement without knowing I'm training my replacement, right? How do you have those conversations, and how should organizations be having those conversations to bring people on the journey? Because it is a massive cultural change that work is going through: how we do work, and how perhaps even employment is going to work in the future. There's an element of fear and the unknown, and therefore a perceived risk: if I get too involved in it, it could impact me. How do you have these conversations?

00:21:14 Santosh Kaveti
Yeah, those are not easy conversations to have, Mark. But we need to be realistic and practical. Today, at ProArch, are we hiring fewer programmers because AI and automation can do a good bit of what a programmer would have done? The answer is yes. That's the reality, and we have to accept it; there's no going around it. Everybody in the technology space should understand that they have an opportunity here to transform themselves.

00:21:51 Mark Smith
Yes.

00:21:52 Santosh Kaveti 
That's what we try to tell our teams and our customers: you have a tool that's now intelligent and can help you. You can co-create so much with that tool that you can become a superhero. My productivity as CEO probably went up at least 50% to 60%, because I use AI every day, day in, day out, in my personal life as well as professionally. And that's the opportunity. Now, you can resist and worry and fear. You can do all of those things, but you'll be left behind. And yes, there will be an impact. But I also see new roles, new positions being created that we haven't thought of before.

00:22:44 Santosh Kaveti
We're trying to say, okay, in order for us to really embrace AI, now that everybody's using AI in the company, they're beginning to create agents. In fact, we are actually at a state where we have a tool called AI Examine that basically helps companies who want to deploy enterprise-grade applications responsibly, and we have a lot of metrics we can look at from that perspective. So we're doing some really cool, advanced stuff. But at every phase, we're having to create new roles and new ways of thinking: hey, we need someone to do this for us. And this is becoming increasingly important. Now, those who want to get into the technology space have to understand that in this era, you cannot make a career out of one technology. There was a time where, hey, I know Java, I'm set for life; I can be a Java expert, or I know .NET, or whatever. No, that's no longer the case. You have to be very adaptable. You have to be nimble. You have to become AI native. And then you have a huge opportunity in front of you. So there's a real impact, but there's also a real opportunity that's emerging.

00:23:58 Santosh Kaveti
So both of these things are happening. Who will win? We don't know. Time will tell. But this is also a huge, huge opportunity. I mean, just today, we thought of creating a unified AI platform for companies to fast-track their AI deployments. We're thinking beyond AI applications. We're thinking AI ecosystems. What do I mean by that? I'll give you one simple example. We're creating an agent called a customer service agent. It uses every principle that you can think of: responsible AI, explainable AI, governed well, scales well. Now, with that ecosystem, we can launch that CSR into healthcare and customize it. We can launch that CSR into the utility space, the insurance space, wherever. And that's what we'll see happening in the next evolution: these multi-agents will be created and they will talk to each other. They'll be very intelligent, and these ecosystems will begin to form across every enterprise. So imagine the world of all these agents. Of course, we prefer today that they not be fully autonomous yet. We definitely need a human in the loop today. But as they become more intelligent and we let them do more and more, it will be interesting. I'm fascinated by the possibilities of what that could be.

00:25:24 Mark Smith
Yeah, I like that. I like that. Very positive. And there's a lot of opportunity, right? If you really lean into what AI can amplify of you as a person and the skills that you already have at this point, it's massive amplification. Tell me about driving innovation, particularly around things like acquisitions and partnerships. I know that you do investment as well; you're an angel investor. How are you seeing that come together, and how is it driving the innovation that you're seeing at the moment?

00:26:00 Santosh Kaveti
I was lucky to be able to invest in some really, really good companies. One example is Mondra. It's based out of the UK, by the way. They use AI in the sustainability space, and their objective is to bring the entire supply chain, not just the last mile, not just the retailers, but the entire food supply chain, to net zero and make it economically viable.

00:26:27 Santosh Kaveti
And that's what they're using the AI for. They're doing really great. As far as acquisitions and partnerships go, look, we're a strategic Microsoft partner. Everything that we do is in that ecosystem, and we value that partnership quite a bit. We do look for acquisitions, but culture now matters more than ever before. Because of how we are transforming internally as a result of becoming an AI-native company, we can't simply bolt on another company that is traditional in its thinking. It's going to be very difficult culturally. And most of the acquisitions that I see happening today, good or bad, are because of a lack of growth in the technology services space. Growth has slowed down because everybody's rethinking their priorities, right? With AI, everybody's like, okay, what am I supposed to invest in? So they're slowing down their technology investment. Some are taking the wait-and-watch approach. Some are being innovative. And that's prompting a lot of M&A activity as well. Just consolidation, you know.

00:27:38 Mark Smith
Yeah. Yeah, it's interesting. I saw a post from Sam Altman saying something to the effect that a lot of the startups in this space are actually just expensive demos. There is this whole need to come together: you actually need to sell a business beyond just some good demo-type tech. The concept of how a business runs is still in play, still in flight; it still needs to be part of the offering. The last question I have for you is around philanthropy in the tech space. What are your thoughts there?

00:28:17 Santosh Kaveti 
So I'm very close to a couple of foundations that I help out with. What's interesting is the foundations that I work with mainly focus on educating underprivileged children. And it's amazing how we're able to teach them AI. One of the children from one of my foundations recently demonstrated something: she basically made two AIs talk to each other, experimented to figure out which one is better and why, and analyzed what they were really talking about. They were asked to feed a story into ChatGPT and then take that into Meta's Llama. And I'm talking about someone who's in middle school.

00:29:09 Mark Smith
Yeah.

00:29:10 Santosh Kaveti
But that's opening up their creativity. They're able to think, okay, this is what I could do. There is the possibility of thinking like this. I can work with ChatGPT and accomplish certain things. At the same time, we're having to teach them not to become complacent, not to become too reliant on the technology, and how they can really protect themselves, because AI is not a safe space yet, and so on. Now, beyond this, AI can really revolutionize so many areas of the ethical and civic responsibilities that we all have as humans. There are so many other areas where AI could be really influential.

00:29:53 Mark Smith 
I love that. I love that. And I just love that even though AI is so advanced from a technology perspective, it almost has the potential to become a massive amplifier of humanity and the best of what we have and can bring forward. Santosh, it's been so good having you on. I want to wrap with this. As you look at the next 12 months ahead, maybe 18 at the outside, right? Because I don't think anybody has a crystal ball beyond that type of time frame nowadays. What are you most excited about in the near future?

00:30:33 Santosh Kaveti
Two things, actually. One, I am really interested to see full-on multi-agent tech platforms creating these ecosystems, transforming organizations inside out. I think that will unlock another level of economy that we are not realizing today. That's one thing. The second thing is I would love to see some of the work going on in world models come to fruition and come to the enterprise space. I'm really fascinated by what these world models can potentially do, because of the way they can think like humans and reason like humans. What happens if that level of intelligence really comes to enterprises? I'm fascinated by AGI and all that. I know we're not quite there yet, in my opinion at least. But world models, I think, are practical. I'm really interested to see the applications of world models in the enterprise space.

00:31:35 Mark Smith
Yeah, I love it. If you're listening and you haven't heard of world models before, just go and do a quick query in whatever AI tool you use and find out about them. It is amazing. And for all the talk that we're in a bubble or that this is reaching its end point, I do not think we've even really started the race yet. Santosh, it's been fantastic to have you on. Thank you so much for sharing your insights.

Santosh Kaveti
Thank you so much for having me, Mark. It's been fun.

Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thank you for tuning in. I'll see you next time, where we'll continue to uncover AI's true potential, one conversation at a time.

Santosh Kaveti

With over 18 years of experience as a technologist, entrepreneur, investor, and advisor, Santosh Kaveti is the CEO and Founder of ProArch, a purpose-driven enterprise that accelerates value and increases resilience for its clients with consulting and technology services, enabled by cloud, guided by data, fueled by apps, and secured by design.

Santosh’s vision and leadership have propelled ProArch to become a dominant force in key industry verticals, such as Energy, Healthcare & Lifesciences, and Manufacturing, where he leverages his expertise in manufacturing process improvement, mentoring, and consulting.