Governance That Accelerates Innovation
John Rood
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
John Rood shares how organisations can unlock real value from AI by balancing innovation, governance, and compliance. Learn why robust frameworks, practical training, and a bottom-up approach are key to sustainable AI adoption and risk management.
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/792
🎙️ What you’ll learn
- How to implement effective AI governance without stifling innovation
- Practical steps for building an AI management system
- The role of ISO 42001 and the EU AI Act in compliance
- Strategies to drive AI adoption and avoid shadow AI
- How to design ongoing AI literacy programmes for all staff
✅ Highlights
- “A poorly designed policy, I think, does stifle innovation. I think a well-conceived policy manages that trade-off.”
- “Shadow AI happens because organisations go buy an AI product and then lock it down.”
- “Our first recommendation… is that you’ve got to have someone to champion AI initiatives.”
- “Most organisations will start either from ISO 42001, or… the NIST AI risk management framework.”
- “The idea is we’re not just trying to put together a set of policies… What we’re trying to create is a living process.”
- “A great AI management system defines who has to get trained in what, and then makes sure that actually happens on a regular basis.”
- “If your customers knew how you treat their data, they might not be your customers anymore.”
- “The top-down programmes tend to go poorly, whereas the… bottom-up programmes tend to do much better.”
- “When we are able to empower more people… we start to build the organisation’s muscle.”
- “The first step… is always regulatory.”
- “EU AI Act is written… to be extraordinarily broad.”
- “At the top of the pyramid, there’s a certain set of fairly robust training or literacy requirements that should be for whoever’s actually making the AI.”
🧰 Mentioned
- ISO 42001: https://www.iso.org/standard/42001
- EU AI Act: https://artificialintelligenceact.eu/the-act/
- NIST AI risk management framework: https://www.nist.gov/itl/ai-risk-management-framework
- GDPR: https://gdpr.eu/
✅ Keywords
ai governance, iso 42001, eu ai act, compliance, shadow ai, risk management, ai management system, ai literacy, bottom-up adoption, regulatory, data privacy, nist framework
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith
00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Welcome to AI Unfiltered. Our guest is from Chicago in the US. He splits his time between work for Proceptual and Michigan State University. As always, links will be in the show notes for today's discussion. John, welcome.
00:00:58 John Rood
Hey, Mark, happy to be here.
00:00:59 Mark Smith
I am so looking forward to this discussion with you around AI governance because there's so much information around whether it's restrictive or whether we need it or the absolute critical importance of it. And it all depends on which side of the media you're listening to. Everyone has an opinion around this. But before we get started, I always love to ask about food, family, and fun. What do they mean for you living in Chicago?
00:01:24 John Rood
Sure, absolutely. So food got harder because we moved to the suburbs. We were in the city for 17 years and now we're 45 minutes north. So it's harder up here, which was surprising to us, but we're doing the best we can. In terms of fun, I love to play tennis, love to play pickleball, anything with a racket, we love to do it. And then family is what we're all here for. I'm married, I've got two boys of elementary school age, and we're having a great time.
00:01:55 Mark Smith
Nice. Tell me a bit about Proceptual and what you do there, and even what you do at Michigan State.
00:02:03 John Rood
Yeah, absolutely. So we started Proceptual towards the end of 2022. And the idea behind the company was that we were going to work in the world of compliance and regulation. So we started doing compliance audits on AI systems, particularly in human resources, mostly around a law that had been passed in New York City, which was globally one of the first regulations in the world of AI. Over the next couple of years, that has blossomed into what we now would talk about as more generalized AI governance. So for most of our clients, we're putting together a custom program for them. Usually that's implementing one of the leading AI governance frameworks. Oftentimes there's a regulatory component with the EU AI Act or one of the regulations here in the States. And almost always there is a sales enablement component to it as well.
00:03:00 Mark Smith
Yeah, that makes sense. What about Michigan State University?
00:03:04 John Rood
So there was a program we put together. We started teaching it this year on AI strategy and governance. That's with their graduate business school, the Broad College, which has been great. I'm an alumnus of Michigan State. And then we're going to launch our eight-week program in AI governance, hopefully in January, fingers crossed for January next year.
00:03:27 Mark Smith
Yeah, I like this subject of governance, because I think a lot of us would be dead if there was no governance, right? And what I mean by that is that when I look at AI, I see a lot of parallels to electricity and how we use electricity in our lives, in that it is a massive life-saving piece of technology: people in health situations need electricity to keep them alive, and we use it to power our homes, cook, eat, everything, right? Electricity without regulation, though, if we just had bare wires running everywhere, a lot of us would get electrocuted. So I'm glad that electricity is a regulated thing no matter where you go in the world, because that level of safety allows massive practical application of the tool. When you talk about governance in organizations, one of the perceptions we see coming from media is that governance restricts innovation. How do you handle that discussion, that governance stops innovation, and why can't both of them actually work together?
00:04:46 John Rood
So the way that we think about it, in the world of really putting these programs into effect in working companies, is that it's all about risk management. So it's not the case, I don't think, that every company has to do governance this way or that way or any particular way. Now, certainly there are some regulatory burdens that companies have where they have to have something specific if they're in the EU, for example. I'm sure we'll talk a lot about that today. But different organizations of different sizes have different risk tolerances. Different industries will have very different risk tolerances as well. So the kind of program that we might put together in a healthcare organization or a defense contractor will look very different from, you know, a children's toy maker, for example.
So when we think about that question of, does governance trade off with innovation? I don't know that it has to. I think that's a very industry-dependent question, I guess I would say. I think that for sure it's the case that more regulation and more governance in the organization creates more process. Larger organizations oftentimes have more process and, you know, that can be hard to route around. And oftentimes that's where companies will come to us. One of the prototypical calls we get is someone will call and say, hey, we rolled out AI in our organization. Now everyone in the organization has Microsoft Copilot. It's always Microsoft Copilot in this situation. But then our InfoSec team locked it down so far and so hard that everyone, instead of using Copilot, is just taking out their phone and using the free version of ChatGPT instead. Right? So that's where, you know, there is some trade-off there, where a poorly designed policy, I think, does stifle innovation. I think a well-conceived policy manages that trade-off.
00:06:41 Mark Smith
Isn't it interesting that with that scenario you just gave, it puts the organization at a higher degree of risk. In fact, if you look at the 20 big headline cases around the world in the early days, companies had either customer data or proprietary code put into ChatGPT; I think Apple had staff that put code into it. The risk is, right, if you don't have a robust tool that people can use, they'll find workarounds, and you'll get shadow AI. And that puts the organization at a high degree of risk, because you've got no control now over how that AI is being used in the organization. At the end of the day, they can say, well, we can stop it on the desktop, we can lock it down. But in the example you gave, someone can just pick up their phone and away you go.
00:07:37 John Rood
Yeah, that's exactly the challenge. I think you articulated it really well. And when companies talk to us about shadow AI, leaders are oftentimes surprised when I talk about it that way, because I think that most leaders, if you asked them where shadow AI comes from, would say it comes from not having AI in the organization. And in my experience, it's exactly the opposite. Shadow AI happens because organizations go buy an AI product and then lock it down.
00:08:06 Mark Smith
Yeah, interesting. Interesting. So how do you facilitate that discussion in such a way that adequate guardrails are put in place, but not onerous ones?
00:08:20 John Rood
So, first of all, it's difficult, right? I oftentimes relate it to something like a sexual harassment policy internally, where there's something bad, so we need to define the bad thing and then stop people from doing it. And if we look at a lot of organizations maybe 18 months ago, I think a lot of them thought about their internal AI policies in much the same way: we need to figure out what's bad and then ban it. I think that we're now getting a little bit more nuance to that. So when we go into an organization and start talking about how we balance more process, more steps, more timelines, more staffing on governance against some of the harms or the challenges and the regulatory issues that we always have to think about, we do a couple of things. Number one is we have to get stakeholders from throughout the organization to step in and give their perspectives on it. One of the worst ways I've seen this done is: AI governance is a new project, we have to do it in the organization, and they say, okay, you're the privacy guy, so you're going to do it. And then the privacy guy is like, well, what am I supposed to do with the work that I'm already doing from 8 to 7 during my normal day? So first of all, it requires staffing. And our first recommendation for organizations of even medium size and up is that you've got to have someone to champion AI initiatives. If you're not doing that internally, it kind of falls where it may and doesn't get done properly. So we always think about having either a chief AI officer or an AI champion, whose job is then to go throughout the organization and start to gather information about how AI is actually being used and how AI coincides with the strategic roadmap or the organization's goals, and then to start to plant a flag and draft some policies.
00:10:21 Mark Smith
Yeah, I like that. I like that. So you do have a framework that you kind of step them through to get an outcome.
00:10:27 John Rood
Yeah, absolutely. And the place where it starts is existing frameworks, right? So frankly, the less we make up from scratch, the better off everyone is. Most organizations will start either from ISO 42001, or, here in the States, we see that generally smaller organizations will start with our NIST AI risk management framework. And we could talk about those two all day, but we always want to start by saying, why are we doing this, right? What are the regulatory pressures where we know we have to check those boxes? What framework aligns with that? And then we can put that into the organization.
00:11:05 Mark Smith
So let's talk about ISO 42001. First of all, what is that standard?
00:11:13 John Rood
So for the 42001 standard, a lot of people will be familiar with ISO 27001, which is around data security and privacy. This is the next generation of that. So if the audience is familiar with that, they'll know what it is. If not, it's a set of standards and controls that organizations need to follow to eventually get certified as being compliant with ISO 42001. I think that certification for that standard is the gold standard globally today. I don't think that every organization has to do it that way; full ISO compliance is an expensive, long process. So with startups, for example, that's not where I tend to start, but it is the place where enterprise organizations will tend to end up.
00:12:04 Mark Smith
One of the things that standard talks about is an AI management system. What are your thoughts there? Are you starting to see that as a systematized way of adopting it? And then if you combine it with the EU AI Act, there's a lot of onus on businesses to make sure they're training their staff around AI systems, and that the human factor, if you like, is considered in the AI development process. What are you seeing?
00:12:35 John Rood
So the AI management system is really, I think, at the heart of what the ISO standard is trying to get across. And the idea is we're not just trying to put together a set of policies, take the policies, put them in a binder, and then stick that binder in a desk or a closet, right? What we're trying to create is a living process that helps us manage AI throughout our organization. That idea of continual improvement is at the absolute core of any of the major ISO standards, right? So the idea is we're not doing this one time. We are putting together the right system to make this work for many quarters, many years, theoretically many decades. One of the things you mentioned is training and the human component. And that's absolutely critical. In fact, most of the regulations coming down the pipeline now, with the EU AI Act being the most germane, will require some level of AI literacy training throughout the organization. So a great AI management system defines who has to get trained in what, and then makes sure that actually happens on a regular basis.
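To make that concrete, here is a minimal, purely illustrative sketch of how an AI management system might encode role-based training requirements and flag overdue refreshers. All role names, module names, and intervals are hypothetical, not something John or the ISO standard prescribes:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical role-based training requirements with refresh intervals.
REQUIREMENTS = {
    "developer": {"modules": ["ai-safety-basics", "model-risk"], "refresh_days": 365},
    "all-staff": {"modules": ["ai-safety-basics"], "refresh_days": 730},
}

@dataclass
class TrainingRecord:
    person: str
    role: str
    completed: dict = field(default_factory=dict)  # module name -> completion date

    def overdue(self, today: date) -> list:
        """Modules this person has never taken or is due to refresh."""
        req = REQUIREMENTS[self.role]
        cutoff = timedelta(days=req["refresh_days"])
        return [m for m in req["modules"]
                if m not in self.completed or today - self.completed[m] > cutoff]

# Usage: an auditable check that training actually happens on a regular basis.
record = TrainingRecord("alice", "developer", {"ai-safety-basics": date(2024, 1, 10)})
print(record.overdue(date(2025, 6, 1)))  # ['ai-safety-basics', 'model-risk']
```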
00:13:46 Mark Smith
Yeah, and it should have an auditable trail, right? So if you ever do get flagged, you can point to: this person was trained in this area on this date, and here were the refreshers, and things like that. So important. What else does that standard cover? If you were to categorize, say, three to five things that are important about the standard in a discussion with a business stakeholder, let's say an organization of a size that needs to get serious about it, what would you recommend? Do they get an AI council in place? What do they do?
00:14:24 John Rood
Sure. So when we try to communicate succinctly what the standard means, and of course it's substantial, it's not a breezy read, I think about it as really three things. The first is people. And we talked about people a little bit already. So how do we appoint the right people to manage the system and to create the system? And we can go into detail about that. The second big chunk for me is data. How do we get our data? How do we manage our data? Again, for those of you in the audience who are familiar with information security and privacy standards, a lot of this will be similar, just more in the AI flavor, right? So how do we get our training data? Are we allowed to, for example, use live data that our customers send us to improve the algorithm? Can we use customer A's data to improve the algorithm for customer B? Those are all the relevant questions we have to think about as we think about data. And then the third big chunk for me is transparency. How do we, as an organization, establish and then communicate to all the different stakeholders in our value chain how we're thinking about AI governance and safety? And this is so important because if you look, for example, at the EU AI Act, there are very clear delineations between the responsibilities of AI developers and AI deployers, as well as resellers and a couple of other sorts of parties. But fundamentally: who makes the AI, and who uses the AI? And then those people also have to think about their customers, right? So if we're the developer of the AI, what information do we have to clearly communicate to our customers such that, A, they can feel good about how they're using our products, and, B, they can communicate transparently with the end user, the actual human at the end of that value chain?
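As a purely illustrative aside, the data questions John raises (live customer data, customer A versus customer B) could be captured as explicit policy flags that a training pipeline checks before running, rather than living as tribal knowledge. A minimal sketch, with hypothetical names and conservative defaults:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataUsePolicy:
    # Hypothetical flags mirroring the questions in the conversation.
    train_on_live_customer_data: bool  # may live customer data improve the model?
    cross_customer_training: bool      # may customer A's data improve customer B's model?

def check_training_job(policy: DataUsePolicy, uses_live_data: bool, spans_customers: bool) -> None:
    """Refuse a training run that would violate the declared policy."""
    if uses_live_data and not policy.train_on_live_customer_data:
        raise PermissionError("Policy forbids training on live customer data.")
    if spans_customers and not policy.cross_customer_training:
        raise PermissionError("Policy forbids cross-customer training data.")

# Usage: conservative defaults allow the safe run and would block the risky one.
policy = DataUsePolicy(train_on_live_customer_data=False, cross_customer_training=False)
check_training_job(policy, uses_live_data=False, spans_customers=False)   # passes
# check_training_job(policy, uses_live_data=True, spans_customers=False)  # raises PermissionError
```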
00:16:13 Mark Smith
Yeah. It's interesting. What came to mind, I remember being at a conference in Vegas last year and the comment came out: if your customers knew how you treat their data, they might not be your customers anymore. And I wonder if a lot of organizations will perceive this even more so as a risk. I mean, I hadn't even thought about using company A's data to train for company B, et cetera, what the impact of that is, and even the ability to opt out. One of the things we discussed was that around 5% of companies are seeing real ROI from AI. And there have been a lot of reports in the last 12 to 18 months of POCs that didn't become production. My distilling down of that is there was never a business case for it. In other words, when I say a business case, a financial output or justification; it was more like, let's try it. And the fear of missing out is a motivator for a lot of businesses to do something. Obviously all the hyperscalers that make money off selling compute, and AI compute specifically, as well as all the AI companies with their frontier models, are wanting to massively increase that 5%. What are we getting wrong? We've been three to four years in the gen AI space now. Why is that number so low?
00:17:51 John Rood
To me, it's two things. These come, first, out of the MIT report that you referenced, which was kind of a bombshell in our shared industry, and secondly from what I've actually seen in practice in the field in our own work. So number one, it's the idea that we are going to do AI from the top down. And the way that this usually works is the board or the investors say to some mid-sized organization, right, like a lower-middle-market, private-equity-owned company, it's time to do AI. So the CEO says we have to do AI, and then they tell the CTO or whoever, okay, we have to do AI, and we have to have some metrics. And the metric that's easiest to measure is some idea around usage or adoption in the organization. So then, as we referenced, they go buy Copilot, right? They roll it out through the organization, do a 30-minute training once at the end of an all-hands, 17% of the company uses it once, and they say, okay, now we've got some usage metrics, and that's kind of it. So the top-down programs tend to go poorly, whereas the quite basic, I think, bottom-up programs tend to do much better. And that's, I think, the second big point, which is, when we think about how organizations really get value out of AI, rarely is it that we're going to put in a new system and it's going to work fantastically from the first day. Generally, it's kind of a partnership between the people in the organization, the vendor, and the AI. And oftentimes I'll talk about hiring an AI product to do a job as a lot like creating a new relationship with a business process outsourcer, right? So if we said, you know, we have 100 customer service people and we're going to move that operation to India or to the Philippines, for example, we wouldn't expect that it was going to perform exactly right on the first day. There's a lot of training, there are a lot of mistakes, there are a lot of mis-hires that go through that process. And that's what we see in the world of AI. So if the expectation is that the technology is going to work perfectly out of the box, it's likely to fail. Whereas if the expectation is that this is a process we're going to be working through for months and years, that's a more reasonable expectation.
00:20:11 Mark Smith
So when we look at practical steps of, let's say, driving AI, is it that the horizon window for it just needs to be extended? Or is it something else? As in the horizon window for the return.
00:20:25 John Rood
Yeah, I think that's a great step as we think about the expectations. And then the other step is, I think, pushing AI adoption further down in the organization. So as opposed to a top-down initiative, like we bought everyone Copilot, what I see in the field is that the organizations who get the most use out of AI often have departmental-level AI operators or AI enthusiasts who say, hey, we've mapped out this workflow, and there's a little piece of it that right now is a couple of people's job, and we think we can find a way to automate that accurately. Right? So is it that we're going to save $50 million and that'll drop to the bottom line? Probably not in that instance. But number one, we're going to do a lot of those, we're going to find a lot of those. And number two, when we are able to empower more people throughout the organization to find basic and repeatable automations, we start to build the organization's muscle to go and be able to do that in future years.
00:21:30 Mark Smith
Yeah. Nice. So then with the FOMO on one side, right, and the need for innovation and the need for adoption and the need for safety and ethics, how is an organization to balance these things from your point of view?
00:21:48 John Rood
So the first step, I think, for us is always regulatory in the discussion, because if we have a client that does substantial work in the EU or has substantial customers in the EU, the EU AI Act is going to be a thing. And the audience may or may not be aware that the implementation deadline for that has been moved back. So there's, I think, a little bit of backpedaling, but I don't believe that it's going to go away. For folks listening in the States, the state of Colorado, the state of California, even Texas, unusually, have passed AI regulations. So the place to start is: what do we need to do to be compliant? That's step one. Step two is: what are our customers going to expect from us in terms of AI governance and safety? The most popular call that we get is from, again, usually a mid-sized organization that says, we're trying to sell into enterprise, and we're used to getting the checklist of safety and regulatory issues, and now AI governance is at the top, right? So what do we do? Not so that the EU doesn't come after us and fine us, although that is a risk, but so we can go out and have our sales process run correctly. So the first step is: what's the baseline that we need to put together in order to be able to run our business and go out and sell? I'll stop there because it's a big question, but that's kind of the first line of defense.
00:23:17 Mark Smith
Okay, so if that's the first, and obviously it differs between geographies. One thing out of what you said there: you talked about different state bodies putting in regulations in the US. And I know the Trump administration has wanted AI to be regulated not at a state level, but at a federal level. But there doesn't seem to be, and I might have missed it, that federal regulation coming out. Or have I missed something there?
00:23:46 John Rood
No, that's correct.
00:23:47 Mark Smith
So it hasn't come out.
00:23:48 John Rood
Right. It hasn't come out, and I'm not holding my breath for it to come out.
00:23:53 Mark Smith
Right. Very interesting. Some people say that the EU AI Act is kind of the gold standard, with a lot of other countries either taking elements out of it or working on some variant of it. Would you agree? Because you're broadly across a lot of the different regulations out there, what do you see as the most robust? And one of the things in the EU AI Act, if you look at the jurisdiction, I forget the exact phrase for it, but it's like multi-territorial. In other words, let's say I'm in New Zealand, but if I'm selling a product or my systems, et cetera, into the EU, I also need to be compliant under the Act. And then I saw another example, an illustration that was given to me: say an EU citizen goes to another country and falls afoul of something that causes them, let's say, an injury. The use case was very simple. Let's say they're walking down a street, and the council in that area was using an AI management system to detect defects on the pavement, and it didn't detect one, or detected it wrongly, and the person broke an arm or a leg. Then, because AI had been involved in the management of that asset, the council also comes under the EU AI Act, even though it's in a country that is not in the EU. Now, how do you see that? How do you see the restrictive nature, or the all-encompassing nature, of those regulations out there?
00:25:31 John Rood
Well, the EU AI Act is written, exactly as you said, to be extraordinarily broad. And this is not new for the style of the EU. We saw this with GDPR, where at least parts of GDPR have become almost a global standard. And they have a name for it: they call it the Brussels effect in the academic literature. So you're exactly right about how the EU AI Act is written. It's written such that if basically any part of the value chain of an AI system touches the EU market, then the law technically applies, right? And then we can get into lots of questions about, you know, how likely you are to actually be the subject of enforcement action, et cetera. But technically, that's the case. So it's a very challenging standard. Now, the other side of that, the side that makes it a little bit easier, is that the EU has promised us that they're going to release a set of what they call harmonization standards. Basically, that'll be a set of standards such that if they are met, there's a presumption that we've also met the standards of the EU AI Act. We expect ISO 42001 to be on that list. There's an OECD framework that may be on that list, and possibly the US NIST framework may be on it as well. That list of harmonized standards was supposed to be in our hands many months ago. And it's one of the key reasons why, when we talk to our clients and they say, how can we comply with the EU AI Act, the actual answer is that you can't, because it requires filing paperwork at offices that don't exist yet. So that's part of the challenge of it. But when we think about all these different global regulations, ultimately it's coalescing around this idea that you have to have a relatively standardized system of AI management or AI quality management. And the key frameworks, including ISO, are the way you get there in a relatively assured way.
00:27:29 Mark Smith
Yeah. Let's wrap up with the human component of AI. Once again, the EU AI Act talks about the need to educate and create a level of AI literacy. How do you see that being practically rolled out? Because AI is not really a one-and-done; it's changing constantly, right? So how do you see organizations meeting the need to provide AI literacy, where one's literacy potentially should increase each year, right? As I say, an ongoing literacy program that elevates. And of course, the literacy programs we saw the hyperscalers roll out at day one tended to be more technical: what's AI, what's machine learning, and so on. Joe Bloggs on the street probably doesn't have to know all the different tech infrastructure behind AI. They're like: How do I use it? How does it enable me? And, I suppose as a key element, what do I need to do to use it safely? How do you see that being rolled out at scale in organizations?
00:28:38 John Rood
So we think of it as kind of a pyramid, right? At the top of the pyramid, there's a certain set of fairly robust training or literacy requirements that should be for whoever's actually making the AI, right? So that should be, you know, the CTO, the technical people actually building the AI. And there's some technical work there that should be taught, but usually CTOs already know that at this point in their career. A lot of it is around the world of regulation and the world of the governance standards, right? So how do we build governance into what we're actually building is the key question, I think, for the top of the pyramid. Then there's this big middle of the pyramid, which I would say is manager and director level in most organizations. Oftentimes that is, first of all, a safety component, and then secondly, a function-specific component. We do a lot of trainings, and when we do training in an organization, oftentimes people will come to us and say: hey, we need you to come do an AI literacy and safety training, but also, while you're here, why don't you teach our finance team how they should be using AI to actually get value out of it? And then the base of the pyramid is everyone else. It's probably 80% of the people in most organizations. And it's relatively basic safety standards. It's things like: don't upload personally identifiable information to ChatGPT. It's things like: if you get a link from a Gmail address inviting you to a video meeting with the CEO of your organization, you've got to ask someone if that's the real thing. It's those things where, similar to the world of data privacy or phishing prevention, it's thinking about what the regular person working in the organization needs to know to not put themselves, the customers, or the organization at regulatory or safety risk.
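Purely as an illustrative sketch (the tier, role, and module names below are all hypothetical, not from the conversation), the pyramid John describes maps naturally onto a role-to-modules lookup:

```python
# Hypothetical encoding of the three-tier literacy pyramid described above.
PYRAMID = {
    "builders": {"roles": ["cto", "ml-engineer"],        # top: whoever makes the AI
                 "modules": ["regulation-deep-dive", "governance-by-design"]},
    "managers": {"roles": ["manager", "director"],       # middle: manager/director level
                 "modules": ["ai-safety", "function-specific-use"]},
    "everyone": {"roles": ["*"],                         # base: ~80% of the organization
                 "modules": ["basic-safety", "pii-handling", "phishing-awareness"]},
}

def modules_for(role: str) -> list:
    """Collect every literacy module a given role must complete."""
    required = []
    for tier in PYRAMID.values():
        if role in tier["roles"] or "*" in tier["roles"]:
            required += tier["modules"]
    return required

print(modules_for("director"))
# ['ai-safety', 'function-specific-use', 'basic-safety', 'pii-handling', 'phishing-awareness']
```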
00:30:37 Mark Smith
Yeah, I like it. John, it's been great talking to you. If people want to reach out and get in touch, what's the best way for them to do that?
00:30:44 John Rood
Two ways. They can come to our website, which is proceptual.com; I'm sure you'll put it in the show notes. And hopefully my LinkedIn will be in the show notes as well. I love to get LinkedIn messages. Come to our website. We also have a free AI literacy training available now; folks can sign up for that. We just love to hear from people. This is a fun time to be doing AI work. I think it's exciting, so we're energized.
00:31:10 Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thanks for tuning in. I'll see you next time, where we'll continue to uncover AI's true potential, one conversation at a time.
John Rood is an entrepreneur who loves building companies.
Currently, he is working on Proceptual, a platform focused on compliance with emerging regulations in Artificial Intelligence and Machine Learning.
John is the co-founder and CEO of Next Step Test Preparation, which grew from a bootstrapped two-person tutoring operation into a national leader in high-stakes exam preparation in the health sciences. In 2018, the company was acquired by private equity.
He recruits great teams and sets them up for success through strategic clarity, operational excellence, and a focus on results.