AI Risks You’re Ignoring and How to Fix Them

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM 
 
Craig Taylor shares practical, real‑world guidance on cybersecurity, AI risks, and behaviour change inside organisations. He explains why positive reinforcement outperforms punishment, how biases appear in AI systems, and why zero‑trust matters for companies of all sizes. The conversation offers pragmatic, people‑centred steps to strengthen cyber literacy, reduce insider risk, and navigate emerging threats such as deepfakes and social engineering. 

👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/797   

🎙️ What you’ll learn  

  • How to strengthen cyber hygiene using behaviour‑based training 
  • How AI bias emerges and why validation and critical thinking matter 
  • How to reduce insider‑risk through access control and observation 
  • How zero‑trust improves resilience beyond the legacy walled‑garden model 
  • How to help staff recognise social engineering and emotional manipulation 

Highlights 

  • “We're using it to produce videos, to produce scripts for the videos, ideas” 
  • “AI lies to you with very convincing falsehoods” 
  • “Rewarded behaviours are repeated” 
  • “Leaderboards… got leaders within companies to actually participate” 
  • “It's not rocket science. It's a lot of common sense” 
  • “Accidents happen and not malicious, not intentional, but boy can they have tragic consequences” 
  • “We have to try in some way” 
  • “The world is flat… our front doors are logically open to the world” 
  • “If you ever get an e‑mail that makes you want to take an immediate action, pause” 
  • “Anyone that wants a little free awareness training… cyberhoot.com slash individuals” 

 🧰 Mentioned 

🎯 Special Podcast Offer: 

  • 20% off CyberHoot for 1 year using the podcast’s unique coupon code: "AI Unfiltered"

✅Keywords 
cybersecurity, ai bias, zero trust, social engineering, cyber literacy, behaviour change, phishing, deepfakes, access control, insider risk, data protection, training 

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:00 - The Human Core Behind AI

03:00 - Creating with AI: Power, Limits, and Responsibility

05:00 - Bias in AI: The Hidden Risk Leaders Miss

10:00 - CyberHoot and the Science of Behaviour Change

16:00 - Inside vs Outside Threats: The New Reality of Cyber Risk

23:34 - Practical Security for Real Organisations

26:34 - The Rise of Deepfakes and Social Engineering

00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Welcome to the AI Unfiltered Show. Today's guest is joining me from Hampton, New Hampshire in the US. All the links that we discuss or any resources will be in the show notes for this episode. Craig, welcome to the show.

00:00:58 Craig Taylor
Thanks, Mark. Happy to be here.

00:01:00 Mark Smith
Good to be chatting with you. I always like to kick off my shows with food, family, and fun. What do they mean to you? And then we'll get into your background and what you're focused on.

00:01:10 Craig Taylor
Sounds good. So food wise, I don't cook. I like to eat, as you can probably tell, but I don't cook much. I do enjoy a good chicken wing now and again, that sort of thing. Don't drink much. Family is #1 in my life. My family, my kids, my friends, my parents, my brothers and siblings and everything else. It's really central. If you don't have family, what do you have really? So that's number one. In terms of fun, I do have a lot of fun in my life. I play hockey three or four times a week. I mountain bike, I golf. I have a lovely wife and I have just a tremendous social network through my hockey that allows me to really have a great social network, if you will. They keep me, pump me up when I'm down and bring me down to earth when I'm up, too much. They hold me accountable, I guess.

00:02:08 Mark Smith
I love that. I love that. I find it interesting, and it just triggered for me when you said, what do you have if you don't have family? My wife and I traveled coast to coast across Russia in 2017, starting in Vladivostok and going through to St. Petersburg. One of the things that struck me on the entire journey was their sense of family, something that isn't unique to any one culture, but I particularly noticed it there. We'd be told, this city is going to be dangerous, don't go out after dark, and it was totally not our experience. People fundamentally want to have a peaceful, happy life, and their nucleus is their family. It's just so refreshing, outside of the news cycles telling us everybody's bad and everything's bad out there, that there seems to be this groundedness in family, no matter what culture you go to.

00:03:08 Craig Taylor
Yeah, I would agree. And rightly so; it's probably bred into the human species, right? We wouldn't be here today if we didn't have family to ward off harm, bring us food, and keep us healthy and well.

00:03:26 Mark Smith
Yeah, so true. Tell us about what's top of mind for you right now. What are you working on? What are you involved in? I know we're going to talk a lot around security. What's your world at the moment?

00:03:41 Craig Taylor
Oh, I'm in the throes of artificial intelligence, AI, using it to multiply our workforce here at my company. We're using it to produce videos, to produce scripts for the videos, ideas, and then we really have to double down on making the things fun, entertaining, and educational. That's what our company does. So today I've been working on that just about all day, creating a video. It's funny, there's a little irony to it. The latest video I just worked on was about the big threats and mistakes people are making, and best practices around the use of AI. And I'm using AI to create the script for it, right? Which is actually perfectly fine. We're looking at only using company-approved AI tools and never posting or pasting confidential or sensitive information into AI, because public AI will consume it and potentially share it with your competitors. And know that AI lies to you with very convincing falsehoods, so you have to validate everything. Ultimately, you make the final decision, and you are responsible for the results, not AI. So that's been my message while working on it these last couple of hours.

00:05:06 Mark Smith
I'm always shocked at the lack of critical thinking when it comes to using AI. We've seen stories around the world where police agencies have profiled somebody, I know this happened in the UK, because of what their AI system told them, and it was obviously not the right person, absolutely not the right person. But the fallback was, it was the system that told me. Look at what your eyes are telling you as well. Use them together and apply some critical thinking. It seems we're seeing more and more of this lack of critical thinking when AI is used, whatever the use case might be.

00:05:48 Craig Taylor
100%. One of the things we didn't talk about in this video is the biases that are in AI. In the very example you gave, there's a company called Clearview AI, which many of the police forces of the world are using to identify persons of interest. It was trained almost exclusively on white Anglo-Saxon males, so it performs dismally on African Americans, on Chinese people, on anyone outside that scope. And so to your point, you have it spitting out random nonsense for persons of interest because it wasn't trained properly. If you ask any AI tool about Eastern philosophies, it'll give you some studied material, but it's been trained largely on Western philosophy. It's not going to tell you about life after life and Buddhism with the same depth and cultural beauty that Western society has fed into it, right? I tend to look at it as a reflection of the world we live in, but it's my world, where I live. It wasn't developed in those foreign lands, and there's not a huge body of Chinese literature that's been fed into these things. It's primarily English and Anglo-Saxon. So there are biases, you know.

00:07:13 Mark Smith
Massive.

00:07:14 Craig Taylor
If you go back 100 years, ** *** was indicative of both men and women. Madam and master were once equivalent terms; they're not equivalent today. So language matters, and what we feed it matters, and there are inherent biases, 100%.

00:07:33 Mark Smith
I've just read, within the last year, George Orwell's 1984. What I found very interesting in there, and its parallels to today, is that one of the things they decided to do was reduce language. They would take words out of the dictionary, and each year the dictionary got smaller and smaller in that society, because they wanted to limit people's ability to think, and you need words to think, right, to have context and things like that. So they decided to eliminate words. When you said we're only focused on Western, English-based content, it struck me that there's a need to expand that massively to have a truly worldwide view. I think if anyone's going to do it, it's going to be Google.

00:08:25 Craig Taylor
What you got me interested in there was the analogy of the thing that stops us from thinking and using a lot of words, this little serotonin fix reducing our discourse, our communications, our conversations, our face-to-face and non-verbal communication, all that stuff. I always worry; I'm a risk manager at heart. My 30 years have been in cybersecurity. I have a degree in psychology. I've studied and been an educator in hockey; I'm a level 4 certified hockey coach, I could coach high school if I needed to. What are the unintended consequences of all these things filling our lives up, right? Attention spans are shortening because of the phones. We want little TikTok videos that are seven seconds, not documentaries that are two hours; people don't have the patience for that. We're not thinking critically as much as we used to. Things are getting dumbed down a lot. And it's a real detriment to civil discourse, to a lot of what we're seeing in the world today: a lack of reaching across the aisle in any government system to say, where's the common ground? How can we find a win-win scenario here, or a win-win-win? A win for the party, a win for the people, a win for the country. Instead, and this is going in a different direction than I was thinking, Mark, but that's okay, what we see too much of is win-lose: the only way for me to win is if you lose. That's not leadership, and that's not growth as a society. That's regression, back to who has the biggest club in the cave. It's really sad.

00:10:08 Mark Smith
It's crazy. It's crazy, you know, and it's weird, I feel, at this stage of the world. You'd think we'd continue to evolve and become more advanced, and yet it seems we're degenerating back to our lesser selves in a way. I do find that so bizarre. Tell me about CyberHoot.

00:10:32 Craig Taylor
Okay. CyberHoot is, interestingly, founded on principles of science, psychology best practices, and educational best practices, all applied, in sort of a Venn diagram, to teaching people cyber literacy. What psychology tells us is that rewarded behaviors are repeated. Ultimately, we're trying to get people to learn good behaviors on computers, on e-mail, on the internet, online, so that they can repeat them. And we encourage and reward those with small trinkets and reward mechanisms such as gamification, certificates of completion, and avatars that grow as you complete assignments: your avatar becomes more ferocious looking, with armor and a shield, and you look more defensive-minded and wiser over time. Leaderboards are an interesting one too. That got leaders within companies to actually participate, Mark. Before that, they wouldn't do their assignments because no one would find out; they managed the compliance metrics. But now the leaderboard shows someone dead last, and the competitor in them says, no way am I going to be dead last. They do their assignments, they climb the leaderboard, and suddenly we have everybody participating. So these are the things CyberHoot was founded on: the idea and the principle of rewarding good behaviors so that they get internalized and repeated, right? The good news about cyber literacy, cyber hygiene, cyber smarts, whatever you want to call it, is that it's not rocket science. It's a lot of common sense with little bits of information around, say, why the sender of an e-mail matters the most. Because that's what hackers change by one letter to pretend to be someone they're not. And then they convince you with urgency and emotionality to react to something and click, and suddenly they're in your machine, they're on your network, they're encrypting your files, they've taken over your e-mail system.
But it's just a little bit of knowledge with some best practices. Too often, and for 25 years, Mark, my industry has focused on shame and punishment for clicking. And no psychologist ever said that. If you go all the way back to B.F. Skinner, he didn't say punished behaviors extinguish. He said rewarded behaviors are repeated. Just like parenting: if a child has a temper tantrum, you take them aside a little later and say, let's use your words next time. Then they use their words in a stressful situation, and you praise and reward them: you know what, Johnny, you did such a good job there, we're going to go for ice cream later, and if you keep behaving that way and stop with the temper tantrums... and then it becomes internalized. Dog training: you don't shock-collar a dog to teach them new skills, you use treats. And by the way, all along the way the dog loves it, and you love it, and the child loves it, because they're getting positive reinforcement as opposed to bigger sticks. So I don't understand why. Maybe most cybersecurity people had difficult upbringings, I don't know, but they've been focused for far too long on punishment theory, which doesn't work as a deterrent or as a behavior-change model. So that's what CyberHoot was based on: positive reinforcement, little short episodes, a video once a month, a HootPhish simulation that is very realistic to how hackers hack, going to the inboxes of users. And we regularly see very high compliance and high approval ratings. The videos they rate upwards of 90%, even our AI-generated ones. Our HootPhish exercises get slightly lower scores, because people are still warming up to the idea of being tricked and then walked through an exercise. But as time goes by, we see that go up to as high as 75%.

00:14:38 Mark Smith
Let me ask about part of what you do. Anywhere cybersecurity is practiced, there's a massive emphasis placed on external actors trying to infiltrate an organization, and we're taught heavily how to stop that. It goes by the logic of a walled garden: you can only get into the garden through the gate. I just thought of Dublin there, because that's where I first really came across these walled gardens. And the telecommunications industry showed us years ago that if you could bypass that gate, you could get in, and there was no security inside the walled garden, nothing to detect that you're in there once you're in. A lot of the business failure is that we're really good at the gate in, but not at what happens once somebody's inside. And the second part of that: what happens if you've got employees who are bad actors? How do you think about what's going on inside the organization and the risk profile it creates? Like you mentioned before, the risk of taking a company-confidential document or piece of code and putting it up to an AI that's not an approved tool because you want help with it, or taking some client data and feeding that into an AI, which we've already seen in the news cycle as happening.

00:16:15 Craig Taylor
Yes, oh yes, definitely.

00:16:17 Mark Smith
I remember a friend of mine at a conference was speaking and he said, if your customers knew the way you treated their data, they probably wouldn't be your customer.

00:16:25 Craig Taylor
Yeah.

00:16:25 Mark Smith
You know? How do you think about that?

00:16:28 Craig Taylor
So, a little side note. In my AI video, one of the statistics AI gave me, which I haven't validated, and which I was not going to use, is that 80% of employees are putting confidential information into public LLMs. And I was like, that's way beyond normalized, that's everyone. I don't want to say that, because it normalizes it. I want to prohibit it, because there are private LLMs in which you can do what you need to do that will be just as helpful, and they're not going to consume and regurgitate the data. So I didn't allow that into my video, which I thought was quite funny. But it's happening all the time, we hear about it in the news, and it's a real problem. Now, you mentioned quite a few things. One is the walled garden idea. That's probably a 15-year-old concept, focused on firewalls in the beginning: if you connect to the internet, you need a firewall, but once you're in, you're in, and you have access. That's really gone out the window within the last, say, five to ten years, as we've started looking at zero trust, where you have only the access you need. My company is going through SOC 2 Type 2 preparations. We're looking at who has access to every folder and whether they need it. Someone left today, their final day was Friday, so we went through the removal of all their access, cross-checking and validating it. The zero-trust model says that even though we have six people in virtual chief information security officer roles in our company, we do a consulting business as well as our SaaS platform, they should not have access to every vCISO client we have. So we made sure, directory by directory, that each had access to their clients, but not the other ones. That's the answer to the walled-garden approach, where everyone has access to everything. And here's the outcome of that walled-garden approach.
When you think about it from a technology perspective, it's a flat network. Why put firewalls and port restrictions on the communications between this location and that location? Well, consider the billion-pound bailout by the UK government of Jaguar Land Rover, the Range Rover maker, owned by Tata Motors. They operated a pretty flat network, where one ransomware event went everywhere. Whatever the current status is, I don't know. But that's bad management of the let's-just-keep-a-walled-garden idea, right? That's the Titanic approach: we just have to make sure no iceberg ever breaks our hull. If instead we want a submarine approach, we segment all the different parts of the business so that one breach over here doesn't impact everything else, doesn't sink the whole submarine or the whole ship. That's what we've moved towards. Now, the second part of your question was insiders. Part of it is training, from products like CyberHoot, where we teach you to watch and observe the employees around you. If someone's behaving abnormally or out of character, has weird hours, isn't delivering on what they've committed to, or has some private problem you become aware of, a drug addiction, a gambling addiction, it wouldn't hurt to offer assistance, or point them to your employee assistance program, or say something to your manager, who could escalate it to HR and say, maybe we need to get some help for this person. Because those are the kinds of individuals and scenarios where things can go off the rails quickly. It's about training people to spot and provide appropriate assistance in those scenarios, not confronting someone and saying, you are an alcoholic. It's more like, I think our company has an internal employee assistance program, have you heard about it? Shoot them a note on it, that sort of thing.
So there are many different avenues of threat that the businesses of the world face. The walled garden is one; having everything open on the inside and lacking good internal training for insider threats is another. And not every insider threat, Mark, is an adversarial one where I'm going to sell my company's secrets to a buyer out there.

00:21:00 Mark Smith
It can be an ignorance one.

00:21:01 Craig Taylor
It can be a mistake, right? There was a database in China about three or four years ago where the DBA, the developer of the database, was having problems with it. So he grabbed the problem code and posted it into a Reddit thread, saying, I need help figuring out how to do this and this. But the username and the password were pasted into the thread, and that database contained the personal information of every single Chinese resident. It was all in that database, and it was immediately scraped by dozens of nation-states and hacking organizations, putting everybody's data at risk. So accidents happen, not malicious, not intentional, but boy, can they have tragic consequences.

00:21:52 Mark Smith
You mentioned zero trust. How many companies do you reckon are even close to having zero trust implemented? I can understand Fortune 500 companies, yes, absolutely, they've got the smarts and the resources to do it. But there are a lot of medium-sized and small businesses that would have no idea. The common scenario I gave at a conference in Vegas was this: you get Dodgy Joe, who is at a work party. He sees the female partner of a colleague and takes a fancy to her, goes into the internal system, and does a search, now with AI, for next of kin, right? And he can find out who it is, probably a contact, a phone number, et cetera. Because in the HR onboarding process, somebody just sent out a form. They might not have a formal HR system that's blocked off from everybody accessing it; they've used SharePoint, created a SharePoint form, and said, oh, by the way, fill this in on day one of your onboarding. And you just happily give that information away, not knowing that it's stored unencrypted, in clear text, available to anybody who knows how to look for it.

00:23:12 Craig Taylor
All the critical faux pas, right, of good data management. You're not wrong. You're not wrong, Mark. But what I would counter with is that we all have to try. We have to try in some way, right? If I go to the doctor for my annual physical and the doctor checks me out: do you drink? Yeah, I drink a little. Do you exercise? Yeah, a little. Do you get enough sleep? No, probably not. Okay, well, drink less, exercise more, and get more sleep, right? These are all noble goals for you and me to follow. Do we do it perfectly? No. Should we keep at it and try to get better? 100%. One of the best cheat codes I can share with anyone listening is getting yourself a SOC 2 Type 2 audit. You're going to have a third party come in and look at how you run your business. They're going to look at that HR directory and say, why is this open to everyone? Who needs this? Oh, this person in HR, and the hiring manager only needs it on the day they hire, so we can put it in a folder where only one person and their backup have access. Well then, turn that on, right? Just as we've been doing here at CyberHoot, looking at directories and ownerships and structures: we don't publish all of our sensitive internal files to all of our employees. We have a special founders-only folder, we have another folder for HR-related materials, and we've restricted access to them. And we're going to have a third party come in and double-check our work. Did we cross all our T's and dot all our I's? The act of preparing for that has uncovered a couple of scratch-your-head moments. Were we thinking this through properly? No. Are we perfect? No. Am I going to lose sleep over it? No, I'm going to fix it. So that's what I would suggest, just like your doctor says: if you smoke, stop smoking.
But get that third party review because it helps you organize and systematize your business. I think that would work well for a lot of companies.

00:25:22 Mark Smith
Yeah, I like that. For me, straight away, that's an actionable step you can clearly take for your business. My last question, as we wrap up, is around non-technical security, and what I'm referring to here is social engineering. It's probably one of the oldest tricks in the book; there have been movies made about it, et cetera. How do you think about street smarts for your employees, in how they operate with digital artifacts and assets, but maybe also with just a phone call they're receiving that, unbeknownst to them, is the start of something?

00:26:05 Craig Taylor
Well, the world is flat. There was a book written by someone important with that line. My grandmother never locked the door of her farmhouse in Delma, Saskatchewan, when I would go visit her. She'd say, oh, nobody's going to come in; everybody in town knows everybody. And that was true for her; it was probably perfectly safe and okay. But in today's world, her front door is not a physical one anymore, not yours, not mine. It's a logical door, and that door is in downtown New York or Tokyo or Shanghai or anywhere. It's right there in front of 8 billion people, however many billion there are. They can come knocking on it, open it, and ask, what's inside? What's interesting? A phone call can come in from anywhere, from anyone. And not only that, artificial intelligence can now produce voices, and in the near future probably video conference calls, that impersonate the people we know and love, because they've posted a TikTok video or done one little voice recording, a podcast, and suddenly my son or my mother can get a phone call or a video call from "me": it's Craig, he's been in a car accident, and he needs $5,000 immediately. So what do you do? My mom would say, Craig, what's our family's safe word? And the phone call would go dead, or the video call would hang up, because no one else knows our safe word. That's one protection from the deepfakes and the video fakes. But the world is flat, and our front doors are logically open to the world. So we have to be suspicious. It's unfortunate but true. We cannot trust that the caller ID, the person we're looking at, or the e-mail we got actually came from those people. We have to be suspicious. And social engineering makes it all look like run-of-the-mill, everyday stuff.
Unless they want you to take a quick click on a link, in which case they use urgency, emotionality, and a tie-in to your social media likes and dislikes to make you want to click. So if you ever get an e-mail that makes you want to take an immediate action, pause. Take at least a three-second deep breath and ask yourself, does this make any sense? It's just like driving a car: if someone cuts you off and you react to that, you're not going to be rolling down your window to say, that's okay, go ahead, right? You're going to make some other gesture. You don't want to react in ways that put you at risk. You want to react thoughtfully and, importantly, with a little restraint.

00:28:53 Mark Smith
Yeah. Interesting: just this past weekend, not knowing I had this podcast scheduled, I had three phone calls from unknown numbers. The latest iOS has a feature where, if the number is not in your contacts, it plays a recorded message and doesn't even ring your phone. The caller has to verbally say who they are, where they're from, and the nature of the call, and only once they've done that will it ring my phone so I can pick up. In all three cases it was, we can see you're trying to change your blah, blah, blah. We're from, you know... nah, sorry. The numbers were coming from Estonia, from India, just random places. What's interesting is that I've had my phone number for over 30 years. It's not even a full-length number, it's been around that long, and it's obviously been on websites and all sorts of things over my career. But that little feature the iPhone now has made it easy: I was instantly able to mark each number as spam, block it, and report it. That's obviously updating their intelligence systems too. Yeah, it's a great feature.

00:30:21 Craig Taylor
That is a lifesaver. Like you, I used to get four or five calls a day, because I'm the CEO of a company and my information's out there. It's in my e-mail signature, for goodness' sake; I probably shouldn't do that. But at the end of the day, that feature has reduced the number of errant marketing and sales calls. I still take a few of them, because I have to answer to the emergencies of our clients, right?

00:30:49 Mark Smith
Yes.

00:30:51 Craig Taylor
It's been tenable, I would say.

00:30:53 Mark Smith
Yeah. Craig, it's been so cool talking to you and hearing your stories, so real and practical. Thank you so much for coming on the show.

00:31:02 Craig Taylor
Mark, it's my pleasure. What I can share at the end is that anyone looking for a little free awareness training can subscribe for free to our service at cyberhoot.com slash individuals, if that's helpful. It's completely free for life. You can take our videos and our HootPhish exercises and really learn how to spot phishing. It's a free service we have out there for any individual who wants to participate.

00:31:27 Mark Smith 
I like it. We'll put that link in the show notes.

00:31:29 Craig Taylor 
Sounds good.

00:31:30 Mark Smith
Thank you.

00:31:31 Craig Taylor
Thanks, Mark.

00:31:33 Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thank you for tuning in. I'll see you next time, where we'll continue to uncover AI's true potential, one conversation at a time.