👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/758
Hosts: Mark Smith, Meg Smith
Ethical AI starts with transparency, accountability, and clear values. Mark and Meg unpack responsible AI principles, why non-deterministic systems still need reliability, and how ‘human in the loop’ and logging keep people accountable. They share a simple way to judge tools: trust, data use, terms that change, and clarity on training. You’ll see how to set personal and organisational boundaries, choose vendors, and schedule reviews as risks evolve. They also consider a public call to pause superintelligence, and argue for critical thinking over fear.
Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz
🎙️ What you’ll learn
- Build an ethics checklist around transparency, fairness, reliability, privacy, inclusiveness, and accountability.
- Evaluate tools for training stance, data use, privacy, and changing terms.
- Design human-in-the-loop workflows with unique credentials, logging, and audit trails.
- Set personal and organisational boundaries for acceptable AI use.
- Plan a review cadence to reassess risks, mitigations, and vendor changes.
✅Highlights
“we're going to be squarely in the land of robotics very soon.”
“the intelligence age is so much more than just AI it is the age of intelligence”
“if you're an unethical individual, AI is probably going to amplify it.”
“there is no ethical AI without transparency.”
“people should be accountable for AI systems.”
🧰Mentioned
Microsoft Responsible AI principles https://www.microsoft.com/en-us/ai/principles-and-approach#ai-principles
IBM AI course https://www.ibm.com/training/learning-paths
TikTok https://www.tiktok.com/@nz365guy
Roomba https://en.wikipedia.org/wiki/Roomba
Zoom TOS https://termly.io/resources/zoom-terms-of-service-controversy/
Connect with the hosts
Mark Smith:
Blog https://www.nz365guy.com
LinkedIn https://www.linkedin.com/in/nz365guy
Meg Smith:
Blog https://www.megsmith.nz
LinkedIn https://www.linkedin.com/in/megsmithnz
Subscribe, rate, and share with someone who wants to be future ready. Drop your questions in the comments or the WhatsApp group, and we may feature them in an upcoming episode.
✅Keywords:
responsible ai, ethical decision-making, transparency, accountability, fairness, privacy and security, non-deterministic, human in the loop, audit trail, entra id, superintelligence, terms
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith
Mark Smith (00:12)
Hey, welcome back to the AI Advantage, where we talk about the skills that you need to thrive in the intelligence age. It's been a crazy week, lots of announcements happening online in AI. We're seeing advancements in robotics, and that's why we think the intelligence age is so much more than just AI, it is the age of intelligence, because I think
we're going to be squarely in the land of robotics very soon. In other words, having a robot in our house to assist us that is AI powered. Remember, two years ago I got my first robot on my property, and it mows our lawns through the day and the night; it's a full robotic mower. It does one function, kind of like the Roomba that does the vacuum cleaning that some people have. We don't have one of those. But the intelligence level, the smartness of it, is mind blowing, and what it produces for us is incredible. So I think we're moving into this whole age of intelligence. I wanna share something quickly that I find super funny in how algorithms work, particularly on the social media channels, et cetera, that are out there. So what I'm just gonna share here with you is our TikTok account, right? You can see we do shorts out of these videos and you can see we get between... what's that?
Meg Smith (01:29)
Look at those stills! Look at those stills!
Mark Smith (01:33)
Yeah, yeah. We get, you know, two, three hundred views at a time. But then why did this video only get fifteen, you wonder? And this one talks about a greedy capitalist taking our jobs. So the algorithm obviously hated that phrase in there and totally deprioritized it. It's the only video that I've ever had such a low viewing on our account. So I find it just so obvious, in that obviously the greedy capitalist running the platform did not like that I said something about them and therefore deprioritized the video.
Meg Smith (02:11)
The funniest thing about that video is you had someone comment and call you a socialist.
Mark Smith (02:15)
Yeah, somebody called me a socialist, which is such an interesting concept, and I'm not sure that I fully understand the difference between communism and socialism and capitalism. But all I see is that the capitalism we have had for the last period of time has allowed some individuals to become mega wealthy, and there's a lot of poverty in the world that still needs addressing. And so there's an inequality in how this is working at the moment.
Meg Smith (02:39)
Yeah. I mean, whatever the ism, whatever word or label you want to put on it, we should always be able to come back to explaining it in simple language. You know, if you can be clear on what you value and how you want to, you know, either create or support systems that help you live that out, then I think that's the main thing. And we watched,
was it called the Beanie Bubble, that movie the other night? And it was all about the rise of Ty, the toy company behind Beanie Babies, in the nineties. It's a wild story anyway. And I love the disclaimer they give at the start, which is essentially that it's a true story, but they've also made some stuff up, and you wouldn't be able to tell which was made up and which was true because the truth was so outrageous.
It's a funny movie anyway, and interesting. There was a point in it where they decide to go to the bank, or go to their business broker, and start this business in the late 80s, early 90s, I think. The kind of economic position is thrown up in a montage as a few things like, you know, higher than ever unemployment, general concern, and the state of the world is just really bad.
I turned to Mark and I said, that feels like the news we're getting today, right? And then the punchline was, why do people want stuffed animals, stuffed cats? Because they need some joy. So these dynamics and these systems, we definitely don't control them, but we're part of them. And so being able to observe and then make sense of what you observe for yourself, I think, is the so what.
Mark Smith (04:18)
Yeah, my takeaway from the movie was here's a guy that became a billionaire. He's still alive today. I don't know what's fact and what's fiction from the movie, but there were three ladies that helped him become extremely successful. And yet he totally cut them all out as part of his growth, as if everything was around his ability and his skills. And yet
these three women massively influenced, if not created, the trajectory that made that phenomenon happen at the time. But what shocked me is that these people that become super, super wealthy, billionaires we're talking about here, and I'm not saying that this applies to all billionaires, but the nickel and diming of somebody on $12 an hour, minimum wage, and yet
this individual lady had created over a hundred million dollars in revenue, and he was like, yeah, but you were using my assets, you know, to do it. But it was her ideas that created this phenomenal transformation in the business. And I'm just like, why isn't there a more even distribution to the people that, you know, have the ideas that create these massive impacts on business?
And this is why today's topic is around ethical decision-making with AI. Because I feel that ethical decision-making, we've got to think about it in the context of AI, but ethics goes well beyond AI. And if you're an unethical individual, AI is probably going to amplify it. Remember earlier on, I talked about AI as a mirror. It's a mirror of the character, I suppose, to some degree, of the people using it.
And so the risk is to say that AI could act unethically. Well, I think the overlords of AI that are running it potentially have more of an influence over whether it acts ethically or not.
Meg Smith (06:15)
When I was thinking about the topic for this week, ethical decision-making and safety, I just could see in my mind's eye the scene from the movie Billy Madison, which is one of the classic Adam Sandler movies from the nineties or the early two thousands. I used to watch them all with my brothers and absolutely loved them. This movie actually, I've watched a little bit of it back and it has aged very poorly, so content warning, but
there's this point. So basically the concept is Billy Madison is a spoiled rich kid who gets cut off. His dad cuts him off because he's just, you know, never finished school, just absolutely wasting his time and wasting his money. And so in order to get reinstated back to the level of lifestyle that he expected or had got used to, his dad says to him, you've got to go back to school. And so that's the whole premise of this movie: this bit of a dropkick character goes back to school.
And then he's got this nemesis in the dad's company who wants to take over the company, and the conflict in the movie is whether or not Billy will get through school and be able to take over as president of the company instead of this other guy. And this guy's a snake. Like, he is just dirty. He's doing everything he can. He's dodgy in the business. He's dodgy in his interactions with Billy Madison.
And at the end, the two of them do this decathlon, and the winner of the decathlon is essentially going to get this prize. And they have to sort of talk about a particular topic or answer a question. And there's one point in it where Billy Madison does this rambling answer to the question. And you think, you know, the music swells and the crowd cheers and you're like, he's got it.
And then the host basically just decimates him, saying, you have not made any sense, there's not a coherent thought in that, and we're all dumber for having listened to it. And so it's all like, no, he's lost it. He's lost everything. And the snaky guy is sitting next to him looking like he's the cat that ate the cream. And then Billy gets to choose the last topic. So he's scanning this board of 20-odd topics. And then he hits on it. The one that's going to absolutely stump this guy. Business ethics.
And that's the whole point, because he has no ethics. He cannot stand up there and answer a question about it because he's got nothing inside of himself. So when I was thinking about how we can be developing this skill around ethical decision-making, we've got to start with that understanding of where we personally stand on these issues, these values. We keep coming back to that in the work that we do.
What do we value and why, and how can we make sure that our actions and our choices are intentionally aligned with those values?
Mark Smith (09:00)
I like it. So what is responsible AI, Meg?
Meg Smith (09:03)
Yeah, well, that's the thing. We've kind of come at it explicitly from this ethical decision making, which relates to responsible AI and trustworthy AI principles. But I am very aware that we will fall into corporate speak if we're not really careful and intentional about how we look at this. And I was just doing a little bit of comparison between, you know, the big tech players like OpenAI, Google,
Microsoft, how they talk about and make these public commitments that they are building responsibly with AI, and they're pretty similar. There's not a lot of differences. I go to Microsoft's responsible AI principles quite regularly. There are six of them. You can go and have a look at them, we'll include a link in the show notes, and they talk about concepts like fairness, right? So all AI systems should treat people fairly. They should
be reliable and safe, so that means it will perform reliably across several different contexts and conditions. So, you know, we've talked before about how when you're using AI, you're not going to get the same answer every time because it is, I always get this wrong, Mark. What is it, deterministic or non-deterministic? Which one? I don't know.
Mark Smith (10:11)
It's non-deterministic; that's what an LLM is.
Meg Smith (10:15)
Yeah, so you will get different answers, but there should be reliability in terms of how, in different conditions, it will still work in the way that it was intended. The third principle is privacy and security: that AI systems should be secure and respect privacy in the way that they're designed and also in the way that they work. They should be inclusive and for everybody, and they should be transparent. One of the first
AI ethics courses I did, or where it was touched on, was an IBM course on AI. And one of the things in that course really struck me, which was there is no ethical AI without transparency. If there's not the ability to understand how the models that the system uses are trained and are working, if it's not clear how the data comes in and how the generation happens and comes out, then
it's not an ethical solution. And the last of Microsoft's six principles is accountability: that people should be accountable for AI systems, and that touches on the human in the loop concept. So the ways in which people are involved in the design, the monitoring and the use of AI, and ultimately responsible for the decisions that they make based on the outputs. When I did the, sorry, you go.
Mark Smith (11:35)
It was Danny, one of the listeners, who asked a few weeks ago about who should be responsible when AI makes mistakes. And if you look at this accountability principle that Microsoft has here, it quite clearly says people should be accountable for AI systems. So therefore it will reflect back on people. But I tell you what, look at what you're seeing from Microsoft's perspective now with Entra ID,
where every agent will have its own unique credential, et cetera, meaning that it is fully loggable and traceable around what it's doing. You can put accountability on it, put an audit trail on it, and log what's happening. Of course, there's potential that a rogue agent could erase the trail behind it, you know, sweep away the breadcrumbs, so to speak. It is interesting times here.
Meg Smith (12:28)
Yeah. And I think when I had looked at the differences, in OpenAI's positioning they are talking about long-term impact, which I thought was interesting. They sort of specifically call that out. And then if you look at the differences in the way governments think about it and talk about it, the US is more on the risk management approach
and the EU is more on the legislative approach. They're looking to make rules and they want people to follow the rules, and they have punitive measures in place if you don't follow the rules. And all of that kind of feels a little bit overwhelming when I think about it, because even when you just said that, you know, my heart starts to beat a little bit faster, because we don't know.
Even when tools say that they're trustworthy and transparent, it still requires an element of us making decisions about which ones we will and won't use. And I kind of think about it in three levels. We need a framework for how we make these decisions and how we will then review: is it working as intended? Are there breadcrumbs being swept away by agents? You know, the system needs to be designed to think about that.
But I start with my personal use and have been thinking about how do I decide, or what parts make up this framework that allows me to make decisions about tools that I will interact with and will give my information to. And we've touched on that a little bit before, but that's the first thing for me: do I trust it? What information can I have here about the level of trust that I should give this particular tool?
And one of those sort of principles is, do they make their stance clear on certain things like training or not training? And the catch, the sort of thing to be aware of there, is that terms and conditions are changing all the time. And that's true across the board for every single tool, right? So there's almost an element in the framework where I need to have a check to go back
and understand. There was one we were talking about, Zoom, for example, their terms of use, and this was pre the generative AI buzz and hype, back in 2020. It was when the use of it had shot right up. People were using Zoom more when they were joining calls from home, and only then sort of realizing that Zoom's terms of use had some holes in it that other enterprise video conferencing
tools like Google Meet and Teams didn't have. And they now come into the AI scenario where, you know, they're able to take whatever you record and whatever you use their tools for and use it for their own training. And understanding, you know, do you want that, depending on what you're recording, I think is a fair question.
Mark Smith (15:08)
Yeah, actually, Zoom's terms of use came out, I think, in much more recent times. It came out in 2023 when they made that change. So it was after the AI piece, where they said we're going to use your Zoom conversations to train Zoom's AI. And a lot of people, as you said, you were just on a call the other day and they did not know that that was in there. You know, it's one of the reasons why I've
left Facebook, or most of Meta's channels: because when I look at the way Zuckerberg and that organization think about ethics, it is not aligned with me at all, not even remotely, and so therefore I don't want to be part of that. And this is the piece, going back to critical thinking again, of going hang on a second, what has it been used for, how is it affecting
how I think in the world, or what has it been used to influence? Now the thing is that,
you know, this area of ethics and safety is such a broad area and it means different things to different people, and it has been politicized, right? It's been turned into a political pawn, batted backwards and forwards. But I think fundamentally, you know,
AI should be here to help humanity, right? To help us as humans in life. And I believe fundamentally it will do that, but it's who are the people behind the scenes controlling it and do they have ulterior motives? Because, you know, there's some folks that don't believe the average human being is intelligent enough to make these decisions for themselves and they think they're more intelligent.
to make it on our behalf, and therefore it comes down to how they will use these tools to enhance what they're doing. I had a certifier here last week who was certifying the new solar system that Meg and I have had put in on our property, and he was asking, am I worried that the Terminator thing is going to happen with AI? And I'm like,
it's funny, because I hadn't thought about that for ages, a good while, since generative AI came about. And I was like, you know, I'm absolutely not worried about a Terminator-type scenario ever happening with where the technology is going. I just do not see that dystopian view. I see more the potential for a few elites that,
for whatever reason, get themselves into a position, if they're not already in a position, where they will use the tool to do their bidding and potentially submit us to servitude to that system or their way of thinking about how the world should work. And it's a fine line, right? Because you can start getting into conspiracy theory very, very quickly.
And so it's this fine dance that we need to be playing, and we need to be back on that area of critical thinking going, hang on, where is that now? What is the impact? What should I be doing differently? How should I be thinking differently? Joe Rogan and Elon Musk did a podcast in the last week or so, and it was very interesting. Once again, Elon talking about this hyper abundance that he believes is coming, because
the question was asked around whether, for everyone, there will be a minimum cost of living type payment or, sorry, a universal income set out. And he was like, well, if there was, it would be a massive amount. Like everybody would have a massive amount of money and it would be an abundance. And so you've got some folks that are seeing this massive abundance coming, abundance in energy,
energy being dropped to ultimately zero, and yet at the moment, you know, if you look at the media in the US, you're seeing energy prices skyrocketing and everyone's going, wow, it's because of AI, it needs all this consumption. But there's a lot around, hang on a second, there's been a lack of investment in energy infrastructure for ages. In our own country, we've found this; the population has doubled
and no new generation facilities have been put in that match this kind of growth in consumption. But what happens is that doomsdayers go, wow, it's the AI that's causing these hikes in prices. And when we see unemployment happening, it's AI that's the reason for it. But hang on, there are so many other things in play. So it's going back to critical thinking again, right? That we don't just say it's this or that,
but really think more broadly about what the impact is or how it's coming about.
Meg Smith (19:41)
Yeah, it's scary how much that this-or-that programming, and we talked about this last week, is designed and built into the media that we consume. It's designed to make you this or that. And if you don't land on one side of that, it's kind of failed and therefore doesn't get the exposure. But all of the things you just talked about made me think we need to hold on to
our human connection and our talking with people who have different opinions than us, and being able to hear those opinions. You know, sometimes you can just ignore them, pretend you didn't hear them, because you just don't have the energy for that or don't have the space for it. Like, take Joe Rogan, right? My stance on his podcast is that it would add no value to me, therefore I don't listen to it, because, sorry, from my point of view, it doesn't serve me. So I don't listen to it.
But then I was looking at the rankings of podcasts by popularity in New Zealand, and it's the top ranking podcast by a long shot. And so you think again about the influences and the way that people's minds are being shaped anyway by the media that they consume. And you have to have an element of reality that, exactly as you said, everyone is going to be coming at this from a different point of view. And the question becomes, how can we
use the influence that we have to ensure that we are, you know, training it for good. And I think about it in a couple of buckets. So there's the framework for how you personally use AI, so the ethics framework by which you are assessing whether it's a good tool for you, and also you are governing your own use of it, what you would and wouldn't use it for. There's that.
In some scenarios, you're going to be responsible for choosing an AI system or an AI tool for your organization, for your club, for your school. If that's the position you're in and you're making that decision, you'd need a framework by which you could make an ethical decision when you're choosing the right tool to be used. And then in some cases as well, we know that many in our community are makers and builders. You're building with AI, you're building AI solutions. So then again, you have to apply
a different framework to govern the ethical building of that, and also the use of it and the monitoring of it: is it working? And we can't wait for perfect. We know that we're all using it without these frameworks, we're all building without these frameworks, but the idea is being able to hold yourself to account, maybe with people who do share your views or who you are like-minded with and can build with,
that you come together and find a way to go, okay, this is where the technology's at, this is what we're trying to do, these are the risks that we're aware of and how we've mitigated them, and we've assessed that in three months we're gonna come back, or six months or 12 months; you will agree a review process to come back and see if those are still the right risks to mitigate.
Meg Smith (22:39)
cool.
Mark Smith (22:39)
Okay, let's have a look at some messages from the community. We have one here from Alex. Do you want to read that one out?
Meg Smith (22:44)
Yeah, this was a great question. So he says: Hey Mark and Meg, it would be interesting to get your take on superintelligence and the race by the AI companies for this goal, versus the reluctance by some or all of them to put safeguards in place. And then we'll jump to that statement in a second, superintelligence-statement.org. But I'm not sure whether you'd consider this to be outside the intended scope of your podcast.
I don't even think we'd thought about what was outside the scope of our podcast. Maybe we should have, but I really like this. So have a look at that link if you haven't already. It's basically a statement and you can sign it; a hundred thousand people have signed it. And they share some of the signatories and their stances, where they're kind of, you know, making a stand on what we've been talking about today: that the rapid
Meg Smith (23:36)
innovation with AI tools can bring unprecedented health and prosperity, and also has, you know, risks in terms of what the impact on humans will be. The statement that they have is really simple. It's just two sentences. It says: we call for a prohibition on the development of superintelligence, not lifted before there is, one, broad scientific consensus that it will be done safely and controllably and, two, strong public buy-in. So Mark,
what do you think? I'm not sure if you've had a look at who signed it. The people that stood out to me were Geoffrey Hinton and Steve Wozniak.
Mark Smith (24:07)
Yeah, the most famous person that jumped out was Steve Wozniak, right? One of the founders of Apple. Sir Richard Branson signed it too. There's a couple of big names on there. But it's potentially toothless. You know, yes, great, we've pledged to, but no government, no organization is under the influence of it. And
there's an element where I feel that this statement comes from a place of fear. And I learned a long time ago in my life that nothing good comes from a fear-based framework. It doesn't serve me. So my first thing when I view this is, that's fear-based. We are worried about the unknown. You know, back in the original days, when they were exploring the world, any part of the world that wasn't charted, they would mark:
danger, there be dragons, because we don't know what's out there. And I feel like this is the same type of thing: danger, there be dragons. And it's just like, well, it's just unexplored. And so if we go, hey, we're not allowed to explore because it's dangerous, well, we will live in ignorance. And I feel that this is that. So for me, I'm like, yeah, I remember when it came out, but, you know, the world's gone on and AI continues to advance, and
you know, if I was looking at anything around ethics and stuff, I would look at the individuals running these organizations and make your decisions based on that. So, you know, people like Sam Altman: what do you think? Look at some of the media et cetera around him, and some of the people that know him, I mean, that's referenceable. Elon Musk: Meg's not a fan, I'm more of a fan of the guy. These are the things; it's the people behind these things. There's Palantir,
you know, another big player. What are your thoughts on the individuals running them? What do they do outside their day job? You know, what clubs are they into and things? Because you're probably gonna get a good read on them and go, do these align with me as an individual, and do they align with my ethics?
Meg Smith (26:07)
Yeah, and look at track record too, right? I remember you were often the detractor when I was at Google, being like, no, they're not doing this and they should be doing this. Like, you know, the outside voices, especially you being in the Microsoft ecosystem, you were quite critical. And now that I've come outside as well, I hear a lot of criticism too. So it's really interesting to hear that perspective. But from the inside, when I was there,
I saw a lot of really great decisions being made, and I knew a lot of people who cared very deeply and took their responsibilities very seriously about the different data that they were responsible for and had access to. And so it's that combination of getting a point of view from people from the inside, who are maybe detractors or critics, and then looking at your own views as well.
Yeah, my take on this is it's really interesting to see some of the names. I completely agree with you, Mark, that whenever I've made a decision in my life from a place of fear, I've felt disempowered and I've constantly second-guessed those decisions, because that thing that caused the fear is moving all the time, right? But I thought this was a really interesting statement and I will be watching it with, yeah, with interest as it evolves.
Mark Smith (27:25)
Remember, if you are hearing this podcast, it is also in video format on Spotify and YouTube. We have a WhatsApp group. We will provide links in the show notes for everything that we've covered today. I had a request over the weekend that we put more links in when we mention products and things like that, so we'll do that.
Make sure you share with your friends and colleagues if you think they would find this valuable. Next episode is topic 12, which is personal leadership and influence. That will be our focus. So if you've got ideas around personal leadership and influence when it comes to AI, and this is really about taking personal control of yourself and then the sphere of people that you influence, which is fundamentally your family and your community around you outside of work,
you know, get involved. Let us know in the WhatsApp group what you'd like us to cover. And thank you very much for joining us.
Meg Smith (28:19)
I'll leave a parting thought, because it ties to what you just said about how we look at the founders and the owners and the runners of these companies. Look at how they are treating their families, because you particularly see it when you listen to leading minds in AI and technology: when they're asked about their children, they will say, no, no, they're not online, or they don't use the tools.
They're not on Facebook, they don't have these things. And it's interesting. There's something to learn there, I think, in terms of it being good for thee and not for we. So, you know, I think it kind of boils down to: we should all touch grass a little bit more often as well. Thanks, everyone.