Train Your BS Meter for the Intelligence Age


Hosts: Mark Smith, Meg Smith

👉For full Show Notes: https://www.microsoftinnovationpodcast.com/737

A practical conversation on critical thinking in a noisy news cycle. The hosts unpack bias, truth detection, and source triangulation, then show how to set personal inputs, tune ad preferences, and use AI as a questioning coach. You get simple checks for risk, bias, and truth, plus resources to build daily habits that protect focus and decision quality.

Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz

🎙️ What you’ll learn

  • Apply a low, medium, high risk check before trusting information
  • Use source triangulation for travel, policy, and high-impact decisions
  • Reduce algorithmic pull by auditing and adjusting ad interests
  • Spot common biases and set prompts that challenge your view
  • Build a daily news routine that filters noise and highlights impact


✅ Highlights

  • “It’s really important… identifying and understanding how we measure or apply critical thinking.”
  • “Who wants me to believe this? … Why do they want me to believe this?”
  • “Control and even limit what you consume.”
  • “Decide, is this a low, medium or high risk of me getting this wrong.”
  • “We need to develop our truth sense.”
  • “Act as my critical thinking coach… challenge my thinking one question at a time.”
  • “Go one step further and ask it to explain it.”


🧰 Mentioned

  • Nexus by Yuval Noah Harari
  • Blink by Malcolm Gladwell
  • Hans and Ola Rosling's TED Talks on global misconceptions
  • Mustafa Suleyman's book, The Coming Wave
  • Microsoft 365 Copilot Adoption (Mark and Meg's book)
  • Perplexity

Connect with the hosts

Mark Smith:
Blog: https://www.nz365guy.com
LinkedIn: https://www.linkedin.com/in/nz365guy

Meg Smith:
Blog: https://www.megsmith.nz
LinkedIn: https://www.linkedin.com/in/megsmithnz

Subscribe, rate, and share with someone who wants to be future ready. Drop your questions in the comments or the WhatsApp group, and we may feature them in an upcoming episode.

✅ Keywords:
critical thinking, bias, confirmation bias, selective attention, frequency illusion, baader-meinhof, triangulation, truth, amygdala, advertising preferences, perplexity, whatsapp

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:00 - The Importance of Critical Thinking in Today's World

02:32 - Understanding AI's Role in Critical Thinking

06:21 - Navigating Information Consumption and Bias

11:36 - Developing a Critical Thinking Framework

16:54 - Source Triangulation and Validating Information

18:53 - Techniques to Avoid Bias in AI Usage

Meg Smith (00:11)
Hello and welcome to the AI Advantage. It's been a bit of a week. I forgot to introduce myself: my name is Meg Smith and I'm here with Mark Smith. Today we're going to be talking about critical thinking, which, in the week that was, feels pretty important. I try not to read the news too much because it makes me sad, but this week felt like one of those weeks where you had to read it, you had to be on it a little bit. I don't know about you, Mark.

Mark Smith (00:36)
Yeah, absolutely. There's so much going on in world news at the moment, and I'm trying to remember that a lot of the information I'm getting is intended to raise fear. A lot of the news right now seems designed to elevate a fearful state of thinking. Whether it's the UK rolling out an electronic ID for every citizen, where I'm hearing it's the government wanting to control everything from your finances to your movement to your employment, right through to claims that this is the mark of the beast that will be put on everybody, which is rhetoric I've heard for probably over 30 years. So it's an interesting time. Then of course you see all the crazy stuff happening in the US and Israel, and the news is not great. That's why today's topic is so important: identifying and understanding how we measure and apply critical thinking to everything that comes across our path.

Meg Smith (01:45)
Yeah. And this is not a political podcast, and we're not trying to turn it into one, but we are very people-centered, and we think about the people affected by these different things happening in the world. We think about people in Palestine every day, and we think about all the different ways we can use our minds and our voices to do our part, and what that looks like. I really think the question for all of us is: how do we get back on the same team? I say that feeling a bit Pollyanna-ish, but I do think that's the question at the heart of it. So today we're going to think about how we engage our brains alongside the way we use technology, to play the best part we can and have the most impact.

Mark Smith (02:32)
Yeah, I love that. One of the things that jumped into my feed on X earlier today is a podcast that looks like it was recently recorded with Mustafa Suleyman, on the John Hazard Hernandez podcast. I just want to play you this soundbite, and then let's have a little discussion on it. I love what's covered there. Sometimes we believe that what AI is producing for us is very human-like, and we have that perception, so it helps to ground it back in the fact that there's no fear mechanism, no incentive mechanism built in. These are all human traits that haven't yet become part of the algorithm. But of course it's a savant at language, at knowing how language can influence, and all of a sudden it creates perceptions in our minds of something more than what it is when we're dealing with AI. If you don't know Mustafa Suleyman, he's got a brilliant book out that is well worth the read. He's the CEO of Microsoft AI, and he's ex-DeepMind, from Google. This guy's been in this space a long time, involved in AlphaGo and AlphaFold.

He highlights something we need to keep top of mind around this element of critical thinking: that the level of intelligence there, although seemingly human-like, is not human, because of what it misses. I thought it very poignant that it has no sense of pain. In human nature, fight or flight, as I understand it, is managed by the amygdala in your brain, part of the lizard brain; it's part of how we operate as humans. That's not there in AI. Now, I have a feeling it's not there at the moment but could be in the future, because ultimately our brain is a heap of algorithms, so maybe that will be part of the algorithmic creation of the future. When I think of critical thinking, there are so many layers, and one of the layers I want to touch on to start with is the importance of

controlling and even limiting what you consume. Because otherwise, as we know, the algorithms on most social media platforms are designed to give you more of whatever sparks your interest, right?

Meg Smith (06:37)
If you want to go and have a look at this, it's interesting; I was talking about it with someone this week. I worked for Google for nearly a decade, always in the advertising space. And what a lot of consumers don't realize is that you can go in and take control. You can change what Google knows about you. That came about probably six years ago, when they first launched what was called Ad Preferences. Now I think you can manage it under My Account. If you Google "ad preferences,"

you can go to your own account when you're logged in with a Google ID and see all the categories of interests Google has decided you're interested in and should get personalized ads for. And you can change them. I just went in there the other day, having not gone in for a few years, and it said I was trying to get pregnant. Well, I'm not trying to get pregnant, so I turned that off. I was like, I'm done; that is a finished thing for me.

But go in and have a look, because you might be surprised. And then you might also go, I might be interested in that, but I don't want to be served ads for it. One for me was credit cards. I was like, I don't want ads for credit cards, so go in and turn that off. Look in your social media as well; different channels have this, right?

Mark Smith (07:42)
Yeah. I know Microsoft has it in their systems too, for what type of advertising you want to receive. Really it's about going, okay, how do we personalize that experience for you? A couple of books come to mind as we start this discussion on critical thinking. The first is the most recent book by Yuval Noah Harari, which is Nexus.

I highly recommend you go and read that book and understand the implications of AI from a historic perspective, and how messaging is used to manipulate populations of people: the communication method, and then what is communicated.

It's a really sobering way to look at this next period we're moving into, which is built on the period we've come from: a lot of social media, and the question of who influences the news cycles. One of the things I always ask myself is, who wants me to believe this?

And once I ask who wants me to believe it, I can ask why they want me to believe it. That is the start of applying critical thinking. One of the checks I use on myself is this: if I find myself feeling that the world is sad, or going backwards, or going downhill, I check how much I've been consuming of messaging that perhaps wants me to feel fearful, to feel afraid, to feel that life's not that great.

Meg Smith (09:25)
Yeah, and I agree. Usually for me it's that I've been on Instagram too long, so I go through my cycle of breaking up with Instagram; we have a toxic relationship, me and Instagram. But then I also go for the massive antidote. What's the antidote? You go and find connection with people. And sometimes with people who hold different opinions than you, a different political perspective, you'll still find something in common, right? I was watching an interview with Jacinda Ardern, the former Prime Minister of New Zealand, who is apparently a polarizing figure, but I really love her and what she stands for. She was talking about the thing everyone can still agree on: that violent content should not be shared.

She was talking about the work done in response to the Christchurch terror attacks, and her point was that no matter where you sit on the political spectrum, we all agree that things should be done to stop violent content from being shared. So it's like: let's find the human connection, let's find the things we can agree on. We don't have to engage with the subjects we know will lead to conflict. We can engage on things that are

universal. I love that on your other pod, the Copilot Show, which I was listening to this morning on the way back from dropping our kids off, Mark interviews people from Microsoft and asks about food, family and fun, because those things are universal. Everyone's got to eat. Everyone's got a family, whether chosen, born into, or adopted into. And I think we could all use a little more fun where we can get it these days.

Mark Smith (11:08)
Love it. Another thing that's well worth watching, since we're giving some resources today: Hans and Ola Rosling, a Swedish father and son. Hans has since passed, but he has some great TED Talks up, six or seven years old now, around how not to be ignorant about the world we live in. He provides four rules of thumb for assessing anything news-related that comes into your feed. One of the examples he gives is poverty levels across the world, and infant deaths and things like that: what the news says versus what the reality actually is. It's a brilliant way to understand what is a misconception, and then how to apply four rules of thumb that let you go, okay, is this right?

without having to pull out your full research kit and start researching all the facts around the subject. One of the things I've found as I've delved into this topic is that when you hear something that gives you a check, you've got to decide: is this a low, medium or high risk of me getting it wrong, in terms of impact? Now, if it's low, you don't have to triangulate and decide,

is this relevant or whatever. If it's low, you might decide, okay, I'm not going to share it; it's not a big deal if I get this wrong. But if it's medium or high, you should step through levels of criticality in assessing the message, the information you're hearing. And why do I say message and information? Because I'm not just looking at it from an AI perspective, but also

a news cycle perspective and a social media perspective. Who wants to control the messaging, to put messaging out that influences me to do something or act in a certain way? And part of that is bias. For example, one thing I think about with bias: if I said, listen, I'm thinking of buying a red car, all of a sudden what I find

is that I start seeing red cars everywhere on the road. This is a form of bias: it's selective attention, and it's also confirmation bias. It's sometimes called the frequency illusion, or the Baader-Meinhof phenomenon, right? I've got those references on my screen here. But sometimes,

Meg Smith (13:48)
You just, I was gonna say you just knew that, right? That was just off the top of your head.

Mark Smith (13:56)
through social media and the news, messages are planted or provided. And it's not planted in a way that's subliminal or anything like that; they're actually making it quite clear. But they want you to focus on a certain thing, because that will take your attention as you focus more and more on it, while perhaps distracting you from something else. It's a form of bias we need to be cognitively aware of, to go, okay,

how do I build a muscle that gets me to automatically identify this? And just as I'm talking, another book comes to mind, by an author I really like whose name I can never remember when I want to say it. The title of the book is Blink. And

Meg Smith (14:46)
Malcolm Gladwell.

Mark Smith (14:47)
Malcolm Gladwell, that's it. Malcolm Gladwell wrote the book, and in it he talks about how your brain can make snap decisions. Of course it's coming from your subconscious, right? It's what you've learned over time that allows you to make these snap decisions: facial expressions, someone walking weirdly down the street, something that looks out of place in your mind.

Even if you haven't thought it through, you just get this instinct. You've often heard of the sixth sense. What is that sixth sense people can have? It's your subconscious firing a concept back at you. And I think the further we go into an AI world, the more we need to develop that sixth sense of going, hang on a second. It comes through a

couple of things. It's understanding what truth is. I remember hearing a story years ago about how expert identifiers of counterfeit money are trained: they didn't train on fake notes, they trained on the actual, correct note. So when they got a fake, no matter how it was faked,

they had trained their minds so thoroughly on the truth that almost instinctively they knew it was a fake, and only then would they go through a process of validating that. I think we need to develop our truth sense the same way, and then, as I said, our bullshit meter on the other side, to just go, hang on a second, this doesn't sit right with me.

Meg Smith (16:27)
You touched a little bit before on triangulation, or source triangulation. Let's explain that, because I think it's a really helpful concept in critical thinking when we're at that medium or high risk level of "let's check this out": when the impact of getting it wrong, of using information AI has generated for me, is high or medium, let's find at least a couple of sources. Source triangulation looks like this.

You get an output from AI. The example I often use is a visa. Say I'm a New Zealander traveling on a New Zealand passport to Mexico: do I need a visa before I go? If I ask generative AI that today, I don't know what it would output. Say it says yes or no.

The impact of believing AI from that one answer and getting it wrong is that I might end up, best case, on a return flight home, or worst case, detained in a country where I may not have consular assistance. So the impact is medium or high. Then I go, okay, here's my output; it says yes or no. Where do I go next? Where is a really good source?

For me, it's a New Zealand government travel site that says, you know, when you go to different countries, this is where you might need to get a visa. And then if I needed another proof point there, I would go to the government site of the country that I want to visit, because usually they also have that information available. And those are the two most reliable sources. Now, I might also have another potential third or fourth source that says, you know, someone who went there last month, a New Zealander traveling on a New Zealand passport went there last month.

But that's this triangulation. And it's actually a concept that I learned in university when I was having to provide sources for the points I was making in a paper. So these are the kinds of things where you go, okay, what is a valid source? Is it the government site? Is it a person you know? Think critically about what is a valid source and make sure you can find at least two sources that are valid and reputable. That will reinforce that first point before you make a decision based on it.
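The triage-then-triangulate habit Meg describes can be sketched as a tiny decision rule. This is only an illustration: the number of sources each risk tier requires, and the example source names, are our assumptions rather than anything prescribed in the episode.

```python
# A minimal sketch of the "risk check, then triangulate" habit described above.
# The per-tier source counts and the example source names are illustrative
# assumptions, not rules from the episode.

def sources_needed(risk: str) -> int:
    """Independent, reputable confirmations to find before acting on a claim."""
    return {"low": 0, "medium": 2, "high": 3}[risk]

def triangulated(risk: str, confirming_sources: list[str]) -> bool:
    """True if enough distinct sources confirm the claim for this risk level."""
    return len(set(confirming_sources)) >= sources_needed(risk)

# Visa example: high impact, so one AI answer alone is not enough.
print(triangulated("high", ["AI answer"]))  # False
print(triangulated("high", ["AI answer",
                            "NZ government travel site",
                            "destination country's government site"]))  # True
```

The point is not the code itself but that the decision rule is explicit: low-risk claims need no checking, while medium- and high-risk claims force you to go looking for independent confirmation before you act.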

Mark Smith (18:43)
In the WhatsApp group, somebody has come through with a question. Dan Barbara made this request: I would be interested in what techniques you could use to avoid bias

when using AI to support critical thinking. With its non-determinism, each time you run the same prompt, the answer could be sourced from different material, giving different biases each time. How can you ensure a broad spectrum of opinions in your results?

Meg Smith (19:11)
I love this question. We've touched on some options a little bit today. My first thought when I read it was, I don't think I'm even aware of all the biases there are. I have a bias towards biases. I definitely fall into confirmation bias a lot: when I make a decision, I only look at information that supports that it was the right decision. I'm also guilty of recency bias: the most recent piece of information I had about a particular thing is usually what's top of mind for me. So,

a great way to use AI is to have it teach you about the different biases that exist and the things you should be aware of. We covered this a little bit in our book, actually. Mark and I have a book coming out this month, which is called... why does my mind go blank when I'm trying to remember something important, like a book I wrote?

Mark Smith (19:54)
It's called Microsoft 365 Copilot Adoption.

Meg Smith (19:57)
Yes, it's a guide for business leaders and consultants. There's a chapter on critical thinking, and in it we discuss the different types of biases, because among the things we should be aware of is who built the AI tool you're using. Was that large language model built by researchers, who have a different perspective, say, than a commercial company building one, like, for example, Google?

They're building their large language model in that context, and they have different data access than other companies like OpenAI. So think about the inherent biases in the data, and also recognise that most of the large language models we use are predominantly trained on English sources. If you're looking to use one for generating content in another language, there's bias built into the model that you'll need to find ways to mitigate.

So there are ways we can use AI to help us learn about things like bias. The other thing we included in that chapter on critical thinking, I actually think we'll share in our WhatsApp group. If you want to join, we'll give you a couple of days, and then on Friday this week we'll drop the full prompt into the group, because it's quite lengthy. But it starts: act as my critical thinking coach. Your job is to challenge my thinking one question at a time so I can sharpen my reasoning and uncover blind spots.

Then you can input a scenario, and the prompt gives quite detailed instructions on how the AI should work. I did this by putting in a news article, and it asked me questions one at a time, so I got to really actively think through: what have I just read? What does that mean? Where are my spidey senses tingling a little, telling me I need to dig deeper before I accept this as truth?
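As a rough illustration of that pattern, the coach prompt can be wrapped in a small helper so any scenario (say, a pasted news article) drops into a fixed template. Only the first two sentences of the template are quoted from the episode; the one-question-at-a-time instruction and the helper itself are condensed assumptions, since the full prompt is shared in the hosts' WhatsApp group.

```python
# A sketch of the "critical thinking coach" prompt pattern. The first two
# sentences are quoted from the episode; the rest is an illustrative
# condensation of a longer prompt the hosts share in their WhatsApp group.

COACH_TEMPLATE = (
    "Act as my critical thinking coach. Your job is to challenge my thinking "
    "one question at a time so I can sharpen my reasoning and uncover blind "
    "spots. Ask exactly one question, wait for my answer, then ask the next.\n\n"
    "Here is the scenario to examine:\n{scenario}"
)

def coach_prompt(scenario: str) -> str:
    """Build a coach prompt around any text you want interrogated."""
    return COACH_TEMPLATE.format(scenario=scenario.strip())

# Paste an article, a claim, or a decision you're about to make as the scenario.
prompt = coach_prompt("Headline and body of a news article go here.")
```

You would then paste the resulting prompt into whatever chat assistant you use and answer its questions one at a time, which is what forces the active thinking Meg describes.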

Mark Smith (21:43)
Yeah, personally I find Perplexity really good for this. Because I've noticed that news sources are becoming so corrupted, or perhaps I'm just becoming more aware of it and they're not changing at all, I now have an automated routine that runs at 5 a.m. every morning and generates the top 20 news stories from around the world. I've got a whole bunch of parameters on how it defines these as news.

I find that really good at filtering what is really just noise, particularly coming through on social media, from what is actually newsworthy, and it can rank those. I even have a part of the prompt that asks, what would the impact of this be on me? Once again, for things happening on the other side of the world, probably very little.

So it allows me to balance my thinking there. And like that prompt Meg just mentioned, it's a great way to take any article, run it through, and say, help me identify the bias in this. I also love something Scott Hanselman said recently: when you work with AI and it outputs something for you,

go one step further and ask it to explain it, ask it to teach you something more about what you've just asked. That way you're building that strength, that muscle memory, turning yourself into an active critical thinker about everything coming into your life. Remember, if you want to join the WhatsApp group, we'll put the link up on screen now, and it'll be in the show notes.

If you're listening to this and want to see some of the different items we've shared, Spotify and YouTube are where you can get the full video experience of this podcast as well.

Meg Smith (23:44)
Thank you very much for listening, and thank you so much for your feedback. It honestly means the world, especially to me as I'm newer to this podcasting thing. I'm into it, it's fun, and the feedback makes it all worthwhile to know it's adding value for people who are listening. So thank you to everyone who's let us know it's helping, or getting them more involved with AI in their practice.

Mark Smith (24:06)
Have an epic week.