AI Is a Mirror: Better Prompts, Better Results

Hosts: Mark Smith, Meg Smith

🎙️ FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/731

Most people treat AI like a search box, then complain when the answers feel thin. In this episode, Meg and Mark reframe AI as a conversation you practise on purpose. From a “five prompts a day” habit to stronger Custom Instructions, they show how intention, examples, and pushback turn generic outputs into useful work. You’ll also hear a practical te reo Māori learning story, why the word “should” is a red flag in voice interactions, and a calm way to choose tools without FOMO.

Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz

What you’ll learn
- How to build prompting skill through small, daily reps, not one-off hacks.
- How to use conversation structure (goal, audience, examples, iteration) for higher-quality outputs.
- How to set effective Custom Instructions and ask the tool to interview you first.
- How to spot weak guidance in voice mode and push back for evidence.
- How to compare models sensibly and choose depth over distraction.

Highlights
“Five prompts a day is what you should be doing.”
“They don’t come to AI with a ‘let’s have a conversation.’”
“Instead of approaching it like a one-shot conversation or a search, I thought about approaching it as a conversation with a friend.”
“AI is a mirror of who you are.”
“If you’re a lazy prompter, you’ll get lazy answers.”
“Ask me all the questions that you think you need to ask before I’m ready to create that output.”
“The word ‘should’ is what’s important here.”
“It’s very easy to say a tool sucks when you don’t use it.”
“Different tools do different things.”
“It’s still on me to make a decision with the information that I have.”
“I’m going to do one a day.”
“How lucky are we to be able to take that approach?”

Mentioned
ChatGPT Custom Instructions / “Customize GPT”
Microsoft Copilot (Custom Instructions)
Claude (Anthropic)
Gemini (Google; on-device models referenced as “Nano Banana” in the episode)
Grok (xAI)
Toastmasters
ISO standards; AS/NZS standards (Australia/New Zealand)
“AI: The Mirror and the Tower,” Dr Oliver Hartwich (Aug 2025) https://www.nzinitiative.org.nz/reports-and-media/opinion/ai-the-mirror-and-the-tower/
kahu.code (te reo Māori learning tools) https://chatgpt.com/g/g-wXVoS6B3c-whakawhiti-reo-kahu-code?model=gpt-4o
Show WhatsApp community (listener Q&A) https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz
Video versions available on Spotify and YouTube

Connect with the hosts
Mark Smith:  
Blog: https://www.nz365guy.com 
LinkedIn: https://www.linkedin.com/in/nz365guy
Meg Smith:  
Blog: https://www.megsmith.nz 
LinkedIn: https://www.linkedin.com/in/megsmithnz 
 
Subscribe, rate, and share with someone who wants to be future ready. Drop your questions in the comments or the WhatsApp group.

Support the show

If you want to get in touch with me, you can message me on LinkedIn.

Thanks for listening 🚀 - Mark Smith

00:00 - Unlocking the Power of AI Conversations

02:36 - The Importance of Iteration in Learning AI

05:39 - Cultural Connections Through AI

08:10 - The Role of Conversation in AI Engagement

11:22 - Customizing AI for Personal Use

14:01 - Shifting Mindsets: From Search to Conversation

16:46 - Navigating AI Responses: The Importance of Context

19:39 - Integrating AI with Human Interaction

22:29 - Choosing the Right AI Tools

25:30 - Deep vs. Broad Engagement with AI Tools

Meg Smith (00:12)
Welcome to the AI Advantage, where we're all about the skills to thrive in the intelligence age. I'm co-host Meg Smith and I'm here with Mark Smith. Today's topic is all about conversation and prompting. I'm really excited to dig into this one because it's one of the skills I have really practiced over the last, I mean, is it two years now? From first starting to have a play with ChatGPT. And it was the thing that became a bit of an unlock for me in my journey with AI. And I have to credit you, Mark. One of the things that helped me get over the hump was that you set yourself a five-prompts-a-day goal. Do you want to talk a little bit about that? Because I just copied it, so it's your idea.

Mark Smith (00:56)
Yeah, so that kind of started from: how do you find your way with a new piece of technology just by using it? You don't have to be fancy, you don't have to try and invent something new, just try to incorporate it into what you're doing, because out of that I think iteration comes naturally. It's not something we have to think too much about. And so, yeah, I don't know how long ago it was that I suggested that five prompts a day is what you should be doing. That was well over a year ago. For me, in my work day, I would probably be using it five times an hour now from a prompting perspective. AI is, I feel, definitely augmented into my life, and I continue to reap benefits from that augmentation. Things continue to grow, and I continue to iterate and learn more all the time from it.

Meg Smith (01:52)
I found a similar thing. Rather than the kind of assumption we hear sometimes from people who aren't using AI, where they say it will mean you stop thinking. You know, that's something we've seen in the media lately, and we'll talk a bit more in a future episode on critical thinking about the MIT study. People do that thing, right, where you read a headline and jump to a conclusion. The headline says: we've done the study and it shows that students who use ChatGPT aren't thinking as much. An oversimplification, not what the researchers said at all. But if you already believe that a little bit, you're probably going to see that and go, oh my God, I kind of already thought that, I don't really have time to get into this, this is just another reason why I shouldn't.

And it's such a shame, because my personal journey, yeah, that went from, okay... I don't think I started with five, to be honest, Mark. I think I just set one, because usually that's the way it goes, right? You're a thousand miles an hour and I'm a little bit slower. So I just said to myself, I'm going to do one a day, so that I get into the habit of it. And now, similar to you, it is habit across the different ways that I work. But it's made my brain work harder.

I feel like I can joke that I can kind of feel my neural network sometimes working super hard. I'll give you an example. In Aotearoa New Zealand this week, it's Te Wiki o te Reo Māori, which is Māori Language Week. This is really close to my heart because my family connection is to a tribe in the north, Ngāpuhi, and also in the centre of the North Island, Tainui. And my grandmother did not speak

Māori; she did not speak her language. She could, but she was so socially conditioned that it was not valuable, that it was better to speak English. And there was a big activist movement in the 70s in New Zealand that led to Māori being recognised as an official language of New Zealand. And it meant that while my dad also didn't learn Māori in school, or wasn't really taught any of it, when I went through the school system, right from when I first started school, I had access to learning songs, to learn waiata, to learn haka, this identity, this connection that has only grown. And now our tamariki, our children, are coming home from their daycare, which is not a Māori-language daycare, it's a mainstream daycare in New Zealand, and they have way more reo, way more language, than I had.

And now, as an adult learner trying to demonstrate and lead for our kids, Mark and I both, one of the ways that we're learning is with the help of AI. There's a really cool company in New Zealand called kahu.code, a father-and-daughter company who have built custom GPTs in ChatGPT to help you learn te reo Māori. And as someone who has learnt at school, learnt at university, learnt in groups as an adult learner, this is just another way to augment my learning, right? I'm going in there because, for me, my goal is to learn. My goal is to be able to understand and share more of my culture and my language, and to also teach my children and learn from them, keep up with them, to be honest. So when I'm working with AI in these tools, they have one that's a teacher, Kaiāwhina Reo,

and one that's a straight translator. So you put in English, it'll give you Māori; if you put in Māori, it'll give you English. I use the teacher because I want to learn. So now, before I go to, say, reply to someone's comment on LinkedIn or write a birthday message, I'll make an attempt myself. And then I use the two of those tools to come to something that is accurate, that sounds like me, that is genuine.

But also, because I've learned in the process, the next time I come to that same task, I'm coming at it from a different place. So yeah, that was just the example that came to mind when I saw this accusation that it's going to mean we stop thinking if we use AI.

Mark Smith (05:53)
And this is why conversation, I feel, is so important, because there's a lot of nuance that comes out of conversation; conversation is not just question and answer. One of my challenges for some time is that people look at the chat interface of their favorite AI tool, and it looks very similar to the Google search interface. And so people come at it with: I'm going to ask it a question,

and I want an answer. They don't come to AI with a "let's have a conversation." And conversation, as you say, is interesting, because one of the things I learned, you know, up north when you were doing an AI event, is that the name for flax in New Zealand, which is a plant, is different in different parts of New Zealand, because of the nuance of language that comes out through conversation. So if somebody uses, is it harakeke?

Meg Smith (06:52)
Yeah,

Mark Smith (06:53)
and then what's the alternative? So that gives context about where that person probably originated from, or what part of New Zealand. So you see, just out of conversation, all of a sudden we get to understand so much more than by asking the question directly. And what I find, and there's a great article, feel free to look it up, called "AI: The Mirror and the Tower."

Meg Smith (06:55)
Kōrari

Mark Smith (07:20)
It was written by Dr Oliver Hartwich and published around August 2025. What I found so interesting is that he works with a lot of scholarly people, you know, very academic, very intensive research-based academics, and he is involved in publications. One of the things he said is that an academic would come and provide a written piece for publishing, and he'd go, I can't publish this. And they were like, why not? And he was like, well, it's so academically written, there's going to be no connection with the audience, the target audience of the publication, and it is about connection. And then one of the excuses he would get from them is, well, that's the way I write. Well,

that can be fine, that's the way you write. But if you're writing for this audience, you've got to write so the audience understands, creates connection, and gets insight. There's so much in this one article; that's why I highly recommend you search it, look it up. Because in there he talks about AI as a mirror. And in the commentary that's been online in recent weeks, where AI is going to make you dumber,

it is such a lot of rubbish. AI is a mirror of who you are. If you're inquisitive, a dig-deeper, uncover-all-the-angles kind of person, it's going to lean in and give you that detail. As he says there, if you're a lazy prompter, you'll get lazy answers. And this is why I was saying practice is so important: it's in the conversation of practice that you get better. This is why for years I've recommended, to anybody that's asked in my mentoring programs, going to Toastmasters. Why is Toastmasters so good for language? Not because you learn a formula, but because you have an environment to practice, and you practice over and over again, and in that practice you realize how to engage with an audience. There's been a lot of research done in this space: when somebody is giving a speech, and researchers have put brain monitors on the speaker and the audience, a synchronization happens between the speaker's brain and the receivers' brains. This is why TED Talks have been so amazing around the world; they allow you to transfer ideas and concepts. And so I go back to why practice is so important:

out of the practice comes this kind of connection, even though AI, you know, is a machine. What it is doing is creating this amazing connection: you understand how to work with it in a better way, and it understands you more and more. And one of the suggestions I have is that you should understand system prompts.

In other words, give it the context of who you are. And, as Meg has talked about before, get it to ask you questions and actually build that out. For example, in most AI tools you can go into the settings, and it'll ask: how would you like me to respond to you? Here's some contextual information about me. Well, don't just type in whatever you think there. Actually start with your AI tool and say, hey,

I'm going to fill out this core settings prompt. Ask me all the questions that you think you need to ask before I'm ready to create that output. And put that in as a system prompt, the way I like to be responded to. And: I don't want you to be agreeable with me, for example; I want you to challenge my thinking. At the moment, I always have my responses come back with a TL;DR

at the top, which gives me a one-sentence output. Then it gives me an executive-summary view of how it's thinking. And then it goes into the detailed research breakdown, reference files, et cetera, that I might want to jump into. I wasn't doing that two years ago. I wasn't writing the JSON prompts that I write now. It's been a journey of iteration that's got me to this point of conversation, allowing me to drill in and get that level of accuracy.
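The "ask me all the questions first, then respond TL;DR-first" setup Mark describes can be sketched in code. This is a minimal, hypothetical illustration, not anything from the episode: the function name, fields, and wording are all assumptions about one way you might compose that kind of custom-instructions text before pasting it into a tool's settings.

```python
def build_custom_instructions(about_me: str, preferences: list[str]) -> str:
    """Compose a custom-instructions / system-prompt block that asks the
    model to interview you before producing output, to push back rather
    than agree, and to structure every answer TL;DR-first."""
    lines = [
        "Context about me:",
        about_me,
        "",
        "How to respond:",
        "- Before creating any substantial output, ask me every question",
        "  you think you need answered first.",
        "- Do not simply agree with me; challenge my thinking.",
        "- Structure answers as: a one-sentence TL;DR, then an executive",
        "  summary, then the detailed breakdown with references.",
    ]
    # Append any extra per-person preferences as further bullet points.
    lines += [f"- {p}" for p in preferences]
    return "\n".join(lines)

prompt = build_custom_instructions(
    about_me="I'm in New Zealand; use British English; avoid hyperbole.",
    preferences=["Flag any answer that relies on guesswork as uncertain."],
)
print(prompt)
```

The output is plain text, so the same sketch works whether you paste the result into ChatGPT's Customize GPT settings, Copilot's custom instructions, or an API's system message.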

Meg Smith (11:52)
I just set my custom instructions in Copilot the other day. The feature has been there for a little while, but I kind of forgot to go back and do it when it was first announced. I think it was announced and then rolled out to accounts slowly. So yeah, that's what I did. I went in, and I did something that I have started to do whenever I want a prompt written:

I asked AI to do it. So I went to a few different ones: I used Claude and ChatGPT and Copilot and asked the same thing. Hey, I'm setting my custom instructions in Copilot; what's the best practice in terms of structure? And they draw on different things. I usually end up using a combination of the outputs, or finding one that I like better and continuing the conversation. But when I was first using ChatGPT, I was shifting from the experience I was having, which was opening up the window and thinking about it the same way I did with search, where we've learned to ask the right question or put the right keywords in to get the answers we're looking for. I was starting with that. The two things that were game changers for me, in terms of shifting from "this is annoying and kind of dumb" to "this is really valuable and actually helping me get stuff done," were these.

First, setting my persistent settings around context, it's called Customize GPT in ChatGPT. So: I'm in New Zealand, use British English, don't use hyperbole, here's my work and context, right? That meant that rather than drawing on all the things in the world the large language model was trained on, it was already starting to narrow down to things that were relevant to me.

So that made a big difference. And the other thing was my mindset, how I approached it. Instead of approaching it like a one-shot exchange or a search, I thought about approaching it as a conversation with a friend. And if we think about conversations, we've maybe got a bit lazy on them too, because we're messaging, we're using short DMs, we are

spending less time, depending on your phase of life, depending on where you're at, really intentionally engaging in conversation. So I thought, okay, what's my intention? When I approach this as a conversation, I'm going to be really clear on why I am doing this and what the goal is. And then I'm going to give it the right attention. Not necessarily a significant amount of time, but the right amount of time,

depending on what I was trying to do, to really sit down and go, okay, I've got time to go back and forth on this. And the example I give, which still makes me smile because I think it's hilarious: imagine you're talking to a friend and you say, hey Mark, I'm going to be in London tomorrow and I'd love to know your favourite, I don't know, sushi restaurant. I don't know which one you'd say, Sushisamba maybe. But say I say that, and then you reply and go, yeah, there's a really great Indian

takeaway. I wouldn't just turn around and go, no, you've not got it, and walk away and never talk to you again. I would put the onus on me to say, no, I haven't given you the right information, or I haven't been understood. Therefore I'm going to make another attempt: more information, an example, a specific thing I might have wanted to get out of it. And so for me, that reframe meant

I was less likely to get frustrated at the first answer and close the browser window, and instead look at it again and go, okay, have I given an example of what good looks like? Have I given the audience I'm writing this for, the purpose behind it? So those two things were pivotal at that point, to go from "I don't understand AI, it's a bit stupid, I'm so much smarter," because, let's be honest, that's kind of what we think, right? But it's the wrong comparison.

Mark Smith (15:55)
This is why conversation is so important. I'll give you an example. Years ago, I used to work in the medical industry, and we would have standards that things needed to be assessed against. Just like you get ISO standards, in Australia and New Zealand we get what are called AS/NZS standards, either Australian standards or New Zealand standards, that something needed to comply with. And the nuance of the words was so important. If the standard said "must," we knew there was no "close enough"; it must. The word "should" was a little different in what it actually meant. And what I've found all these years later is that it's coming out in AI. One of the things I like to do more and more these days is use voice as the interface. So I talk to AI, and particularly in ChatGPT I will go into voice mode and turn my camera on. Just this weekend, I was in my greenhouse again, checking whether the seeds I'd planted, which were now about 10 centimeters tall, matched the plant variety. So I flicked the camera on, said, is this it? And had this conversation.

One of the things I've noticed in voice, which I find extremely frustrating, is that, I would say, it is lazy in how it responds to you. For example, the other day there was a setting that I needed to fix in a piece of software. So I gave it explicit instructions: this is the software I'm using, I'm trying to change this. And it said, that's very straightforward, you need to go and do this, and you should see on the right-hand side...

And there's the clincher. It said, "you should see." In other words, it didn't say "you will see"; it said "should." And straight away, I knew it was making shit up. I knew it wasn't actually genuine. I said, listen, stop bullshitting me. That's exactly what I said to it. I said, go and find the explicit instructions. Guess what? It came back and said: after researching, I found that there is no way you can do that within the software. That is not a feature.

Meg Smith (17:45)
There's the kicker.

Mark Smith (18:10)
And so I just find that anytime I'm using voice, I've got to be really on my guard about how it's answering, because it seems to go: you've asked something, it's kind of logical, therefore there should be a logical answer. But the word "should" is what's important here. And it's not just AI; people design bad software all the time. Another example is trying to stop a subscription. I don't know if you've found this with SaaS solutions: they make unsubscribing extremely hard. They bury it down in the navigation, they hide it away, and their help files describe the old way to unsubscribe when things have since changed. In this case, I found out there was no way to unsubscribe apart from emailing their tech support and asking. They had purposefully made it extremely hard for me to exit. What's crazy for these organizations, and I don't know if they look at the data, is that I find it so disingenuous that I decide never to return to their product again. I might not need it for a period of time, my circumstances have changed, but now you've made me angry. Now you've made it so difficult to leave that I'm pretty much going to give you the middle finger and not come back, and I'm going to talk smack about your software from now on. So you've got to be really careful. That's a side thing.

Meg Smith (19:21)
Yeah. Yeah, a little rant aside.

Mark Smith (19:33)
But see, conversation is so important and learning the conversation with AI is so important.

Meg Smith (19:39)
But I love that perspective: when you're talking to someone and they're talking smack in the conversation, you leave room for the fact that, hey, you know that person. I know you're prone to hyperbole. So, with that context, you don't have to take exactly what they say and go and change direction or completely change your point of view because of it. It's great to be curious. It's great to have room for that.

But in the same way with AI, just like you demonstrated in the example where you had to push back, it's not all-knowing. So when we approach our conversation, we shouldn't approach it like this thing is all-knowing, or like a real person or relationship. It's just helping us surface things that we wouldn't necessarily have the scope of experience or time for, if it's research; this is on a personal level anyway. I just think,

when you were talking before about conversation as well, the reason I like the parallel to the skills we're used to using to relate to people is because we have much more allowance for that. I can give you more grace for not always being a hundred percent accurate. So why don't we approach conversations with AI like that? It's not to say, oh, AI might be wrong sometimes and that's okay.

It's to say: it's on me what I do with that information after that conversation. It's still on me to make a decision with the information that I have, or to go deeper, ask more questions, go and talk to a person. That's the other reason why I love conversation as a skill: it makes us better when we work with AI, but it also makes us better when we work with each other. And I'm now often maybe

starting a conversation with a person, then going to AI for more information and then back to the person, or the other way around: saying, I was having this prompt conversation, I was trying to do this thing, and AI suggested this. Hey, that doesn't sound right. I might go to my governance mentor and be like, I was just riffing on the scenario and it suggested this, but hey, in your experience, do you actually think that's relevant at the board table?

You know, like we can't forget that just because the technology is there to help us, we don't have to stop doing the other ways that it used to be done as well. Like it can be a mix of both. How lucky are we to be able to take that approach?

Mark Smith (21:58)
I like it. We've got some questions. We have that WhatsApp group; feel free to join it and ask your questions for the show. But what were the ones that came in this week, Meg?

Meg Smith (22:07)
Cool, so there's a great question here that I've heard a few times. It was from Danny: what would be the recommended large language models to use for which tasks? For example, he mostly uses ChatGPT and Microsoft Copilot, but he's played with others like Gemini, Claude, and Grok. Are some better suited than others for, say, consulting work? That was his example. What do you reckon, Mark?

Mark Smith (22:29)
So I know we had a response from somebody in the group about that, and then I'll jump into what I recommend.

Meg Smith (22:36)
Ah yes, so they actually said: I tend to run the same prompt through both and see what each comes up with, and then decide which to go deeper with. And I love that approach. As I mentioned, I've been using that same approach myself.

Mark Smith (22:52)
Yeah, very good, very good. For me, listen, there's not one tool that does everything, right? I have a workshop, and in it I have a hammer, and hammers are really good at putting nails into things. It's also really good at hitting my big finger, and really good at removing nails from things. But if I need a hole drilled, I'm going to pick up the tool that's a drill, and I'm going to drill the hole that I need, right? I'm going to choose the size that I need. I'm going to use pliers differently. And I think we've got to think of AI the same way: there's going to be a range of AIs, and different tools will do things differently. So I'm always trying to experiment with new tools, because I'm going to ask: is this going to become part of the set of tools that I keep and use over and over again? Sometimes lately I've even started subscribing

to tools, and then when I don't use them, I unsubscribe. If I know my work for the next couple of months is not going to have a dependency on something, I'm not going to subscribe to it. But, you know, lately, for example, I get a lot of people who are very opinionated about tools based on who the creator of the tool is, and the common one is Grok, right, from xAI: well, I don't like Elon Musk, so I'm not going to use that tool. Well,

you know, I don't care who invented the ruler. I don't know who they were, I don't know what their opinions on life were or whether they're public knowledge, and I don't give a rip; I find the ruler very helpful. So I'm not going to write off a tool because I don't like its inventor. And so for me, I use Grok. I try it out in different scenarios to see whether it works for me. I use Claude from Anthropic, and, you know,

Meg Smith (24:36)
And Google.

Mark Smith (24:36)
I'm actually leaning into the tooling provided by Google more and more, because, whether or not you've seen the latest things like Nano Banana, what they're doing in the graphical representation space is absolutely phenomenal. So I don't get tied up with which tool is best. Different tools do different things. And, you know, for a long time I actually didn't like Copilot, M365 Copilot from Microsoft.

And so I didn't use it, and it's very easy to say a tool sucks when you don't use it. And I'd say, well, perhaps it's more on you, because you've not become skilled at that tool, so you aren't getting the kind of outputs you're expecting. You know, there's a surgeon who is skillful with the scalpel, and there are probably surgeons who are just learning and not so skillful. I prefer the skillful person, right? I can take advice from the person who has learned to master that scalpel, or the various medical devices they work with. And so I think of AI in a similar vein: different tools. I want to learn them all and know which tool is going to help me in the situation I'm in.

Meg Smith (25:41)
Yeah, I have a slightly different approach on that, which is just to say that you are someone who wants to learn all the tools and try all the different things. What I hear a lot of is: this is a magic tool, this new tool is a magic tool and you're an idiot if you're not trying it; this tool is my only magic tool and nothing else can do that. And I think what that does is

Mark Smith (25:57)
Yeah, valid.

Meg Smith (26:05)
put even more fear and anxiety into people. And I saw this in marketing technology when I was working in that space. The customers I worked with, the CMOs, the chief marketing officers, would have sales reps from ten different tools come to them every month and say: this tool is magic, you're wasting time, you're missing out. And it's the FOMO, the fear of missing out, that would then drive that behavior. And at what cost? Because yes, the tool might get you 10% more.

But if you're distracted, you're still hovering up here across all the tools, instead of going deep, not just on how you use a tool, but on what that tool has learned about you and how you work. So I'm taking the approach of: I'm going to go deep on a few at a time and rely on Mark's expertise from experimenting with others. Or take advice, like when friends have said, oh, I'm using this particular tool. I use Claude a bit more because I like the language it uses; it's more natural than the others, which take a bit more tweaking and training to get to that point. But I also found that I had to be really careful about some hallucinations. That was a couple of months ago, so I know they've released a new model since. But I think we're asking the wrong question when we ask what's the best tool or what's the right tool. I'm more interested, for myself, in the framework by which I make the decision about how to introduce a new tool, or the depth to which I want my experience with a tool to go.

Mark Smith (27:38)
I like it. If you've been listening to this podcast on your favourite podcast app, there's a video version as well as the audio, so you can watch it on Spotify and/or YouTube. Remember to join the WhatsApp group; the details are in the show notes. We'd love to hear your thoughts and questions. In fact, if you've found a new tool, I would love to hear about it in the WhatsApp group, and how you're getting benefits from it. So feel free to join us there. Meg, what's coming up in the next episode?

Meg Smith (28:06)
The next episode is going to be a good one. It's all about AI augmentation, particularly thinking about research, drafting, and analysis, which are three ways we can augment our knowledge and experience with AI. So if you've got questions, or even ideas you want to share about ways you've been using AI tools (honestly, I think we might need a drinking game for the word "tools"; depending on the time zone you're listening in, take that at your own risk), if you've got great advice or questions around research, drafting, or analysis, we'd love to hear it. Drop it in the WhatsApp group. And thank you so much for listening and for your lovely feedback. Ka kite anō, which means see you again.