

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
This episode features Martin Miller and explores the practical reality of using AI inside organisations. The conversation cuts through hype to focus on where AI genuinely adds leverage, where it breaks down, and why subject matter expertise, data quality, and critical thinking still matter. From AI‑generated code and agent teams to data governance, outages, and deepfakes, the discussion frames AI as a powerful amplifier rather than a replacement. The core message is clear: AI rewards clarity of intent, strong foundations, and human judgement.
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/815
🎙️ What you’ll learn
- How AI‑generated code fails without clear problem definition
- Why data quality and governance are prerequisites for AI success
- When AI amplifies expertise versus creating costly mistakes
- How organisations misuse AI by chasing headcount reduction
- Why critical thinking weakens when AI replaces core learning
✅ Highlights
- “It’s not about can you write a for loop. It’s about can you define what you’re trying to accomplish.”
- “The more you know, the more you can use this tool.”
- “Nothing like making a decision off of bad data.”
- “AI is a power tool in the box of tools, and it’s a super tool.”
- “It will confidently deliver you results that may or may not be what you need.”
- “Headcount reduction and creating solutions are mutually exclusive.”
- “If you can’t manage your data with or without humans, you need to know that.”
- “You can generate the anomaly of the moment using AI all day long.”
- “There’s no Brent to call when your AI isn’t working.”
- “People that used AI couldn’t remember anything about the paper.”
🧰 Mentioned
- Anthropic: https://www.anthropic.com/
- Claude: https://claude.ai/
- OpenAI: https://openai.com/
- ChatGPT: https://chatgpt.com/
- Gemini: https://gemini.google.com/
- DevOps: https://www.atlassian.com/devops
- The Phoenix Project: https://itrevolution.com/product/the-phoenix-project/
- Foursquare: https://foursquare.com/
✅Keywords
ai strategy, ai coding, data governance, critical thinking, ai agents, devops, site reliability engineering, ai outages, deepfakes, prompt engineering, subject matter expertise, enterprise ai
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith
03:41 - The AI Reckoning Has Nothing to Do With Code
05:18 - Why AI Sounds Confident Even When It’s Wrong
10:42 - The Most Dangerous AI Strategy CEOs Are Pitching Right Now
12:54 - Data Is the Model: Fire the Team, Break the System
15:39 - AI Is a Power Tool, Not a Brain Replacement
18:14 - When AI “Fixes” the Bug and Creates a Bigger One
31:54 - The Hidden Cost of Letting AI Do the Thinking
00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Welcome back to the AI Unfiltered show. Today, I'm joined by Martin, who comes from California in the United States. Links to everything we discuss today will be in the show notes for this episode. Martin, welcome. Thank you, Mark.
00:00:56 Martin Miller
I'm glad to be here. Let's have some fun.
00:00:59 Mark Smith
Let's have some fun. I'm excited to talk to you based on our pre-call. You're an interesting person, and I love having interesting people on the show. But before we start, food, family, and fun, what do they mean to you?
00:01:10 Martin Miller
Wow, you hit me at an awkward time, but let me go for it. So, just the other day, I made a paella, the first of the season. For me, that was fun. And to be honest, it's a lot of prep, thinking about what am I going to put in this thing? And we went for it. Then I posted a picture on social media, of all places, a single picture saying, first paella of the season. It's not that I don't do social media, but I thought this would be a boring post. And I got nothing but people hitting me from all angles, people that don't even know me very well asking me for recipes. So that was fun. Now, the other F, so we're going through the F words, right? Yeah, so another F word that's polite to say is family. Well, I do have four kids. For the visually impaired that don't see what's behind me, I have a picture of my four kids. They're not little, as you can see. That's part of the family, and I'm happily married. Hope to remain so after this podcast. And then, so we talked about food, but I said fun. Now we're going to talk about real fun. I'm also a real-life Vesparado. That means I ride on two wheels a lot. And I love it. I love the lifestyle, and I do go faster than 50, 60 mph on a regular basis. In fact, my scooter, I won't say what the top-end speed is, but it's faster than average. How's that?
00:02:37 Mark Smith
Nice. That's so cool. So cool. So interesting. In our pre-call, we talked about the reckoning. What do you mean by the reckoning, particularly around where AI-written code starts breaking down for people? We're seeing, I think, that Anthropic, for example, just came out and said that 4.6, which we're currently on, was, I think they said, 100% written by the previous model. And we are seeing this rapidly. I just watched a podcast with the head of, I think, Claude Code, the coding tool at Anthropic. And they said they hadn't written code, I think, since September last year, and they're the lead developer on it. And they saw a future within the next six to nine months where code written by humans is done, as in it's no longer the efficient way to write. What are your thoughts?
00:03:41 Martin Miller
Well, you know what I find humorous is it's not so much about the skill of can you write a for loop, a conditional statement, a data lookup. It's not so much about that. It's about can you define what you're trying to accomplish; that's really what they're getting at. And can you define it in such a way, with confidence as a subject matter expert, to have it regurgitate back and ask you and challenge you on the path of delivery? So what they're telling you is based off of well-recognized, well-known patterns, code patterns, deployment patterns. And there are books and books on algorithms and patterns. So it shouldn't surprise anybody that we can do code generation using verbal language. Now, step back for a second. This is always humorous to think about. Ask four people to write a haiku, which is a Japanese form of poetry, right? Seventeen syllables. Give them a topic for that haiku and have them write it. And you'll come out with four different versions, probably, if they don't talk to each other, and it might be very entertaining to share. Well, you can also take that same prompt you gave the humans to the machine and ask it to come back with a haiku. And it's going to come back with multiple ways of doing a haiku. Core issues in the reckoning, I want to get to pretty quickly, so let me get to the point. Code generation, origination: straightforward, pretty straightforward, with high definition, high precision of accuracy. Where things get to the reckoning is when you've got a lemon twist, or a lack of subject matter expertise to know that the machine is confidently giving you probably not the best answer. I'm trying to avoid the words wrong or incorrect, because to the novice, it doesn't look wrong or incorrect. To the trained eye, it goes: that may cost me a few more pounds or dollars or pesos than I need to spend per month if it's going to be a cloud deploy.
Although it has information on those models and how they financially work out too, and does generally get those for the most part, right? So the more you know, the more you can use this tool. But you can step back as a total novice and do code generation. I get that. But you know what? You could have gone into a random number generator and come up with a random number. So it doesn't surprise me.
00:06:13 Mark Smith
Yeah. And what I find interesting, like, you're so right, because I've been building my own software team that are agents, at least eight agents involved in it. And I've never been a developer, but I've had over 50 developers working for me at a time. I've had teams of developers that work for me, and I understand software development. I understand the different ways of testing software, application lifecycle management. I know what I'm kind of looking for. What surprised me is the level of sophistication I was then able to build based on that knowledge and based on, I suppose, giving really robust context and really robust outcomes. In being able to do that, even though I am not a software developer, I'm pleasantly surprised. Because, by the way, I'm not trying to build an enterprise application, right? I am solving software problems in my business that I feel only I have, and I don't need an off-the-shelf solution to solve them nowadays.
00:07:17 Martin Miller
And for those types of purposes, I think it's a good application and a good usage. It comes down to: how is the runtime of this going to work? Is it going to be deployed somewhere, or are you just going to stay on a prompt all day long? I mean, you have to ask yourself, what's my deployment model? What's my usage model, my access model? And you can have your army of agent software developers, but they may not all get along. Isn't this like real people? So just kind of walk that line of cultural divide among the agents, so to speak.
00:07:54 Mark Smith
Oh, you know what? I've really treated them in a way that, just to give you an example of how I liken it to humans, I gave each a full, detailed role description, you know, a handoff path, and very tight guardrails. One of my big things is you can never mark your own homework. You can never say it's good code if you wrote the code, right? A third party has to validate that. And in my case, I'm even using a totally different model provider to do the validation, so there's none of that. What I'm interested in is your role as CTO. What do you see as the opportunity landscape for organizations right now? Do you see clear opportunities, and I'm talking about in the corporate world, being missed or being misguided? And I really want to say, I'm not into the jazz hands of AI, you know, oh, it's amazing, it's going to solve everything. I'm into the practical skills, the outcome-based things that move the needle for an organization and make it, you know, more profitable, make it a better place to work, all those things that would define a great company. And of course, I'm not on the other end of the spectrum either, which is the sky is falling, this is going to be the end of the world as we know it. So what are you seeing?
00:09:24 Martin Miller
Oh boy, there's so much to be seen. Well, you've got pre-existing conditions that you may be working through. So maybe there's a company, name a fictitious company. Give me a company name and let's run with it.
00:09:40 Mark Smith
Acme from Roadrunner.
00:09:43 Martin Miller
Oh, Acme from Warner Brothers as in Wile E. Coyote Acme.
00:09:47 Mark Smith
Yeah.
00:09:49 Martin Miller
That's my favorite, favorite company to reference. Thank you. We didn't even schedule to talk about Acme.
00:09:56 Mark Smith
Yeah.
00:09:56 Martin Miller
So consider me super genius Wile E. Coyote in the flesh, as a human. So imagine what could go wrong if I decide I'm going to replace my full staff of data analysts, business analysts, business intelligence, product marketing, product management, software development, interim program management, hiring, resource management, all the frontline people. Just imagine, I'm just going to replace them all with a bot. Could I do that? And does that make sense? Could I do that is a binary yes, I could. Is it a smart decision? I will leave it to Wile E. Coyote, the super genius, to say, let's try it and find out. But in reality, it's going to be a terrible decision, because you're not going to be solving the problem that you're trying to solve, which is: am I trying to make money, generate more efficiency? Am I trying to save money? How am I going to do that? If I'm trying to reduce headcount, that's a different problem. If I'm trying to create a solution, that's what I want to focus on. So I call those mutually exclusive, headcount reduction and creating solutions. And depending on which way you want to go, you can really drive the car, or you'll fly the plane into the side of a mountain, right? Literally. So that's what you're seeing in the reckoning. You're seeing CEOs, or wannabe startup CEOs and wannabe entrepreneurs. You know, they've got this idea. It's dot-com, you know, what is it, 5.0? Let's just put it out there, because I'm done with Web3. I want to skip a phase.
00:11:49 Mark Smith
I never got on it.
00:11:53 Martin Miller
So let's just say, let's skip the notion of 3G and 4G LTE networks. We're going to go to infinity networks and just, you know, bypass all the intermediate steps of infrastructure and roll out. So I've got this store of data, and what could go wrong if I decide to dump all the people that know how to touch the data, move the data, save the data, restore the data, fix the data? Start with the data. If you can't manage that data, with or without humans, you need to know that. So that's disaster #1. Nothing like making a decision off of bad data.
00:12:29 Mark Smith
Exactly.
00:12:30 Martin Miller
So let's start there. And by the way, what is AI built on? Yeah, you could talk about a model, but what is a model? A model is code, or code segments and code organization, plus tables, or referential tables, referential data, loosely coupled data. It's code plus data, to put it very simply. And you really can't separate the two and have a complete model. It has to have both parts of the shell together. Now, there are several layers of models and modeling types and different methodologies for how you can qualify your data and such. But the minute you say, I'm going to let go of my whole data team to save money, you kind of broke the model. So you have to figure it out; what's more efficient for my data team might be a good example to go after. And that would be a great use of time and energy, cost savings, probably operational efficiency: looking at how is my data managed? Can I improve my data management? Because if I can improve my data management at the Acme Corporation, I guarantee the next problem will be, how do I get this data better managed, governed, access-controlled, delivered to the people that need it for their business intelligence? And then the Acme crack team, with Wile E. as the chief technical coyote, of course, will be out there saying, the hell with BI. We're going to roll our own. We're going to do our own on-demand, you know, prompt engine to generate what we want to generate. I can see how many, what is it, mallets, anvils. What does Wile E. Coyote love? He loves rockets. How many rockets can I manufacture? And where are my suppliers? How can I use this data with my suppliers to build these rockets more efficiently, with sourcing from global economies implied in here? So yeah, let's take this rocket ship and see what Wile E. Coyote can build, and see if it launches, or does it go straight up in the air and come straight down and crash?
00:14:39 Mark Smith
Yeah, it's so right. And one of the things you covered there is that you've got to keep that expertise in the organization, the people that know how to touch the various parts of your system. If those experts are also developing their skills in AI, does that create a massive advantage for the organization? So is AI really an amplifier of expertise in what you're seeing?
00:15:05 Martin Miller
I would say it's a power tool in the box of tools, and it's a super tool. Under the wrong use, you don't use an impact driver to turn a little screw that takes less than a quarter of a foot-pound to turn, because if you do, you're going to rip it right through whatever you're putting it in. It's overkill. There's a place for it, and the place is increasing. And you can adjust how much pressure that AI, so to speak, can leverage or use. You can control that. I think it's a great tool in the tool shed. It doesn't replace thinking about how you use it. It doesn't replace challenging: is it delivering the right results? It will confidently deliver you results that may or may not be what you need in the end. And you should get into the place of, can I challenge anything I get out of it, if I'm using prompted AI specifically, to get better results? So don't think a single query or a single PRD, a product requirements document, is going to get you the perfect output. Think about it a little longer, iterate on it. Go ahead, put your PRD out there, put it into your favorite coding engine, define in the PRD how you want this thing to deploy, how it's going to get data, how it's going to manage it, what's the user experience, and let it rip. Sure, you'll get a first cut, maybe a good second and third cut. All right, so you bring it live. Great, awesome. And this is where the reckoning and the understanding really happen. So let's tell a real-life story without naming anything other than Acme. So Acme delivers patient health care in this case. And in this case, Wile E. Coyote, the chief technical, you know, whatever he is, is out there, and he needs to have his team make some anomaly fixes. There are complaints from users about some anomalies. You know what anomalies are: the late-1970s word to replace the word bug, right? So we say anomalies and issues. So now we're going to get the feedback on that, you know, you're going to get your listeners coming in and saying, okay, boomer.
All right, yeah, I'm a boomer, whatever. I'm past that. But Wile E. Coyote, who's older than I am, unfortunately, is going to ignore the feedback. And he's going to use AI to address these anomalies. And it did a great job. Addressed the anomalies, nailed it, put out the code base. We're going to deploy it live. Users wake up the next morning, turn on their computer, they go to log in, and uh-oh, the screen looks different. They didn't expect anything to look like this. And they don't know where to find the little things in the UI that they saw the day before. Now what? Is that a new set of, God help us, anomalies being reported? Or is it a time of transition, we're moving ahead with this? The reason I bring this up is you literally can generate the anomaly of the moment using AI all day long.
00:18:04 Mark Smith
Yeah, that's so relevant, so relevant. Going back to your metaphor of tools, you said it's just another tool. Do you think it's that simple? Let's say it's just another C# to an organization, or a new BI tool, or a new SaaS tool. Do you see it just that simply, as a single tool? Or, you did say it was a super tool, and I'm just wondering how much that metaphor applies with AI and the tool shed scenario. Because I have used the term "a tool" myself as a way to explain it to people. But now I've leaned more into: it's electricity. It drives all the tools. It makes everything work. And it can be highly dangerous on one side, and that's why we have regulations, so we don't put fingers in sockets and kill ourselves with electricity. But it really powers everything. What are your thoughts?
00:19:10 Martin Miller
Wow. Did you know Martin is an electrical engineer by training, from the beginning? This is shocking to me that you would bring up electricity. Absolutely shocking. But to Wile E. Coyote, it's a lightning bolt of thought. Remember that. So it's not a singular tool per se, but it's a superset of tool potentials. You can have agentic solutions that talk to each other with protocols that are well-defined. You want things well-defined; you don't want the machines to run amok on you. And you can build all kinds of fun stuff. You can break apart some of the SaaS model stuff that you currently pay for. If you're a large organization and you're already paying a lot of digits after the first number in monthly recurring payments to another company, you can consider: what's the opportunity for me to build my own version of that? Does it make sense for my business as an opportunity to save money, make money, or be more efficient? And the answers to those questions aren't an AI question all the time. They're pretty bottom-line, or, hey, you know, the company's had so many outages we can barely get usage during business days. So there are service quality things that come into play here. And then, I don't know if you're familiar with the book The Phoenix Project. Okay, I'm dating myself, but that's okay. It's the beginning of DevOps, literally, a narrative of the beginning of DevOps. I was doing DevOps before it was called DevOps, but let's just leave it at that. And in the book, there's the notion of the master IT person. And IT is a loose term here: anything from a piece of written code to the person that plugs in a network cable, or the person with the printout, could be called an IT person. And that person is named Brent. And when there's a breakage, they call for Brent.
But if there's no Brent to know what to do when your AI isn't working or isn't working as expected, what do you do? Who are you going to call? So subject matter expertise is still relevant, even for Wile E. Coyote.
00:21:25 Mark Smith
Yeah, you're so right. I mean, yesterday, Anthropic had a major outage, and I was partway through doing something. No, don't kid me. Please say it isn't so. I was partway through something, and all of a sudden I was like, you know, what the hell do I do here? Because I didn't have the tool to do some assessment of what we'd been building. It was insightful, the actual feeling that it created. So you're right about the expertise and that impact.
00:22:03 Martin Miller
Actually, you touched on an interesting area that I have significant expertise in, which is SRE, site reliability engineering. And depending on the time of day, what you were doing, what happened, what caused the incident, what triggered it, and so on down the line, it makes you want to question who is running their SRE program. It's not me, that's for sure.
00:22:25 Mark Smith
Yeah.
00:22:26 Martin Miller
Or was it really anything to do with their SRE program, or was it something to do with, maybe, the power company at the data center, which should have triple-redundant power, by the way?
00:22:35 Mark Smith
Yeah.
00:22:36 Martin Miller
Or was it something about network transport? And was it related to something like an act of war? We won't go down that path very deep, but just letting you know that networks do break under acts of war.
00:22:47 Mark Smith
So that was the response on social media, because it affected everyone globally for the downtime. And everyone's like, hey, don't they run on AWS? And AWS had a data center hit in the UAE in the last 48 hours. I think that was a big stretch the internet was making, but it had become a meme within 20, 30 minutes of the outage starting.
00:23:17 Martin Miller
Well, you can use AI to create all kinds of fictional narratives you would like. And you can use AI to help spread it.
00:23:27 Mark Smith
Yeah, totally. And that's the thing, is that one of the things that I've noticed in the last couple of weeks, and I've noticed it exponentially grow, is that obvious use of AI to either suppress information or accelerate a narrative that is unsubstantiated. You can't substantiate that it's true or false.
00:23:51 Martin Miller
Well, not picking on any particular country or politics, but we can talk about the scenario and think: oh, here we are, campaign 2026, and here is, you know, you're getting the robocall, it's Grandma Susan. Now, I'm not going to do a voice impression here, but Grandma Susan says, I care about the future. And, you know, there's no real Grandma Susan, but there are actually 10,000 copies of Grandma Susan making these calls simultaneously. And the narration behind it can sound very realistic. The dialogue can shift a little bit. It can actually react. So you can do a lot of micro-targeted precision hits using AI. And let's just say the word nefarious is one of my favorites to use. And detecting it is fun. I love detecting it. It takes a while to get an eye or an ear or a vision for how that could happen. But my mind is a terrible thing. I'll just leave it at that.
00:24:53 Mark Smith
So that's not something that would happen in my country. Like, nobody ever uses robocalls. So is that something common in the US?
00:25:01 Martin Miller
It doesn't have to be robocall. It could be robo SMS. It could be robo WhatsApp messages.
00:25:07 Mark Smith
Yeah. Wow. Like I said, it obviously hasn't hit our shores as commonly yet.
00:25:14 Martin Miller
Give it two minutes.
00:25:16 Mark Smith
Yeah, very interesting. Although I tell you what, the blocking feature now on the iPhone is just amazing at filtering out calls that are not in your contact list, as in them even getting through. You said something then that was interesting, which is about the ability to detect. I read a book some years ago called AI 2041, or 2042, I think it was, which was written by the former CEO of Google China. In it, he basically gives ten visions of the future that are only 20 years out, based on what his observations were in 2021, so partway through COVID. And one of the things he talked about was deepfakes and the impact they would have, in that he almost saw a scenario where they would get so good that there was no way a human could ever detect the fakeness, and that, like virus-scanning software in the old days, it would be a cat-and-mouse game of detecting whether a real live video feed was truly the person it visually looked and sounded like, and that we would need software that was cat-and-mousing the whole thing in real time.
00:26:38 Martin Miller
So my first question to you was going to be: what was the year of publication? So 2021 is close enough to have recognized, I think, that the large language model scenario was coming strongly along. You can even back up a couple of years on that. What was not completely understood is how easily the access gate for prompting would open within about a year and a half to two years of that book's timing. That was an eye-opener. It was understood that people with the know-how could stand up their own LLM and play games with it. That was completely known. But the fact that we were going to make the world open to OpenAI's ChatGPT, basically give it away, and basically allow nefarious activity for free. And I say nefarious because there were people, I was brought into a group, a room of investors eating, you know, multi-hundred-dollar dinners, talking about, is this going to take everybody's job away? Is this going to replace everyone? And it was funny to hear how they were salivating over the concept of how much saving they'd have. And I stood back and I listened to it and I'm going, this is what you're dreaming about? I'm looking at how I'm going to do good with this. How am I going to amplify work? How am I going to be more efficient with work? You're worried about bottom-line numbers, like slice, slice, slice, like a PE firm. That's okay too. There's a place for that.
00:28:07 Mark Smith
Yeah. But wrong lens. As we wrap up, what are you playing with when it comes to AI? How are you testing the boundaries, where are you innovating? What are the side projects that you've got on the go that you can talk about?
00:28:29 Martin Miller
I hinted at a couple of ideas here. So, you know, a lot of things we pay for as little utilities, or don't pay for and use for free as SaaS, we don't think about simple things like that. We may be considering rolling those into our own command and control, so we're our own big brother, so to speak, controlling our own data, which is actually a powerful place to be. I don't need some company in your land holding my data for my project management, or in their land holding my data for my email. It doesn't matter where the land is, conceptually, or paying them for it, when I can actually prompt for the scale of the project I have and create my own virtual SaaS, effectively. And I don't want to say minutes, but it feels like minutes, literally minutes, to create my mini SaaS solution and basically bypass your bill.
00:29:23 Mark Smith
Yeah. The opportunities to do that are unbelievably massive. And I'll give you an example of one that I just came across the other day. I was going away on a holiday with my wife to celebrate our 15-year wedding anniversary. So the kids were left with grandma and we were off on our way. We're at the airport, and pre-COVID, I used to be a habitual check-inner. What I mean by that is there was an app called Foursquare where you could check in your location. It would snap, using GPS, whether you were in a cafe, a restaurant, a major landmark, wherever. And you could put a photo with it, that type of stuff. And of course, for the company behind it, it was a massive data acquisition tool, right? And it was really cool from about 2015, maybe, to COVID. Then it just felt immoral when COVID happened. Who wants to check in anywhere? You were stuck, you were homebound, whatever. And I just never touched it again. So I'm sitting at the airport with a chatbot, you know, I don't know which one it was. And I was just like, hey, do you remember Foursquare? Yeah, I remember Foursquare. What did it do? Blah, blah, blah. I'd like to export all my data for that period of time and put it on a chronological timeline. And I would actually like to rebuild that app, but just for me; I don't want to share it with anybody. The way my mind works, I'm a very visually triggered person when it comes to restoring memories. If I take a photo at a location and I see it three years later, boom, the surrounding memory, feeling, et cetera, comes back to me. I want that just for me. I don't want to give it to anybody. I don't want anyone to have that data. And it was crazy. In no time at all, I was able to write a spec and a brief. I haven't done it yet, but I could easily build that just for me.
00:31:20 Martin Miller
You can. And that's the novelty of today. And you can see the time is coming where, I think, building solutions isn't about the body shop, the number of people sitting in some room coding away, like, what is it, a million monkeys typing and coming up with Shakespeare. They'll never come up with Shakespeare, but we don't need the million monkeys, so to speak. We could get away with one good thinker. You might need a couple of good contributors that understand their domain. Learning and knowledge are powerful. Let's just keep it really simple. I did a deal with a university talking about AI and writing, and the exercise was as follows. It follows something that was done at MIT. The writing exercise was for one group in the study to take a piece of writing and use pencil and paper. The next test group was one that could use web search to write the paper. And the third group had OpenAI's ChatGPT or, it's not Bard, Gemini, excuse me, I get hung up on Bard still, to write their paper. So the test was: turn in your paper at the end of the time, and three weeks later, let's query them on their paper. The people that used AI couldn't remember anything about the paper, nothing, because they actually didn't write it. So what they gained and learned was degenerative, not additive. And it goes in that reverse order: the people researching using web search retained a little bit more, a lot more actually, and the people that hand-wrote it, they could go to the library and use books, that was okay, had the highest retention. So, fast-forward: if you're going to depend on AI, you may not have a strong knowledge force of thinkers if they don't know how to do the core lifting and baling and all the hard work.
00:33:28 Mark Smith
And it's crazy, because it's at a time when critical thinking particularly is needed more than ever, I feel, in society. We've never needed the ability to think more. And, you know, I used to work for IBM for a bit, and I love their motto, "Think," which is where the ThinkPad laptop name came from. I forget which CEO it came from at the time. But it challenged all humans to really get good at thinking, and developing your thinking process, I think, is one of those personal skills that everybody should proactively be engaged in as we move further and further into this intelligence age.
00:34:10 Martin Miller
100%.
00:34:11 Mark Smith
Martin, it's been an absolute awesome time speaking to you and learning from you. Thank you so much for coming on the show.
00:34:18 Martin Miller
Thank you for having me.
00:34:20 Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thank you for tuning in. I'll see you next time where we'll continue to uncover AI's true potential one conversation at a time.




