How RAG Is Powering the Future of AI Agents
Farzad Sunavala

Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

🎙️ FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/717

What if your AI agent could not only retrieve facts but reason with them—just like a human? In this episode, Farzad Sunavala, Principal Product Manager at Microsoft, takes us inside the world of Retrieval-Augmented Generation (RAG), the architecture powering the next wave of intelligent agents. From solving hallucinations to building memory into AI systems, Farzad shares practical insights for professionals looking to build scalable, high-quality AI solutions that actually work in the real world.
 
🔑KEY TAKEAWAYS
- RAG is foundational for AI agents: Retrieval-Augmented Generation solves key limitations of LLMs by grounding them in real-time, private data.
- Metadata is your best friend: Rich metadata and filtering techniques dramatically improve retrieval quality and reduce noise in enterprise AI systems.
- Memory is the next frontier: Embedding memory into agents—via tools like Semantic Kernel—enables learning, unlearning, and contextual recall.
- AI engineering is evolving fast: Developers must move beyond conventional software practices and embrace ML Ops, vector databases, and open-source frameworks.
- Start small, iterate smart: Building ground-truth datasets and synthetic Q&A pairs is a high-ROI strategy for evaluating and improving AI agent performance. 

🧰 RESOURCES MENTIONED:
👉 Azure AI - https://ai.azure.com/
👉 Azure AI Search - https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search or https://azure.microsoft.com/en-us/products/ai-services/ai-search/
👉 Microsoft Fabric - https://www.microsoft.com/en-us/microsoft-fabric
👉 Semantic Kernel (Open Source) - https://github.com/microsoft/semantic-kernel
👉 LangChain - https://www.langchain.com/
👉 LlamaIndex - https://www.llamaindex.ai/
👉 CrewAI - https://www.crewai.com/

This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

Accelerate your Microsoft career with the 90 Day Mentoring Challenge 

We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.

Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, so get started today!

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.

Thanks for listening 🚀 - Mark Smith

03:42 - From Oil Fields to AI Agents: Farzad’s Unlikely Journey

04:52 - Why RAG Is the Backbone of Enterprise AI Agents

09:23 - Fighting Human Error with Metadata and Smart Retrieval

20:37 - Building Memory into AI Agents: The Human Brain Blueprint

28:06 - The Future of AI Agents: Scaling Knowledge and Intelligence

00:00:01 Mark Smith
Welcome to the Copilot Show, where I interview Microsoft staff innovating with AI. I hope you will find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show.

00:00:17 Mark Smith
Welcome back to the Microsoft Innovation Podcast. Today, we're diving deep into the future of AI search with someone who's not just building it, but redefining how we access knowledge at scale. Joining us from the US, please welcome Farzad Sunavala, a Principal Product Manager at Microsoft. If you've ever wondered how AI systems actually find the right answers, or how they unlearn when they get it wrong, our guest is the one to ask. He's leading the charge on retrieval-augmented generation, or RAG, and helping teams build intelligent agents that don't just retrieve facts but reason with them. Full links are going to be available in the show notes for this episode. Welcome, Farzad. How's it going?

00:01:00 Farzad Sunavala
It's going great. Thanks for having me.

00:01:03 Mark Smith
Good to have you on the show. Really interested, actually, before we kick off: food, family and fun. What do they mean to you?

00:01:11 Farzad Sunavala
Food, family, fun. OK. Family, definitely very important, from brother, sister, Mom and Dad to pets. Food, I have to say food is my fun. Big foodie, love traveling, love every single type of cuisine, very ethnic foods. I'm a huge foodie.

00:01:29 Mark Smith
Nice, nice. Tell me how you got to what you're doing now. What was the highlight tour of getting to this role that you're currently doing, what you're doing in the AI space?

00:01:43 Farzad Sunavala
Yeah, good question. So the TL;DR background is I have an engineering background, a petroleum engineering background actually, and did a few courses in computer science, data science and machine learning back in university. Right about the time I graduated, you know, the field of data science was just absolutely zooming. I took a few certifications, actually one of them by Microsoft, to get a data science certification, learning fundamentals of Python and R, Power BI and all that good stuff, Azure Machine Learning. Then I pretty much worked for a very large international oil and gas company as, actually, a petroleum engineer, but specialized as a product owner to work with a bunch of software engineers, data scientists and machine learning engineers on the digital transformation efforts going on at this oil and gas company. And then fast forward, you know, a few months, I figured, hey, this is great, but I'm a hands-on person, I'm an engineer by background, so I transitioned to actually being a developer. And so I was actually building business applications and machine learning models, kind of a full-stack developer doing jack of all trades inside of this company. One of the projects I was on used Azure Cognitive Search, now known as Azure AI Search, probably one of Microsoft Azure's most successful products from a growth standpoint. Fast forward to my journey at Microsoft: I was a customer leading the development with this product at a global scale, and then they had an opportunity for a product role, so I applied, went through the interview loops, and got the role. Now I'm leading RAG inside of the Azure AI platform entirely, so that's my journey.

00:03:29 Mark Smith
I love it. I love it. How do you explain RAG to people?

00:03:32 Farzad Sunavala
Yeah, I think the easiest way to explain RAG is through something everyone knows nowadays, ChatGPT. If you don't, you live under a rock, essentially. One of the limitations of ChatGPT, and let's talk about old-school ChatGPT, maybe 2023, is that when you interact with the underlying LLM, or large language model, these pre-trained models don't have, one, access to your private knowledge or your private data, and two, they don't have access to real-time information. And so the concept of RAG is essentially one of the best solutions to mitigate those two pain points in this generative AI world.
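[A minimal sketch of the RAG pattern Farzad describes: retrieve relevant chunks from a private knowledge store, then ground the model's answer in them. The `search_chunks` helper, the model name, and the prompt wording are placeholder assumptions, not a specific product API.]

```python
# Minimal RAG sketch: retrieve private / up-to-date context, then generate a grounded answer.
# `search_chunks` is a stand-in for any retriever (vector database, Azure AI Search, etc.).
from openai import OpenAI  # assumes the openai>=1.x client; swap in your own LLM client

client = OpenAI()

def search_chunks(query: str, k: int = 5) -> list[str]:
    """Placeholder retriever: return the top-k text chunks for the query."""
    raise NotImplementedError("plug in your vector database or search service here")

def answer_with_rag(question: str) -> str:
    context = "\n\n".join(search_chunks(question))
    messages = [
        {"role": "system",
         "content": "Answer only from the provided context. If the answer is not there, say so."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```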

00:04:17 Mark Smith
And so what does that look like in a Microsoft context? As in, how do people, like a day-to-day user, apply RAG, as opposed to, let's say, building an AI solution that's going to be used org-wide? How you would do RAG in that scenario might be quite different than a personal one, you know, using Copilot for example and using RAG.

00:04:39 Farzad Sunavala 
Right. So in general, retrieval-augmented generation is probably one of the highest investments at Microsoft. In the world of, you know, people say 2025 is the year of AI agents, you can't really build an AI agent without grounding your knowledge, and one of the fundamental ways to ground your knowledge is through retrieval-augmented generation. And when it comes to an enterprise-level scale, you have to think of things like, you know, security, scale, compliance mechanisms. And so when it comes to, you know, building an enterprise-grade retrieval system, you really should look for out-of-the-box features that, you know, at the simplest level have all these sorts of government compliance and security certifications. You have to look at things that have document- and row-level security built in, to make sure that when I build my own AI agent and deploy it to people at my company, I'll see the knowledge that I have access to, but I won't see anything that, you know, is confidential for Mark only, or maybe for your geography or your business unit. So all these things are very, very important when considering building a RAG solution, especially in the enterprise world.

00:05:47 Mark Smith
One of the conversations I have with people is about the errors that AI sometimes produces, and this can go under the umbrella of hallucination or just giving the wrong answer. And I say, listen, if an average-size organization, let's say, has 10 million digital artifacts and you are giving that data to your AI for use, there's going to be a lot of human error in your data set that has built up over time. So for example, if I'm a project manager and on Friday I send out my project status report asking for feedback from a twenty-person project team, I've now created 20 copies of, let's say, that spreadsheet. Each of them is going to have inputs et cetera from the various stakeholders, and over the course of a year you might create hundreds of copies of that Excel spreadsheet if you're doing it that way, and the error that happened in month one is being duplicated multiple times. Then we give that to AI and say, hey, train on our data, and we get the wrong answer back, because we've given AI potentially a whole bunch of human-introduced error that has accumulated over time. I'm interested in how you tell it that data is not the correct data, it should be this bit of data over here in context, not all that human-introduced error in the data.

00:07:01 Farzad Sunavala
Yeah, that's a very good question. I'd kind of answer it in two ways. There's the, you know, at the end of the day, QA/QC is always going to be a thing, so invest in quality. When you build a RAG application, one of the first things you do is usually get your data inside a retrieval system, or some people call it a vector database, you know, a buzzword that's been going around. And as you do that, a lot of times data comes in structured, semi-structured and unstructured formats, and particularly in the enterprise, all these spreadsheets can be quite unstructured, a variety of different structures, mixed modalities. And so doing your data modeling activities, and this is where tools and solutions and platforms like Microsoft Fabric come into play, where, you know, it's an enterprise-data-ready solution, super easy to use, integrates very, very well with Power BI and all the other Azure data tools that we have available, investing in a common data model, ensures that once you start building these data pipelines, maybe you get really fancy and use AI models like GPT-4 or 4.1 to do metadata extraction or AI enrichment, OCR, whatever, the list goes on. Investing in a high-quality understanding of your data and modeling it, that schema that you design when you're building an AI agent system, is so important. In an enterprise world, metadata is your best friend at the end of the day. Customers, you know, using Azure AI Search come complain to me, well, hey, my quality is not that good, and I say, OK, let's take a look at your underlying data, and some of these companies have literally the richest ontologies and metadata and schemas that they've invested, you know, decades into building, and they'll completely ignore that. And so then the second part of the answer is at retrieval time, where, even in the event you didn't invest in the underlying data ingestion phase, there are still techniques you can use at retrieval time to kind of make up for that human error and duplication.
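[A small sketch of the ingestion step Farzad describes: splitting documents into chunks and carrying rich metadata, such as country, business unit and document type, alongside each chunk so retrieval can filter on it later. The field names and the fixed-size chunking are illustrative assumptions, not a prescribed schema.]

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    # Metadata travels with every chunk so retrieval can filter on it later.
    source: str
    country: str
    business_unit: str
    doc_type: str

def chunk_document(text: str, source: str, country: str, business_unit: str,
                   doc_type: str, size: int = 1000, overlap: int = 200) -> list[Chunk]:
    """Fixed-size chunking with overlap; real pipelines often split on document structure instead."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(Chunk(piece, source, country, business_unit, doc_type))
    return chunks
```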

00:09:10 Farzad Sunavala
And this is even just pure, you know, software development discipline, without going too much into the AI domain, just general information retrieval architecture. As I said, a lot of customers have such rich metadata, and in the RAG world, you know, there's the chunking, the vectorization, then the top-k chunks that you send to the language model, which is essentially how every copilot system in the world works today, the 101 architecture at least. And so when you have such rich metadata, a robust retrieval system should support metadata filtering, and customers can leverage that filter to reduce noise. Like, hey, show me all the HR policy documents inside of New Zealand. If I'm a global enterprise and I type in that query and I get HR documents from the US, Hong Kong, Japan, it's like, oh my God, I'm adding all this noise to my language model, right? Use a filter that says country, you know, equals New Zealand. And so things like that are just absolutely so, so powerful, because, you know, everyone's going to have human error. We're in a world where humans are in the loop generating reports, and maybe long term in the future that might change, where little AI agents are doing all this report generation and pipeline uploading and all that fancy stuff. But in the short term, I definitely think there are mechanisms and best practices in place to make up for that, just using the fundamentals of search technology, and then AI just makes everything a lot simpler.
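[A hedged example of the kind of metadata filter Farzad mentions, using the Azure AI Search Python SDK. The endpoint, key, index name and field names ("country", "title") are placeholders that assume your own index schema.]

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint, key, and index; field names like "country" assume your own schema.
search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="hr-policies",
    credential=AzureKeyCredential("<your-query-key>"),
)

# Combine the query with an OData filter so only New Zealand documents reach the LLM.
results = search_client.search(
    search_text="parental leave policy",
    filter="country eq 'New Zealand'",
    top=5,
)
for doc in results:
    print(doc["title"])
```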

00:10:38 Mark Smith
Nice. Before we jump into why RAG's important as we go into an agentic, you know, focused world, you mentioned vector databases, and just for the listeners, can you explain how you talk about vector databases and how they're an important part of the process?

00:10:55 Farzad Sunavala
Right. So I like splitting it into two words, so one, vector. If you go back to your math and physics days, you may remember what a vector is, but if you don't, in the simplest terms I like explaining it, a vector is essentially, you know, just a universal representation of data. And this is usually represented as an array, essentially a group of numbers. And this vector representation is, at the end of the day, very powerful in an AI-driven world, and it's very powerful because of this natural language processing technique called embedding models. These embedding models are essentially a term that's kind of derived from training large language models, and they allow you to send in some sort of modality, for the most part text, into this, you know, transformer-based model, and the output you get back at inference time is just, you know, a very, very large vector representation, a very high-dimensional vector with a bunch of numbers. And that vector essentially is the embedding model's way of saying, if I say, you know, Farzad and Mark are on a podcast, that's going to be translated in a very high-dimensional vector space to a very long series of numbers.

00:12:17 Farzad Sunavala
And so what makes that really powerful, now I'll go to the database portion. I won't go too much into that, because at the end of the day, if you split database into two syllables, you have data, which is essentially everything that humans generate or read in some way, and then a base, someplace you could hold or base all that data that you have. And then the vector part is, OK, I have all these series of numbers, I need to store them in a database system, and so there are all different types of databases. Now even, you know, traditional quote-unquote databases like SQL and Postgres support vector as an actual data type. So I think in an AI-driven world, just like there are strings and ints and floats, there are also now vector data types, and that serves as kind of the robust retrieval system when it comes to building generative AI applications.
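[A toy illustration of what vector search boils down to: embed the query and the documents into high-dimensional vectors, then rank by cosine similarity. The `embed` call below uses an OpenAI embedding model as an assumed stand-in; any embedding model and any vector database work the same way in principle.]

```python
import numpy as np
from openai import OpenAI  # assumes the openai>=1.x client; any embedding model works similarly

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Turn text into high-dimensional vectors with an embedding model."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

docs = ["Farzad and Mark are on a podcast",
        "Vector databases store embeddings",
        "The capital of France is Paris"]
doc_vectors = embed(docs)
query_vector = embed(["who is on the podcast?"])[0]

# Cosine similarity: a real vector database does this (approximately) at massive scale.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector))
print(docs[int(np.argmax(scores))])
```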

00:13:15 Mark Smith
Nice. Nice. So when we think about agent systems and RAG, why is it important that people do the work to get their data, and even sometimes acquire additional data sets, to make the best possible outcome for their agents?

00:13:35 Farzad Sunavala
Right. So one of the biggest lessons learned with customers that I speak to comes up when I ask, OK, walk me through your RAG architecture, or walk me through your generative AI strategy. One thing that I often notice, sometimes even from the most mature of companies, is they sometimes forget this isn't just building a hello-world, you know, web application. You know, this is AI, it's a very non-deterministic process. You have to evaluate, you have to iterate, there's this notion of MLOps. You know, if you

00:13:59 Mark Smith
Yes.

00:14:08 Farzad Sunavala
don't invest in that, you're setting yourself up for failure. And so when you start designing these AI agent applications, what I tell people is, you know, talking about acquiring data sets or even generating data sets, which AI and large language models make a hell of a lot easier to do: OK, you want to evaluate a RAG solution? One of the first things you need are what's called ground-truth Q&A pairs. So, you know, "What is the capital of France?" I'm going to need a ground-truth pair that says, you know, the capital of France is Paris, and so on. And you do that at scale. You can start small, you can start at 10, you know, Q&A pairs and then, you know, scale up to 100 to 1,000, depending upon how much ROI or effort you want to put into your MLOps pipeline. Acquiring, you know, ground-truth data sets, or investing in building them, is probably going to be one of the biggest high-ROI activities that you can do when investing in building an AI agent. And whether you acquire them, or whether you do what we do internally at Microsoft a lot, which is leverage AI for building synthetic datasets, we actually have out-of-the-box tools that do that for you. So that would be my advice when it comes to acquiring and generating data sets: use AI for it, not just, you know, for augmenting or enriching your data, but the most bang for your buck will come from using it with your MLOps pipeline and evaluation toolkits.
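[A bare-bones sketch of the ground-truth evaluation loop Farzad recommends: a handful of Q&A pairs, run each question through your agent, and score the answers. The `answer_with_rag` agent, the sample pairs, and the exact-substring grading rule are illustrative assumptions; real pipelines typically use an LLM judge or retrieval metrics.]

```python
# Ground-truth Q&A pairs: start with 10, grow toward hundreds as the agent matures.
ground_truth = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "Which search service powers our HR bot?", "answer": "Azure AI Search"},
]

def evaluate(agent, pairs) -> float:
    """Naive exact-substring grading; swap in an LLM judge or similarity metric in practice."""
    correct = 0
    for pair in pairs:
        prediction = agent(pair["question"])
        if pair["answer"].lower() in prediction.lower():
            correct += 1
    return correct / len(pairs)

# accuracy = evaluate(answer_with_rag, ground_truth)  # answer_with_rag from the earlier sketch
```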

00:15:45 Mark Smith
Yeah, yeah. Really, really good. And of course, you're not talking about giving it access to all your data for every agent. You're being very specific about whatever the function of the agent is, right?

00:15:55 Farzad Sunavala
Yeah, yeah. You don't want to give your, you know, agent access to all the data that you may have generated, and that's kind of just, you know, traditional machine learning, deep learning practice, where you have your training set and your test set and your validation set to just make sure that, OK, you're not doing the concept of overfitting, as a data scientist would call it. So yeah.

00:16:16 Mark Smith
Tell me about error handling. How do you think about it? And also that concept of unlearning something that is no longer correct, perhaps.

00:16:25 Farzad Sunavala
Yeah, that's a good question. There are several different angles I could take, but there's, you know, the general error handling that comes up when building RAG applications. Again, a lot of customers are using managed models, and with managed model endpoints like Azure OpenAI, you know, you have to plan for capacity. You know, are you anticipating very high spike volume? The way I always frame general AI applications, when it comes to optimizing for certain things, is a framework called CSQ: cost, speed, quality. You want to make sure that you have error handling mechanisms in place for each of those different pillars. When it comes to scale, which I hear about from a lot of customers at the rate they're using AI, and even sometimes when you use ChatGPT or Claude or whatever, sometimes I'm using it so much they're like, hey, you've hit your rate limit, you know, you've maxed out your plan. You could go through the whole value chain of, you know, how many GPUs are in the data centre that you're consuming and things like that. So having robust error handling and rate-limiting strategies, I think, is a good enterprise pattern. There are a lot of techniques for that, such as round-robin, especially if you're quite agnostic with, or lucky with, the variety of different data centres you may have in your given region. And of course, for Azure OpenAI, one of our most popular features is provisioned throughput.
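[One hedged illustration of the error-handling piece: retrying a rate-limited call with exponential backoff, plus a trivial round-robin over multiple endpoints. The endpoint URLs are placeholders, and the exact rate-limit exception type depends on whichever client library you use.]

```python
import itertools
import random
import time

# Placeholder deployments; round-robin spreads load across whatever endpoints you have.
endpoints = itertools.cycle(["https://eastus.example/openai", "https://westus.example/openai"])

def call_with_backoff(make_request, max_retries: int = 5):
    """Retry on rate limiting with exponential backoff and jitter."""
    for attempt in range(max_retries):
        endpoint = next(endpoints)
        try:
            return make_request(endpoint)
        except RuntimeError as exc:  # substitute your SDK's rate-limit / 429 exception here
            wait = (2 ** attempt) + random.random()
            print(f"{endpoint} throttled ({exc}); retrying in {wait:.1f}s")
            time.sleep(wait)
    raise RuntimeError("exhausted retries; consider provisioned throughput for steady load")
```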

00:17:49 Farzad Sunavala
You know, if you have a large amount of queries that you're going to send to an Azure OpenAI service, purchasing that is a great option, and a more economical option versus pay-as-you-go. And then also the question on unlearning. This I think falls more in that Q bucket, for quality. If you're interacting with an AI agent, one of the core primitives that I think is extremely important, and now kind of emerging as a very hot topic, is the concept of memory. Memory, when it comes to AI agents, and even RAG in general, which can be used as a memory pattern for AI agents, is quite important, because at the end of the day, if AI agents really want to excel, you have to give them something like the human brain. Like how I remembered to prepare for this podcast, or, you know, Outlook reminded me that I had this on my calendar. So how can you give your AI agents memory, the ability to learn new things, new salient facts from previous conversation history, and also recall them when needed on an as-needed basis? And how do you do that at scale throughout the lifecycle of an agent? So building memory blocks into your generative AI architecture is extremely important, especially if you want to invoke this concept of learning and unlearning certain things, to really have your AI agent have almost like a human brain, or some human-like feeling.

00:19:20 Mark Smith
So when you think of memory, what are the levers that you have to pull, push, that type of thing? Because if I think of memory, and let's just take something simple like Outlook, and I get an email from Tom Jones. The way I see AI running at the moment, it looks at the context of that email and comes up with potentially a draft response, but I have a relationship with Tom Jones. It might span years, right? And the way it's currently working, it seems to focus on the thing right now. So is one of the levers going to be time, like I want a nine-month window here on the memory for the agent, as an example? And then of course you've got this: if we look at neural pathways and how synaptic connections are made, it's by repeating something that it becomes a stronger, long-term memory. And, you know, we could also overwrite that, like when it comes to habit forming and you create a new habit to replace an old habit. What do you think of when you think of this concept of memory and the levers that you can pull, push, et cetera with it?

00:20:24 Farzad Sunavala
Yeah, that's a really, really great question. At the end of the day, in the world of building AI agents, the reason why I think memory is so important, and also, as an AI builder, probably one of the coolest parts, is that this is really taking your AI agent to the next level when it comes to value creation. One of the levers, when it comes to, you know, the hello world of memory, is honestly kind of just how ChatGPT works today, where it automatically has these, you know, fact-extraction capabilities. You interact with it and say, hey, my name is Farzad. Hey, I like dark mode. Hey, respond to me in concise bullet points. In the back end, the underlying implementation is really not that complex, where all it's doing is extracting these from the conversation history, which could be stored in something like AI Search or Cosmos DB, whatever, as some sort of, you know, key-value pairs, and then the language model probably has some system prompt in the back end that is saying, hey, you're an expert memory system that is responsible for extracting key entities, key facts, maybe certain decisions that are made. The world is your oyster with whatever memory means in your use case. And so, you know, in your Tom Jones example, this gets kind of interesting, because this goes into the concept of semantic or entity memory, where the buzzword knowledge graphs kind of enters the room. OK, maybe if I interact with this agent system, what if in the back end I built something that automatically had this, you know, quote-unquote offline cooking mechanism, where as I was interacting with my AI agent system, or multiple people were interacting with my AI agent system, it suddenly started building these entities and nodes and relationships and observations: hey, Mark is good friends with Tom Jones. Hey, his birthday just came up, maybe you should wish him happy birthday, you know? And so that's a very, very powerful concept when you literally are building neural networks.

00:22:25 Mark Smith
Exactly.

00:22:33 Farzad Sunavala
Or your AI agent. And the tooling there is honestly available today in some of the open source frameworks. One of my favorite ones, which I was just playing around with the other day, I'm sure a lot of people have heard of Semantic Kernel, and they actually just a month ago launched a whiteboard memory tool, which I highly recommend if you haven't looked into it. There was a bunch of Microsoft researchers who really just spent, you know, their 9 to 5 every day, probably more than that, just looking into how memory can work for AI agents, and, you know, where are the industry and society overcomplicating it, and what are the most high-value memory tasks that you can actually have. One of the things that came out of it was this whiteboard memory tool, where essentially, in the Semantic Kernel world, you just say, hey, add whiteboard tool, you know, whatever, something super simple. But if you look at the underlying implementation, it's essentially just taking, you know, a whiteboard, you can't see me, but I'm holding my whiteboard here, and having a scratchpad where, you know, the agent is automatically taking certain decisions, certain actions, and just mapping those as key-value pairs, little notes. Having that scratch notepad or whiteboard for the AI agent at any given time to do all the CRUD operations was so powerful.
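[A very small sketch of the scratchpad idea, not the Semantic Kernel whiteboard API itself: a key-value memory the agent can create, read, update and delete, including "unlearning" a fact that is no longer true. The class and method names are invented for illustration.]

```python
class ScratchpadMemory:
    """Toy whiteboard-style memory: salient facts as key-value pairs with full CRUD."""

    def __init__(self):
        self.facts: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value          # create / update: learning, or overwriting a habit

    def recall(self, key: str) -> str | None:
        return self.facts.get(key)       # read on an as-needed basis

    def forget(self, key: str) -> None:
        self.facts.pop(key, None)        # delete, i.e. unlearn something no longer correct

    def as_prompt_context(self) -> str:
        """Inject remembered facts back into the system prompt on the next turn."""
        return "\n".join(f"- {k}: {v}" for k, v in self.facts.items())

memory = ScratchpadMemory()
memory.remember("user_name", "Farzad")
memory.remember("preferred_style", "concise bullet points")
memory.forget("preferred_style")  # the preference changed, so rub it off the whiteboard
```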

00:23:53 Mark Smith
Powerful. I like it. So many metaphors jump to mind for how I could see that working effectively, like, you know, you rub something out because it's not applicable anymore, and so that unlearning piece. Tell me about AI engineering versus conventional software development. What changes are you seeing in the market around how software is generated these days, really the increasing role of the AI engineer?

00:24:20 Farzad Sunavala
Yeah, great question. So honestly, I see all these, you know, roadmaps and things to level up to be an AI engineer, what existing or conventional software engineers can do to upskill and, you know, maybe market themselves as an AI engineer. And at the end of the day, you know, RAG is literally what dominates the majority of AI agent use cases. So especially when it comes to, you know, practical forward-deployed engineering and cloud solution architecture and things like that, at the end of the day, clients and customers want enterprise RAG, they want it to scale, they want it to be high quality and accurate to get the value creation, and in some cases they want it super fast. So you have to know, from a RAG architecture perspective, as a conventional software engineer: one, what is RAG? How do I go from zero to hero? Just understand the fundamentals of the architecture, you know, start with reading the original paper by Meta. Then, you know, now there are probably thousands, honestly I won't be surprised if there are millions, of, you know, arXiv papers, with AI universities and companies investing in just researching best practices for RAG, really cutting-edge techniques. And then, oh my gosh, the open source community, that's honestly my best friend. I learned so much just from reading, not just Microsoft open source like Semantic Kernel, but I'm a huge fan of LangChain, LlamaIndex, CrewAI; these open source contributions are absolutely phenomenal, and the way they react to market demand is absolutely insane. So my best advice here for a conventional software developer: learn about the fundamentals of RAG, build an AI agent application from scratch, then you can start playing with frameworks, some of these advanced things make tooling a little bit easier, and then build something into production. I think personally what got me started, just as a product manager, and again a very hands-on product manager: I have AI assistants doing a bunch of things for me, servicing my Outlook, I have a graph RAG over my Outlook email, I have a personal WhatsApp agent to, you know, wish all my friends and family happy birthday because sometimes I forget, and the list goes on. So just get hands-on and invest.

00:26:38 Mark Smith
We're six months before the end of the calendar year in 2025. Where do you think we'll be in December? And I'm not asking for a Microsoft opinion here, I'm asking for your opinion on what you're seeing shape up. I just feel the first half of the year has been phenomenal, particularly in the area of reasoning.

00:26:48 Farzad Sunavala
Yeah.

00:26:59 Mark Smith
You know, becoming such a bigger part of what we're seeing from all the major LLM providers in the market. But between now and the end of the year, where do you think we'll be in December?

00:27:10 Farzad Sunavala
Yeah. So I would say the way I kind of highlight it is: 2023 was all about learning about RAG just in general. In 2024, we saw some general, you know, AI assistant chatbots go into production. In 2025, people are talking about, you know, multi-agent systems, and I'm seeing a lot of these in production as well. And so now it comes to, OK, can these agents scale? And so what that means is, OK, I'm building an AI agent. I have maybe two knowledge sources, let's call it Bing and maybe, you know, Azure Blob Storage. And now I'm like, OK, I think this is really useful, but in order to take it to the next level

00:27:53 Farzad Sunavala
for another set of stakeholders, I think I need to add Azure SQL. But wait, now I actually have some data in Postgres. Oh wait, there's actually a bunch of data in SharePoint that I need to add, and it's extremely confidential and there are Purview labels. And, you know, that list keeps on growing exponentially. So I think by end of 2025, knowledge retrieval in general is going to be absolutely huge, where the list of existing tools that, you know, a customer or a developer has to invoke today is going to 5x, 10x, maybe even 100x by end of this year, as the ability and the pace of developers building AI applications is just rapidly increasing, and don't even get me started on code generation tools and things like that. So that's one, the concept of knowledge retrieval. And two is, I touched on it earlier in the call, memory. I honestly think memory is going to be a huge investment for a lot of AI engineers, as it really just gives AI agents the next level of having the human brain.

00:28:59 Mark Smith
Yeah, I like it. Farzad, before I let you go, anything else you want to add?

00:29:03 Farzad Sunavala
I think my biggest advice to everyone, seeing it in the world today, just talking with a bunch of customers and students, people who are, you know, experts with PhDs in machine learning and AI, to someone who's just pivoting their career, or sometimes even in middle or high school: I think AI is incredibly powerful. Be responsible, be safe. But the best way to get started is just by trying, so get hands-on.

00:29:29 Mark Smith
Hey, thanks for listening. I'm your host, Mark Smith, otherwise known as the nz365guy. Is there a guest you would like to see on the show from Microsoft? Please message me on LinkedIn and I'll see what I can do. Final question for you: how will you create with Copilot today? Ka kite.


Farzad Sunavala

Farzad Sunavala is a Principal Product Manager at Microsoft, where he leads the product design and execution of Search and AI capabilities in Microsoft Azure. Farzad is passionate about empowering individuals and organizations in creating cutting-edge experiences that revolutionize the way we interact with technology.

Prior to joining Microsoft, Farzad held a variety of positions in engineering and product management at Chevron Corporation. Farzad holds a B.S. in Petroleum Engineering from Louisiana State University, accompanied by minors in Computer Science and Technical Sales. Additionally, he earned an M.Eng. in Engineering Management from Cornell University.