

Why Small Language Models Are the Future of AI
Ilya Venger
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
🎙️ FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/702
What if the future of AI isn’t bigger—but smaller, smarter, and more specialized? In this episode, Microsoft’s Ilya Venger pulls back the curtain on the rise of small language models (SLMs), the evolution of intelligent agents, and why clean, governed data is the real fuel behind AI success. From industry-specific copilots to the power of Microsoft Fabric, this conversation is packed with practical insights for business and tech leaders navigating the AI revolution.
🔑 KEY TAKEAWAYS
SLMs vs. LLMs: Small language models are leaner, more efficient, and better suited for domain-specific tasks—offering lower hallucination rates and higher cost-performance ratios.
Data Quality Matters: AI is only as good as the data it’s trained on. Microsoft Fabric and Purview help unify, clean, and govern enterprise data for more reliable AI outcomes.
Industry Agents Are Coming: Microsoft is enabling partners to build industry-specific agents—like healthcare assistants or retail copilots—using modular skills and tools.
Copilot Studio’s Role: This low-code platform empowers teams to build, test, and deploy AI agents across Teams, Slack, and custom environments.
The Future of AI Interfaces: Expect dynamic, on-the-fly UI generation tailored to user intent—blending visual and conversational interfaces for seamless interaction.
🧰 RESOURCES MENTIONED
👉 Microsoft Fabric – Unified data platform for analytics and governance
https://www.microsoft.com/fabric
👉 Microsoft Purview – Data governance and cataloguing solution
https://www.microsoft.com/purview
👉 Azure Marketplace – Access to industry-specific AI skills like financial document analysis
https://azuremarketplace.microsoft.com
👉 Copilot Studio – Low-code platform for building and deploying AI agents
https://aka.ms/CopilotStudio
OTHER RESOURCES:
Ilya's GitHub: https://github.com/ilyavenger
Ilya's Medium: https://medium.com/@ilya.venger
Ilya's ResearchGate: https://www.researchgate.net/profile/Ilya-Venger-2
Ilya's Startup Snapshot: https://www.startupsnapshot.com/writer/ilya-venger/
This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.
Accelerate your Microsoft career with the 90 Day Mentoring Challenge
We’ve helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.
Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days. Get started today!
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith
00:27 - Welcome to the Co-Pilot Show
01:04 - Meet Ilya from Microsoft Israel
02:33 - Leading Industry AI Solutions
06:14 - Small Language Models Explained
09:58 - Data Management and Fabric
16:12 - Industry-Specific AI Agents
20:09 - Copilot Studio's Role
23:30 - The Future of AI in 2025
30:34 - Closing Thoughts
Mark Smith: Welcome to the Co-Pilot Show, where I interview Microsoft staff innovating with AI. I hope you will find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show. In this episode, we'll be focusing on small language models, Fabric and intelligent agents. Today's guest is from Israel. He works at Microsoft as a data and AI product lead. You can find links to his bio and socials in the show notes for this episode. Welcome to the show, Ilya.
Ilya Venger: Hello, hello, thanks. Thanks for having me.
Mark Smith: Good to have you on. Tell me, I always like to get to know the guests to start with, and let the audience get a bit of background before we get into the tech side of things: food, family and fun. What do they mean to you? They're a bit of everything, right? It's like having food with family is fun. Yes, indeed, indeed, but what's the best food to eat in Israel?
Ilya Venger: Oh well, everybody would like falafel, right? Falafel, we've exported it everywhere, so I'm sure that this is the thing that comes to everyone's mind, and I won't dissuade people.
Mark Smith: Yes, yes, I tell you what the minute you said it. I go straight to a place in New Zealand where I know they have good falafel, and you're so right, it's known everywhere as a great cuisine. So what part of Israel are you based in?
Ilya Venger: I'm based near Haifa, so this is towards the northern part of Israel, sort of in the mountains just above the sea. Great sea views, not at this time of night, but yeah.
Mark Smith: Nice yeah, really nice neighborhood, and does Microsoft have a big footprint in that region?
Ilya Venger: So essentially, Israel is a small country, right? So we've got about 80 kilometers to the largest office, which has about 3,000 people, and then we've got an office just here nearby which is about 300 people, something like that. So it's a regional office here.
Mark Smith: Yeah, yeah, okay, I didn't realize that Microsoft had so many staff based there, but it totally makes sense. It totally makes sense. Tell me about your day job. What's your role, what do you do and what is your focus at the moment?
Ilya Venger: Yeah, so I lead the product team within Business and Industry Solutions, particularly on the AI front.
Ilya Venger: So we are working essentially along several different directions. On one hand, we're dealing with all the data that is needed for agents and for copilots, and particularly how we transform the data according to industry standards and how we offer our partners and customers within different industries, such as financial services, manufacturing, healthcare, retail, etc., the right data tools. And we're going to talk about this a little bit later. And then we've got another contingent of small language models, which are specialized models, where again we're working very closely with our partners who want to train third-party models, so models that partners themselves train or fine-tune with their own proprietary data. And then we've also got some of our own models that are industry-specific for particular tasks. And then, lastly, my team is also doing what's called RAG, so retrieval-augmented generation, that is again geared towards documents that are related to a particular domain.
Mark Smith: Just to jump in on the industry piece to start with: is this what was formerly the industry clouds?
Ilya Venger: It still actually is the industry clouds. So this is Business and Industry Solutions. Yes, so that encapsulates industry clouds as well as the AI ERP business. So all the ERP within Dynamics 365 is sitting under the same organization, and right now, one of the things that we're doing is actually bringing all the goodness of the industries into ERP. That is kind of the next big strategic move.
Mark Smith: Yeah, yeah, perfect. Correct me if I'm wrong, is it five, or is it seven, different industry cloud focuses that have been in play now for some years?
Ilya Venger: So I think there's about seven right now, yeah, and they're sometimes unified and sometimes dispersed. So we've got financial services and retail. Retail includes CPG, so consumer goods, as well as agriculture, which is also from plate to table eventually, if you think about that. And then we've got manufacturing, automotive and healthcare. Sustainability also is a specific segment and specific area.
Mark Smith: Yeah, so I've been involved in Australia in an implementation of the health cloud for a major company there; that was a two-plus-year project. I did that while I was working at IBM, so I'm very familiar with that. And I've actually demoed the finance cloud for the banking sector, and I was involved quite a bit with the sustainability cloud internally with Microsoft, probably about 18 months ago. Yeah, so I have a familiarity with them, particularly on the health side. You know, Nuance was a company that Microsoft purchased, and I've just heard a big announcement of a copilot which will help doctors dictate notes and things like that. Is that sitting under your area as well, because it obviously would feed into that cloud piece, or is it more of a standalone product?
Ilya Venger: So actually, the Cloud for Healthcare is running out of the larger organization that used to be Nuance, so for health. It's a very large organization now with the acquisition, very influential. We are doing quite a lot of work together with them. So the data foundations that they have in the healthcare data solutions are essentially running on top of the infrastructure that our team is developing.
Mark Smith: Okay, okay, interesting. Tell me about small language models. We've heard a lot about them, and Microsoft seems to be the leader in the discussion. Everyone wants to talk about LLMs out in the market, but, as I recall, maybe 18 months ago Microsoft started talking about small language models. Can you just explain to the audience what's the difference between an LLM and an SLM, and what's your thinking around this?
Ilya Venger: Yeah. So very, very briefly, the difference is actually encapsulated within the name: large language models, LLMs, versus small language models, SLMs. Now, small language models are significantly smaller. So if we actually look at the size of the model, the leading models are rumored to have about one trillion parameters, potentially some of them more, or in the hundreds of billions. Small language models are, I would say, in the up-to-tens-of-billions of parameters. So they're significantly smaller, which means that they have a smaller memory footprint. They require much less processing power. They sometimes require, well, actually not necessarily less data, and we're going to talk about this in a second, but they crunch the same amount of data to produce more distilled results.
Ilya Venger: And small language models are particularly effective when doing general tasks that are inside the consensus, what I would call, of the large language model. So very complex reasoning, for example, would be complex for a small language model; that is more doable with a large language model. Also, small language models would have less knowledge; large language models would be more accurate on the knowledge. However, in particular domains you can have significantly lower hallucinations, because you would have trained the model to be a specialist in a particular domain. So, you know, think about this as generalist versus specialist. You've got a generalist, that is, the large language model, that can do everything, because they have been trained on huge budgets, and they require huge budgets to run, and then you've got generally high quality.
Ilya Venger: And then specialists are specialized in a particular area. We've got some very interesting numbers coming from industry analysts, and what we're observing in the market is that customers are expecting to see a very, very large number of different small language models in different domains.
Ilya Venger: So I think Gartner is predicting that more than 50% of models are going to be domain-specific small language models rather than large language models. And with the work that we're doing, we're saying, okay, there are particular tasks, for example answering questions on financial reports, or analysis. So one of the models that we've released with one of our partners, Saifr, which is within Fidelity Labs: they have released models that specifically evaluate marketing communications going out to clients within the retail banking or retail investment domain, in order to decrease mis-selling, etc. So you've got specialized models that are good at specialized tasks, and therefore they are a really good fit for industries, because this is where we, as industries, can excel: we can combine data that is relevant to the industry, train the model for particular tasks that are relevant to the industry, and then achieve a significantly better cost-performance ratio, as well as just general performance quite often.
Mark Smith: Yeah. What are your thoughts then on Fabric and data management? For a long time, one of the illustrations I've used with customers is this: take, for example, Copilot, which is fed by the Microsoft Graph. An average-size organization, let's take an arbitrary number, might have 10 million data artifacts inside that organization. They take those data artifacts, they start querying them, and they go, well, the answer that's coming back is incorrect. And then we look at the underlying data and those 10 million artifacts.
Mark Smith: Human error has been introduced over years and years and years of data input. You've got an Excel spreadsheet, someone's created a formula wrong, it gets copied by a colleague, the colleague sends it to their friends, and then all of a sudden there are 10 copies of error data. And then along comes the model. It looks at it, and the reinforcement it gets is a reinforcement of this error introduced by humans, and this has happened potentially over decades of that organizational data. And then we go, oh, the model is hallucinating, and I go, well, hang on a second, it's just trained on data that already has the error in it. Why is data management so important, and why do you need to really distill down the data sets that you provide to a model so it is actually the correct data, without all this human error that's been introduced over years?
Ilya Venger: So, first of all, it's a journey and we're still sorting things out, right, and I think that this is important. So let me first quickly talk about Fabric, because maybe not everybody's familiar with Fabric, and it is related, but it's not the full solution to the problem that we've been talking about. Fabric is Microsoft's essentially unified data platform. You can bring data from multiple different sources, whether these are Microsoft sources or in your blob storage or in your lakehouses, both within Microsoft or outside of Microsoft. You can symbolically link anything that you've got on AWS, or if you've got something in BigQuery, so you can bring it all into one environment that unifies the different personas there. So you've got essentially the whole pipeline, starting from data architects, data engineers, data analysts, and business analysts that are working with Power BI; all of them have the tools that allow them to operate on data on top of Fabric. So this is what Fabric is: one environment where you can manage your data, transform your data, clean your data, etc. So, to your question of how we deal with non-clean data: the first thing is, okay, let's make sure that we've got one place where we can work on the data. The second thing is that you need to have your governance processes in place. So, again, Fabric is connected to Microsoft Purview, and Purview is a data governance platform and a data security platform, so that is going to be your data catalog where all your data assets are registered. So the data assets themselves are available in Fabric and registered in Purview, that's the simplest way to look at this, and that will allow you to build processes in order to be able to process your data and clean it and deduplicate it and trace errors, etc.
And then we come, of course, to AI, and AI comes in two places, right? On one hand, you want to use AI as much as possible in order to make sure that your landscape is clean and is usable downstream, also to be consumed by AI. So you've got AI at the entrance, as a filter, cleaning and organizing the data, as well as AI getting the data out and feeding it back into your agent.
Ilya Venger: Now to the specific question. I think the journey of, you know, just taking all your data and feeding it all in, it cannot work, right. It just cannot work because, exactly as you said, you need to essentially filter and understand, okay, which data sets are you going to be dealing with, which data sets are the more important ones. Then start working through these and establish your own governance programs. And, you know, we have not invented many new things in that space within the last 10 years. I think over the last 10, 15 years we've actually been on that same journey.
Ilya Venger: So, starting from when we defined big data: okay, we need to get our hands around that, get a program around this, make sure that we govern those particular data sets. Now it's much easier to get access to those data sets, much easier. We've got better tools to curate, we've got better tools that enable those processes, but that's a mandatory part of data governance. Taking my own lens, we are saying, okay, once we're bringing in data, if you are in a particular scenario, like an industry scenario, you need to make sure that you've got semantics on top of that data, right? You need to understand what the data actually means, and you need to make sure that you harmonize, so that the definition of a customer from your, I don't know, branches in New Zealand and your branches in Australia can eventually be merged. And you need to be able to put all of them into the lakehouse such that they talk the same language and such that you harmonize data from various different systems.
Ilya Venger: And so our team is responsible for industry data models. These are pre-built industry data models that are humongous, like 3,000, 4,000 tables for each one of the different industries, highly normalized, that allow you to bring in and represent a variety of different data points, with rich semantics, essentially saying, okay, this is the table, this table means X, Y, Z, this is the column, so a customer means the same thing for everybody who brings data into the lakehouse. So this simplifies the work of data architects. And then we also simplify the work of data engineers that need to bring data from multiple different sources. So we've got very efficient transformation tools that allow you to bring the data in. That's kind of how we're looking at it: governance, architect, engineer.
Mark Smith: Where do you see, being that you're focused on industries, industry-specific agents going? With the clouds that Microsoft has, we've seen the common data model per industry. Are we going to now see standardized industry-specific agents? Let's use the healthcare use case: could I have a nurse agent that has the knowledge, the training, of a skilled nurse in neurology, for example, working alongside maybe a physical nurse, you know, a human that might be doing their work, and assisting them? And they're not just going to be trained on the education of, let's say, one university; it's going to be everybody from all the major universities, potentially, as training. So we're going to get this massive amplification of knowledge, but just on a specific topic, and then that agent is going to assist people in their day-to-day roles that work in those areas.
Ilya Venger: Yeah. So this is exactly what we've got now with the DAX Copilot agent, where you've got an assistant for a doctor to take notes, okay. So it's still a doctor's assistant; it does not replace the doctor, it does not do diagnosis directly, but it is an assistant. So this is one thing that is already brewing there. I think the important bit is that Microsoft itself wants to facilitate, via partners, the creation of these agents, right, in most of the areas. Specifically within healthcare, we are very, very deep, because we've got the Nuance acquisition, so we are very deep in that space and we've got very particular IP there that came with that acquisition. In other areas, quite often what we have is, I would say, a starter agent. So, for example, we've got a retail shopper personalization agent, right? For an e-commerce retailer, if they want to build an agent that is going to be sitting inside their website or on top of their website, we are providing a starting point that provides the scaffolding, but we will require the customer to connect to their own systems, to maybe augment within their own workflows, etc. So within many of the industry clouds, we've got an agent that is intended to be amplified either by customers or by partners, and for partners and customers to build on top of. So this is one bit.
Ilya Venger: That's when we're talking about agents. And then the other element that needs to be taken into account, a good way to think through this future world, I would say, is skills and tools.
Ilya Venger: Right? We are going to be offering skills and tools. For example, one of the products that my team has developed is a financial document analysis skill for agents.
ILya Venger: So it is a particular skill that can be plugged into any agent. So this currently is available within Azure Marketplace. You deploy it into your own tenant and then this can be connected to any agent that you want and it will provide significantly better quality RAG solution than your standard out-of-the-box ones, because we have provided a lot of metadata and document crunching capabilities into this skill. That is then going to be available for building agents wherever you're building them. So that is relevant to the industry, right, because this is a financial agent or financial agent skill, and then we might have manufacturing standard operating procedures analysis skill, so you will have different these kinds of building blocks with which partners in each one of the industries are going to be able to build their own agents that are going to be sieving knowledge from everywhere, but we do need to smartly sieve that knowledge. Yeah, and each area requires knowledge representation in their own way where does copilot studio come into?
Mark Smith: all this from your perspective yeah.
Ilya Venger: So Copilot Studio, it's an interesting one, right? First of all, sort of non-marketing-wise: Copilot Studio eventually is an evolution of what used to be Power Virtual Agents, and it has taken the best bits of that and essentially added all the AI goodness into the mix. So what do I mean? A relatively low-code agent that has different topics about which it can reason and within which it can act. These topics are selected by the orchestrator, a built-in capability based on an LLM within Copilot Studio. So every maker within every company should be able to build their own agent, either from scratch or starting from a starter template, and what it allows is to relatively simply create an agent for yourself from basic building blocks. So it could be an agent for myself, or an agent for my team, usually more effective, obviously, than just me creating for myself, but creating something for the team.
Ilya Venger: An agent that will be able to execute within the workflows and, because this is within the Power Platform, you are able to connect it into your flows with standard Power Platform building blocks. And what this allows additionally is, after you have built your agent and you've tested your agent, it allows you to publish that agent to a variety of different channels, right? One channel could be, of course, Teams. Another channel could be Microsoft 365 Copilot. And a third channel could be Slack; it doesn't have to be within the Microsoft ecosystem, or it could be a custom website. So you can publish that agent, and it can have many appearances in different places.
Ilya Venger: And that's what makes Copilot Studio very powerful, because you've got this composition, you've got AI, so LLMs, in your flows, as well as connections: you know, the 3,000, or 3,500, connectors that we've already got in the Power Platform ecosystem, so you can bring data from wherever. And actually, talking about curating data, you expect that the specialists on the spot, in the places that know their data, will not be bringing in all the data; they're actually best positioned to curate the specific data sets that they want to provide to their own copilot built within Copilot Studio. And then they will also be building it into the flows that are most logical for them within their organization, within their smaller organization, within their business unit, and then publishing it out into the channel where they communicate, rather than a channel that is forced on them by an external provider. So that's kind of where we are and where we're going, but it's a journey, as always.
Mark Smith: Yes, yes. We're fast closing in on the end of, well, Q3, and we've got a quarter to go. What's your feeling between now and entering the next FY? Even if we took it from a calendar perspective, what are you excited about that's going to happen in 2025, within 2025 overall? Yeah, let's go for this calendar year, as in: how far do you think we're going to progress in this AI space? Not just necessarily Microsoft, but what are your thoughts in general?
Ilya Venger: So I think, first of all, I can always be wrong, even within the nine months, right, and as a product manager, my job is to try and look forward. I used to have a horizon of two or three years. I remember when the whole ChatGPT thing came out, I sort of said, okay, my horizon has actually shortened to four weeks; I could not predict what new things were going to come within the next four weeks. I think now our horizons have lengthened a little bit, which is, I think, very positive, definitely positive in the product space, because we are less randomized, so we're building more purposefully. So I think there are two things that need to be taken into account. First of all, with the models that we are getting and that we have been building over the last, let's say, year, the relatively new generation of models, the continuous improvement that we have seen within the business space was within coding, right. And coding is very important, not because it's going to replace coders, that's not the reason. The reason why coding, writing code, is important is because then you can bridge to code and create neurosymbolic architectures. That's the simplest way to create that: you've got the large language model, which is associative, but it can write code, and then, once this code is written, it can be executed, and then you don't have all the limitations that you usually have with LLMs, that they are stochastic, probabilistic, et cetera, because you've hardened the probabilistic part into code, and now it stops being probabilistic; now it is ironclad and it gets executed. So we've had significant improvements, definitely on smaller portions of code. Writing a huge, enterprise-grade application, we're not there yet, and again, that is more about competing with standard human developers. But we've given our models and our agents, and that's the second thing.
We've given our agents an ability to traverse from the probabilistic and stochastic to the solidified and symbolic methods, which are: okay, write some code, evaluate that code, get a definitive answer. And already, on small snippets of code, it reasons very well. So we've given them this very powerful tool of, you know, properly giving them formal logic. This is one thing that I'm very excited about, and about how it's going to be developing.
Ilya Venger: I think all the models that we're going to see and all the agents that we're going to be seeing are going to be executing code on the fly, and we can see more and more of this. This started with Code Interpreter. Now we're seeing some wonderful things also on the UI side. I think that this is beyond 2025, but still, you know, we can talk about this in a second. But executing code, for the models to be able to execute code, is one thing.
Ilya Venger: And the second thing is agents, right, and what agents allow, and they allow it significantly better than where we were, let's say, two years ago, because I think it all started with AutoGPT and BabyAGI, you know, for those that remember those wonderful days, where you always looked at this and said, oh, this is magic, but it would actually, you know, go astray immediately. Right now, we already have much better agent patterns that allow for cross-validation, that allow coding again inside the loop, and much better platforms for hosting agents, and I think Copilot Studio is going to be a place where you can generate agents with low code, etc. So you've got all those elements maturing into something where you're going to have significantly more agents for significantly better tasks, with larger and larger portions of work that can be offloaded to the agent, that you don't need to worry about and that, you know, you validate. As a human, you validate, but then you also validate the edge cases, essentially, and you've got sufficiently good systems that are telling you, okay, this is an edge case, you need to validate that these, I don't know, 1,000 are not exceptions. So you can start managing exceptions rather than just giving general acceptance and reviewing everything, as long as we've got sufficiently high-fidelity systems. Yeah, so I'm very excited about that.
Ilya Venger: And I think one thing that people are starting to notice, and it appears here and there, I don't know how many have played around with Claude Artifacts or with services like Lovable or DevZero.
ILya Venger: So there's multiple different that allow you to generate, essentially, ui on the fly. I think that if we look a little bit further out and this is where I think Microsoft strategy also is going eventually with Copilot is that what you're going to be seeing is that more and more is going to be generated on the fly a UI that fits your specific needs yeah, and specific needs of the user in the moment, because it's not logical to always talk to your agent or to your co-pilot, right? What you want is you want to convey in the best possible way, in the shortest time span, to convey your intent. Okay, you know, if I need now to choose a color, it's not logical to say oh. It's not logical to say oh, red, 131, green, 25, blue. You know it's like you don't want to go there. What you want is you want the color picker right.
Ilya Venger: So you either want it to magically guess what exact color would be the most beautiful to you, or, yes, you give a person a color picker and they pick a color, because we operate on the visual as well as audio, and nobody takes our eyes away. And I think that this is something that is going to be happening more and more. I don't think it's going to be that all the code is generated on the fly all the time, although even there, we can already see recent demonstrations of games that are being developed by the large language model as they're played; it generates the stages and the levels dynamically. But what we're going to be seeing is probably that it's going to start with particular elements being recombined.
Ilya Venger: Also, some elements are being put aside: okay, this is a good piece of code, let's reuse that. Okay, so here's a checkbox, here's a picker, because you don't need to reinvent these things. You can just take these off the shelf, and then you're going to have a proliferation of libraries of these elements that right now quite often require, you know, design, and somebody to select them and pick them and make sure that they work together. So I think that all these things are going to be significantly simpler, because code can be generated on the fly, including the front-end code. So very exciting times, I think, from that perspective.
Mark Smith: Very inspirational. I love it. You've just set a whole bunch of ideas off in my mind. Ilya, thank you so much for coming on the show.
Ilya Venger: Thank you. Thanks for having me.
Mark Smith: Hey, thanks for listening. I'm your host, Mark Smith, otherwise known as the NZ365 Guy. Is there a guest you would like to see on the show from Microsoft? Please message me on LinkedIn and I'll see what I can do. Final question for you: how will you create with Copilot today? Ka kite.

Ilya Venger
Ilya Venger is a Principal Product Lead for Industry AI at Microsoft, where he leads teams in designing and building data and AI products and platforms, tackling strategic challenges by bridging technology and business. He is passionate about shaping our AI-driven future—thinking, writing, speaking, and engaging in meaningful discussions about how we get there and what it means for all of us.