Build an AI-Ready Culture Without the Hype
Sam Fankuchen


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM 
 
Sam Fankuchen shares how organisations can move beyond AI hype to build real capability. The discussion focuses on creating an AI-ready culture, using AI agents with human oversight, and adopting AI responsibly in regulated environments. Sam explains why transparency, ethics, and experimentation matter, and why delaying adoption carries its own risks. The conversation is grounded in practical experience, showing how AI can scale human impact while keeping people, trust, and quality of life at the centre. 

👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/805

🎙️ What you’ll learn  

  • How to build an AI-ready culture without waiting for perfect certainty 
  • Where AI agents deliver value beyond chatbots in day-to-day work 
  • How human-in-the-loop design reduces risk while increasing speed 
  • Ways regulated industries can innovate responsibly with AI 
  • Why slow AI adoption can be more dangerous than thoughtful experimentation 

Highlights 

  • “We have more AI agents than we do employees.” 
  • “Ethics means understanding where we are in the curve of possibilities.” 
  • “We are not going to turn back the clock.” 
  • “Companies that do not figure this out will not be around.” 
  • “There is no such thing as total comfort with AI.” 
  • “You need a safe place where people can share what they are learning.” 
  • “AI is an extension of culture, but also something new.” 
  • “Every minute that we wait diminishes value.” 

✅Keywords 
artificial intelligence, ai agents, ai culture, responsible ai, human in the loop, regulated industries, ai adoption, business transformation, ethics, productivity, collaboration, ai strategy 

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

I’m Mark Smith - nz365guy - Helping people reach their full potential 

I have been a Microsoft Applications MVP for over 14 years. I am passionate about helping people reach their full potential, through training, coaching and mentorship. 
 Accelerate your Microsoft career with the 90 Day Mentoring Challenge (https://www.90daymc.com/) 
 
Support the show
 
If you want to get in touch with me, you can message me here on LinkedIn (https://www.linkedin.com/in/nz365guy).  
 
Thanks for listening 🚀 - Mark Smith 

00:07 - Cut through the hype: what “real AI” looks like

02:47 - From service to systems: building mission-driven tech that scales

08:27 - Ethics as strategy: choosing the future you want to live in

14:20 - Build an AI-ready culture: make a safe space to learn fast

18:25 - The leap from chatbots to agent networks and why it matters

21:44 - Deliver business value: where AI adoption really gets unstuck

30:24 - The real trade-off: risk of AI vs risk of waiting

00:00:07 Mark Smith
Welcome to AI Unfiltered, the show that cuts through the hype and brings you the authentic side of artificial intelligence. I'm your host, Mark Smith, and in each episode, I sit down one-on-one with AI innovators and industry leaders from around the world. Together, we explore real-world AI applications, share practical insights, and discuss how businesses are implementing responsible, ethical, and trustworthy AI. Let's dive into the conversation and see how AI can transform your business today. Welcome to the AI Unfiltered Show. My guest today is Sam, the CEO of Golden, a platform redefining volunteerism through technology out of Los Angeles in California. Links are in the show notes. As always, Sam, thanks for joining me.

00:00:57 Sam Fankuchen
Thanks for having me, Mark.

00:00:59 Mark Smith
Cool to have you on. The last time I was in Anaheim, I watched a baseball game at Anaheim Stadium, my first baseball game ever, and that was 30 years ago.

00:01:12 Sam Fankuchen
Wow, it must have been right after the team started.

00:01:15 Mark Smith
I don't know. Man, I was working for a company called New Horizons, a computer training company whose head office is in Anaheim. And yeah, I think that was my first trip to America even. It's a long time ago, but I remember the city well. Tell me about food, family, and fun. What do they mean to you?

00:01:36 Sam Fankuchen
Probably family first. At this point, I have two young kids, a one-year-old daughter and a three-year-old son and a beautiful wife, and we try and spend as much time as we can together doing meaningful things and appreciating the beauty in the world that surrounds us. For fun, it'll usually include doing something with them, but I also really love traveling and restoring old cars, surfing, exploring the world, catching up with friends, and food. For me, it's a chance to appreciate the thought and passion that others put into their creations. I wish I were interested and good at doing it on my own. It's just not at the top of my priority list, but happy to enjoy wonderful restaurants like we have in LA and also explore the world and understand what people consider to be their home-cooked cuisine.

00:02:29 Mark Smith
Nice. I identify with the young kids. My boy is three years old and I've got a five-year-old who has just started school in New Zealand. So all since COVID, of course. Tell me about this platform that you have.

00:02:47 Sam Fankuchen
Sure. Where would you like me to start?

00:02:50 Mark Smith
How'd it come about? Why'd you come up with it?

00:02:53 Sam Fankuchen
Well, this is a long story that's been quite well documented on a number of other podcasts, so for anyone who's interested, I'd encourage you to hear my founder story in depth. Today I'll touch on it a little bit, but I won't go as deep as there is room to go. The short version is, I grew up largely in Southern California and had an opportunity to go to high school in Boston, or just outside Boston. My parents and three siblings went to move me in there. And through a strange sequence of events, I said goodbye to them, and they were all scheduled to take American Airlines Flight 11 home to Los Angeles from Boston on September 11th, 2001. As some may remember, that was one of the flights that was hijacked on 9/11. For several days, I believed I had lost my entire immediate family, only to discover later that they had flown standby the night before and didn't tell me, and I had no way of knowing that they would have done that. I lived through all of the events related to September 11th while going through my own journey of emotions, not just with a close call for our family, but being very close to the action. In fact, my sister's fiancé lost his father on 9/11, and there are a lot of other people close to my family and friends who were directly affected in ways I was fortunate not to be.

00:04:24 Sam Fankuchen
And the experience of being that close to that disaster taught me a whole lot about what we do with our lives and what's the meaning of them and in them. And that led me to be very interested in service as a means of discovering worthwhile uses of time and also where we can advance quality of life for other people. And through my interest in service, beginning at a young age, I started to discover where most people would start on that journey and what it felt like to go through it, what people got right, what they missed, and what was yet to be accomplished, and just knew that I had always been interested in entrepreneurship and especially technology entrepreneurship. And I knew that there was a lot of opportunity to imagine how this should work and started systematically working on it as I began to graduate high school and move on to college. In the time since then, obviously we have accomplished a lot and evolved a lot and still have plenty of room to grow. For me and for our whole team, the idea wasn't necessarily to bring a product to market and be a silver bullet, but was instead to prioritize certain things that we felt should be true and real. And among those things are the ability for anyone of any background to live in their quote unquote golden moments through acts of service in all their forms. And another being people who are on a mission, having a real-time understanding of their productivity toward achieving that mission. And those were not things that were feasible prior to Golden. And I'm not saying we solved the problem for absolutely everybody, but we set an example that changed the arc of collaboration and development in the space, even before we started getting to the era of AI. 
And now we're just deeply excited that we can use everything we've learned in building mobile-first tools for anyone, whether they're a participant, volunteer, donor, advocate, or an organizer, at a small local scale or a global institutional scale or anywhere in between, in the nonprofit sector or in any other sector that does any kind of public service, which can mean government, corporate social responsibility, education, disaster relief, and so many other categories. We've provided end-to-end, real-time tools that are compliant with all the major regulations, that put the human being in the center, manage their data appropriately, make it interoperable with other systems, and now use AI to help everyone understand their potential and be more thoughtful about it, whether as an individual participant, an organizer, or somebody at the systems level who's thinking about interoperability and the potential to solve real problems. 2026 is an incredible year for the applications of AI, not just the theory of AI. And we're very fortunate that all the work we've done toward an understanding of human capital, labor markets, and collaboration is directly portable to a world where you have superpowers that multiply what a human being is capable of doing, with an image of a highly capable human in the form of an AI agent and networks of AI agents. And regulations are advancing in different ways in different environments. But even in the last 48 hours, we've seen what AI can do in the real world. We're speaking today on March 2nd, and there were major changes to global powers over the weekend using AI, and there have been for the last couple of months. There are all kinds of opportunities and considerations surrounding those opportunities. And we are a tiny fraction of a percent of a slice of the opportunity, but we're outsized in our eager interest to move the world toward what our potential for quality of life can be.

00:08:27 Mark Smith
You mentioned ethics there. How does ethics play into this?

00:08:33 Sam Fankuchen
I think the whole point is to start from a position of imagining what future we would like to live in and making it real, especially where we haven't had a chance at an improved quality of life, which many people who are close to this technology see as a possibility. There are also other possibilities. And ethics means understanding where we are in the curve of possibilities and options, and making the choices that optimize toward goals we believe we would be proud of sometime in the future. I don't think we're going to make perfect choices, especially if we take on the ownership of making choices to give people a shot at something. We're going to learn a lot, and I hope that the consequences are minimized and the risks are mitigated everywhere they can be. But understanding how to operate in that arena requires somebody to think ahead of time about who they are, what they would accomplish, what they will do, what they won't do, how they'll do it, what will happen in a challenging circumstance, and how to make sure that we uplift populations who would be vulnerable and maintain the greatest pleasures and satisfactions in human life as we know it, while also helping us get beyond the day-to-day requirements of living and breathing, and thinking about what the role of human civilization in the future can be.

00:09:51 Mark Smith
So give me some examples of where you're seeing this applied.

00:09:56 Sam Fankuchen
I'll speak from concrete examples first, and I'm trying not to be affected at all by discussion of contemporary events, because we simply don't know what the consequences of the choices in the current moment are. But they're significant, and I think everyone's paying attention. The place that we started from, before we even did anything specific with AI, was our culture. We have an internal culture document that dictates everything we do as a company, and we added to that culture document a section on artificial intelligence: what it can do, what we would like it to do, what our role in the ecosystem is, and how we're going to think about processing it. We originated that, discussed it, understood it, and tested it before we even started building AI products. When we make AI products, we do it in a way where we allow users to activate them when it's appropriate for them. There are certain baseline technologies that we provide that are not explicitly AI. In the world of software in general, though, where your software stack begins and ends and somebody else's begins and ends is sometimes in flux, and you can't totally control what other people choose to do with AI. But within the confines of our organization, the products that we do R&D on and support in production,

00:11:20 Sam Fankuchen
we try to make things extremely clear, wherever our users are. We have policies that comply with all the applicable regulations for AI specifically, but also more generally for privacy and security and so forth, in all the environments where we operate. And we sometimes operate in very high-profile environments: we have HIPAA compliance, we're SOC 2 compliant, and we work with children and refugees and hospitals. Not all the time, but we do for a lot of people who are really serious about that mission-critical work, and we always want to be seen as an asset when the stakes are real. However locally significant or globally significant the results of those programs may be, we just want to be a great player who's additive, and not subtractive, of all the value that organizations who use Golden can create on their own. However, there's also a responsibility to give people the best of what's available when they're ready. Pushing the boundaries and understanding what AI can do so much better than humans, with the right supervision and human in the loop and so forth, is a responsibility that we have as an aspirational market leader. I mean, today we consider ourselves to be unquestionably the market leader, and we would love to always be in that echelon of the conversation. There's no way you can be there without a posture that looks to AI and is excited about it, understands what the future state could look like, and is making responsible choices to get the best version of the options prioritized. So in our company, everything I just said would be repeated in its own way by every single team member. We do not hire people who don't have those beliefs about AI. Everybody in our company uses it operationally for different things. We have more AI agents than we do employees helping process our regular work.
And we have the highest concentration of capable people supervising all that, doing things by hand whenever it's appropriate to do things by hand, and learning. Very often, things that involve AI have us thinking about what's possible and testing it and building infrastructure, more so than it has to do with what we do with an individual's data to personalize or control their experience. That's really up to the end user. It's not really our call, but we want to have architecture that makes it possible for people to have the cutting edge of whatever experience is appropriate.

00:13:46 Mark Smith
In your organization, where you've obviously got a lot of buy-in from your staff, how do you create that AI-ready culture? And how do you suggest other organizations create this AI-ready culture?

00:13:59 Sam Fankuchen
I think it starts with what I just described about your own culture fundamentals, whatever those are (mission, vision, values, or something much more robust like what we have), and understanding that AI is an extension of, but also something new related to, that culture, because not everything you do is directly portable into AI. So you should think about how to port the things that make sense, and then create a safe space beyond that to do what's possible that hasn't been possible before AI. After you do that, you have to understand that nobody is totally comfortable with their understanding of AI. And I say that having had the great privilege of being in the room with the people who've developed frontier models and thought about the philosophies before those frontier models were possible, or who have worked for decades with supercomputers before they were considered AI, or who are thinking at the policy level about what the implications will be decades from now, based on today's choices at the state or federal or global level related to AI. So there is no such thing, as far as I've seen and to the best of my ability to assess, as total comfort. And anybody who's ever contemplated what AI is is aware of what it's capable of doing if it's misguided. Those things are all real, and we shouldn't dismiss them. But we also are not going to turn back the clock. We live in a world with AI in it, just the way we live in a world with the internet in it and electricity in it, and the world is going in that direction. So we can make a bunch of choices that shape the benefits that each of us gets, and we should take ownership of it. Before we do that, there's kind of an intermediate step where you have to get everybody comfortable with that being the reality. You've got to have all the discussions, at all the times and in all the places, with all the stakeholders: the ones you know, the ones you've overlooked, the ones you don't know exist.
You don't need to be slowed down by it, but you need to do the work to understand where to begin. For most people, in the context of the Microsoft world, that means spinning up a Teams channel where you can have an open dialogue about AI, and people can share news or just have discussions, or talk about what they did with it or what they'd like to know about it. That way, through osmosis, even if somebody's not taking an active part, or they're not going to trainings, or they're not in a function where there's a requirement, they're still seeing it and understanding the velocity of the change. I spend two hours at the beginning of my day reading AI news, because sometimes, and more than sometimes, almost every day, there's an insight in there that fundamentally changes how I would spend a lot of the rest of my day and week and year. It happens really quickly.

00:16:52 Mark Smith
Yeah.

00:16:54 Sam Fankuchen
So understanding how to find those insights, what to dismiss, where to go, what news seems trustworthy, and what changes on a daily basis and what doesn't, is an important diagnostic. Otherwise, it can just seem like a whole bunch of noise, or you go really far down a certain direction and you realize the slog that comes with learning. And that, if you don't have a broad enough perspective, can slow you down or make you feel like it's not worthwhile, when in reality it's totally worthwhile. And it's not always a big reveal, like the first time somebody used ChatGPT three years ago now; this is the week it came out, three years ago. I think all of us remember where we were the first time we used it, and what we realized was different. That's a huge reveal, in a way only a handful of things in my life have ever been. And there may or may not be more like that. It doesn't feel like that every day. But every few days, there's something that really changes how you think about it, in my view. I'm curious what you think.

00:17:51 Mark Smith
Yeah. I think about the last six weeks, some of the changes that have gone on, and the heavy pivot into agents. The talk about agents last year has become real: like you said, I've got many more agents in my business than I have employees now, and that wasn't there six weeks ago. So the scaling and what is achievable now is just mind-blowing to me.

00:18:18 Sam Fankuchen
That's good for you. So in your business, and when you have agents, to what degree are those agents networked and doing tasks with each other versus one-off tasks between you and the agent?

00:18:29 Mark Smith
They all are. They're all networked. They have an orchestrator across them, and then I'm the human in the loop, and other staff are human in the loop on anything that needs to come up to that level, but otherwise they are, they all talk to each other.

00:18:47 Sam Fankuchen
Yeah, I'm really glad to hear that. A lot of people talk about agents, and we do too. We have some that are not networked because of the stacks they're on, the kinds of tasks they have to do, or the constraints around the business process. There are sometimes limitations for people, and we're at a point where there are some for us too, but we also try to network them together. And because, even though you have enthusiasts listening, not everyone is in the place that you're at, we could certainly talk about where there are synergies and about the choices of agent architecture that make sense. I think that would be very productive, because when we have this conversation in a year, I think most people will be at that level.

00:19:31 Mark Smith
Yeah. I think it still requires a technical skill set and, what would you say, a will to work through the occasional shortcomings and the need to relearn various parts of what you're doing, or to go down an architectural path that you just have to roll back because it's not going to work. But you learn from it and redo it, and that's where the will and the initiative to keep going come in. Tell me about, you know, I feel we've moved beyond chatbots now into this area of agents, which really tangibly impact a business. How should organizations be looking at delivering real business value? And I suppose I want to separate this from infrastructure-type value, where you go into an organization and can clearly see things you could build with AI that would transform it, and focus more on the day-to-day staff. What are you seeing? I've seen some CEOs of the large providers say that within 18 months there'll be no white-collar work. I'm not worried about the fear of that, because I'm pretty comfortable with the things that will evolve in the months ahead. But data from the start of this year shows that AI adoption is still only around 3% globally, right? So there are a lot of companies that are stuck, maybe thinking they need to do something but not doing anything, and they're not getting any benefit, any realization.

00:21:10 Sam Fankuchen
There's a lot in those comments, so I'll start with what's top of mind, but feel free to course-correct if I miss the point of your thought process or question. The importance of having transparency about AI, and a safe place where people can go to share what they're learning or ask questions or just be surrounded by it, is huge. Until you have intellectual honesty and comfort with understanding that that's today's reality, it's very hard to do any kind of thoughtful adoption whatsoever. Before I was an entrepreneur, I spent some time working in a big company and then as a management consultant advising big companies, and by big companies I mean hundreds of thousands of employees. A lot of times in companies like that, particularly ones with resources who are thinking about investing in the future, you have to network to make a case. So there's a lot of informal process that goes into building a case for something and building a coalition of people to work on it. You just have to be comfortable having what are called hallway interactions, those fortuitous bumps into other people and conversations like you might have on a college campus, but instead in a big building. When you have a culture like that and AI is part of it, that's when you find the really practical, cool prototyping experiments, where we start to talk about why something we do today is so process-oriented and time-consuming, whether we could expedite it, or what would happen if we had access to certain types of information that empowered decision-making on time instead of in a cycle. There are a bunch of moments like that which are eventually going to work themselves out in any competent company, because companies that don't figure out how to do that now will not be around in a couple of years. It's going to happen really quickly. It's going to be like Blockbuster disappearing with Netflix.
And it's not even hypothetical. The only exceptions to that rule are when the data is so proprietary, so segregated from everybody else, and so mission-critical for its delivery, and the workflows specify certain kinds of interactions among the stakeholders that have to happen in a certain way, and it's completely resistant to all the other forces on your business, like Porter's five forces. Then maybe there's a pathway forward. But look at what the examples of that would be: healthcare, government, the federal government. Those are exactly the places where AI is being deployed now. There may be smaller examples of things that are offline, and of course there are, but precisely those insulated environments are the ones where you're seeing a lot of innovation today, and a rapid rate of innovation, because they've been very thirsty for it and they understand the potential. You know, the first Nobel Prize related to AI had to do with mapping the genome, so that we could do all kinds of things to improve quality of life for people. It's super, super exciting. So we'll pause there. I do think that the adoption is going to happen a lot more quickly.

00:24:15 Sam Fankuchen
It's going to do all kinds of really mission critical things, but nobody is to blame and nobody's totally behind the curve if they haven't even opened any kind of LLM outside of ChatGPT. I mean, like we're so early in the game, but the velocity is changing so quickly.

00:24:32 Mark Smith
Yeah. So true, so true. Highly regulated sectors, and I probably hear this more from the people I speak to in Europe, have worries about everything from data integrity to security to the risk of leakage. You mentioned HIPAA with healthcare, but the financial sector has similar regulatory controls around it, and many, many sectors of the market do. How can they still innovate with AI in a way that accomplishes their regulatory goals, but also gives them the speed they might want to innovate at?

00:25:20 Sam Fankuchen 
Yeah, until the regulations exist, they don't exist. And the best way to inform a regulation is with some firsthand experience of how it works or doesn't work, and where the risks are once you're in there. It seems like things are changing a lot, but I'll speak to a sector I understand well that's very, very hesitant about AI, which is nonprofits, for a variety of reasons. There hasn't been a lot of force to adopt technology in general. Everybody who has made choices in their life to prioritize the needs of others has a default setting of empathy. There are consequences for the work we do. We're trying to move things forward, but if you've spent a career moving things forward and an instant can move them back to zero, that's terrifying. And it's very easy to speak your beliefs into practice, especially when you haven't challenged those beliefs in practice. That's kind of the place where we are now.

00:26:14 Sam Fankuchen
I mean, there's a saying in car sales, where I spent some of my time early in my career: you race on Sunday and sell on Monday. In our world of public service, the trial by fire is disaster relief, because you need all the same tools as in blue-sky and gray-sky settings, but it's mission-critical, do or die. It either works or it doesn't. People need all the resources and they need them now. Every choice diminishes in value the longer you wait or the fewer options you have. Operating in settings like that forces a better understanding of what we should spend time on and what we shouldn't. And something I notice is a huge disparity among everybody who is an active voice, online or in think tanks or on campus or in boardrooms, about how AI works and should work, what we will and won't do, and how righteous and just we are for preventing something from happening. And that's real, because, like I said, what takes a lifetime to earn can be lost in a moment when you bear the consequences of making the wrong choice. However, what people don't see, and don't make an economic argument for as often in settings like that, is that every minute we wait, and every idea we have that we don't test, doesn't get validated and doesn't get to market; people with rare diseases die. There are all kinds of things we could be working on. A saying we use internally in the company a lot is a quote from the writer F. Scott Fitzgerald: that the true test of intelligence is the ability to hold two different ideas in mind and still retain the ability to function. In AI, there has never really been a moment as present for most people as this one.

00:28:08 Sam Fankuchen
And that's why I think the movie Oppenheimer, a year or two ago when it came out, hit such a nerve because the last time there was a central conversation societally about the significance of these choices, It was when the nuclear arms race was top of mind for everyone. And the difference was there were a couple thousand people in Germany and New Mexico, maybe a couple other places thinking about this. But today, billions of people use AI every day. And we're, again, only at, to use your statistic, I don't know if it's true, but let's say it is, 3% utilization of AI in organizations.

00:28:45 Sam Fankuchen 
We are just at a really interesting moment in time where everybody is aware, but not everybody is taking the responsibility of using it effectively or understanding what to do from here. There are AI regulations in the EU, and the EU was early with GDPR for privacy regulations, and all of those carry forward. But the US does not have standardized AI regulations. Other countries don't either, and some never will. And some will institutionalize that control within the government itself, which could potentially be the worst risk. So I don't think there's a right answer. The right answer is: be aware, and then be aggressive about moving along the compass toward true north for quality of life for human beings. That means protecting interests where that's clearly appropriate, knowing what we know. And it also means being really aggressive about pursuing things, knowing that it's not going to be perfect, if we think there's a chance at giving everybody a better quality of life.

00:29:50 Mark Smith
Yeah. Sam, you've sparked so many ideas in my mind, which is why I love podcasting. People think it's, you know, mainly to educate other audiences. That is, but boy, I get a lot out of it myself. And so thank you so much for coming on. I have learned a whole lot from you today. Appreciate it.

00:30:10 Sam Fankuchen
It was really kind of you to have me. I'm not sure I'm the world's expert, but I've certainly met a few of them. And we are excited about it, particularly excited because at Golden, it doesn't matter so much what we do; it matters what the users of our technology are able to accomplish according to their missions. And in this day and age, a lot of that isn't just volunteering. A lot of people know us for volunteering in disaster relief and so forth, but there are 50,000-plus organizations on six continents using Golden as their system of record to recruit, screen, schedule, track, and engage different supporters over their life cycle, whether they're volunteers, donors, advocates, partners, or anybody else. And if that can be helpful for you, obviously we would encourage you to go to goldenvolunteer.com and use a free dashboard, download some free apps, or go crazy and get the enterprise offerings that include Copilot integrations and all the other stuff. But we're just delighted and excited to be in a collaborative future like this. Prior to AI, it was really hard to do things together, and now it's way easier. So hopefully we can put some points on the board and improve quality of life all over the place.

00:31:23 Mark Smith
Yeah. And listen, we'll make sure we put those hyperlinks in the show notes. And we'll also, if you can send through the hyperlink for your origin story, in detail, just for folks that heard at the start and they really want to drill into it, we'll make sure that's in the show notes as well. Thanks again, Sam.

00:31:39 Sam Fankuchen
Thank you so much, Mark.

00:31:41 Mark Smith
You've been listening to AI Unfiltered with me, Mark Smith. If you enjoyed this episode and want to share a little kindness, please leave a review. To learn more or connect with today's guest, check out the show notes. Thank you for tuning in. I'll see you next time, where we'll continue to uncover AI's true potential, one conversation at a time.


As Founder and CEO of Golden, a leader in volunteer management and fundraising technology, Sam Fankuchen oversees engagement strategies for more than 40,000 organizations across nonprofit, government, corporate, education, healthcare, and disaster relief sectors. Golden has earned recognition from The Gates Foundation, Fortune, The Webby Awards, Meta, TIME, and IDEO, and partners with University College London, Microsoft, and Stanford HAI to advance AI for human well-being. Sam has spoken on social entrepreneurship, collaboration, and digital ethics at Stanford, Harvard, and UPenn, and most recently the UN’s High-Level Political Forum, and holds Board Director and Advisory roles for The United Nations Volunteer Groups Alliance, The Webby and Anthem Awards, Engage Journal, GivingTuesday, Gates Greater Giving Summit, and The Giving Platform Collaborative.