AI isn’t just changing how we work — it’s changing how we lead. But most leaders are still trying to apply old thinking to a very new game. Kree Govender leads SMB sales at Microsoft Canada, supporting 1.4 million businesses through Microsoft’s partner ecosystem. With over a decade of experience leading teams across South Africa and Canada, Kree has had a front-row seat to the AI transformation — and he’s helping shape what modern leadership looks like in this new reality.
In this episode, Kree breaks down what it means to be an “AI-enhanced leader,” why trust and curiosity are now strategic advantages, and how to lead teams that include not just people, but AI agents. We talk about cultural resistance to change, the risk of outsourcing your thinking, and how to stay human in a world of automation.
Whether you’re managing a team, experimenting with Copilot, or just figuring out where AI fits in your decision-making, this conversation gives you a grounded look at the leadership skills that still matter — and the ones you’ll need next.
Samuel
Hello, Kree. Very nice to have you on the podcast today. So Kree, you’re one of my colleagues, you’re a friend, and I’m very excited that we’re going to have this conversation about leadership in the age of AI today.
Kree Govender
I’m super excited to be here, Samuel. It’s wonderful to be here. We’ve been planning to do this for some time, and I’m very excited for what we’re going to chat about.
Samuel
Before we start, can you please explain your role at Microsoft? What’s your journey at Microsoft?
Kree Govender
Yeah, I’ve been very fortunate that Microsoft is the organization I’ve managed to stay at the longest — just over 10 years now. I started my career at Microsoft in South Africa. I led our specialist team for data and AI, our public sector team, the consulting services business. Then I came across to Canada three years ago — tomorrow will actually be three years to the day. I joined as a SalesX lead for the corporate side of the managed business, and then over the last two years, I’ve been leading our SMB business, which has the opportunity to serve 1.4 million Canadian small and medium businesses every day through our partner ecosystem. It’s very energizing and purposeful.
Samuel
That’s a lot of customers.
You’ve been right in the middle of the AI transformation — obviously being at Microsoft — but being a leader puts you in the right spot to experiment with new tools. When Copilot came in two, two and a half years ago, you had to learn how to lead using those tools. From your perspective, what does being an AI-enhanced leader actually look like today?
Kree Govender
Today it’s an absolute imperative. We’ve seen it recently with the announcement by Accenture, where the people who were largely let go were those unwilling, or unable, to be retrained in adopting AI. That ties into the three A’s I look for when I’m trying to bring people in. The first is attitude. The second is aptitude, your ability to learn and relearn.
Within our organization and now the broader landscape, for leaders and ICs alike, it’s about being a continuous learner. An AI-enhanced leader today is very much, in my mind, like a navigator at sea. The map has changed, but the horizon hasn’t. You still need a clear destination, but now you have better instruments to get there.
AI-enhanced leadership isn’t about mastering every tool — you can’t; there were 70,000 released just this year — or mastering prompting. It’s about mastering your curiosity. It’s about being comfortable in the gray area because it’s not a deterministic technology. It’s where data meets discernment and where your logic combines with your human instinct.
It’s about leading your teams to not see AI as this cold machine, but as an amplifier of human potential. I don’t like the term “using AI”; I’ve started replacing it with “working with AI.” It’s a companion. For leaders today, an AI-enhanced leader needs to look at it that way:
How does this help close gaps? How does it amplify what I do? How can it be a thought partner? And how can I bring that to the team so it lifts all our performance?
Samuel
You mentioned so many important things there. Curiosity — absolutely key with generative AI. One of the only limits to what you can do is your own creativity. I also love that you said it’s not “using” AI, it’s “working with” AI. That’s why we call it Copilot — it’s not the pilot; it’s the copilot.
When I train people on prompting, I tell them: act with LLMs like you would with a colleague. If your question wouldn’t be clear to a colleague, it won’t be clear to the LLM.
Let me ask: when you’re using tools like Copilot for decision-making, how do you balance gut instinct with what the AI tells you? We know LLMs may hallucinate. Is there a decision process you think leaders should experiment with first?
Kree Govender
Yeah, and this is something society at large needs to watch for. Many things we see today, we assume are facts — there’s a proliferation of information that’s not grounded in fact. So having the ability to be critical is key.
I think of instinct and AI as two lenses in the same pair of glasses — not smart glasses, just glasses. One lens sees nuance, and the other sees patterns. You need both to see the full picture.
I go back to a Microsoft principle, maybe not as harsh, but like zero trust in security: verify explicitly. That’s how you marry instinct with AI outputs. Microsoft’s responsible AI efforts build in safeguards that mitigate much of the risk and ground outputs in referenceable data. But still, verify. You cannot outsource your thinking.
If you delegate to a super-smart intern or employee, you wouldn’t just take the work as-is. You’d evaluate, add feedback, ensure your fingerprints are there.
As for where leaders should experiment first: decision support. Pick a process that’s heavy but emotionally neutral — like forecasting or prioritization. Let the AI challenge your biases. Ask it what the data doesn’t show. Often the most valuable insights lie in the tension between the model prediction and your intuition. The balance comes down to humility — knowing when to trust the numbers and when to trust that quiet voice built over years of experience.
Samuel
Do you have a trick for when you doubt whether the output makes sense?
Kree Govender
Very much so. I use different models and triangulate between them.
Samuel
That’s a good idea. A customer asked me about this yesterday. What I do is ask for citations: if the AI struggles to provide references, it may be hallucinating. With Copilot grounded in your Microsoft 365 data, hallucinations are rarer because responses are anchored to referenceable content. But for non-technical people, it can be hard to know when something is off. We’re also seeing AI-generated content everywhere, not always true, and future AIs will be trained on that content.
Kree Govender
Yes, and we’ve just seen an example of this. I can’t recall which consulting firm it was, but it was for the Australian government. They delivered a piece of work that wasn’t vetted, with no critical human analysis. The customer realized it wasn’t grounded in fact; there were hallucinations. They had to pay the money back, but the reputational damage is almost irreparable.
Samuel
Totally agree — I saw that too.
Can you share a moment from your own work where AI completely changed how you made a decision — for better or worse — where you heavily used AI as a leader?
Kree Govender
Yes, many examples. If time permits, I’ll share two.
The first was while preparing for a keynote. I had AI condense my speaker notes into quick cliff notes. Oddly enough, it was in Montreal, for the C2 Web Conference. When I got there, the technology failed and the speaker notes weren’t available. And I realized: nothing trumps human preparation. You cannot leave everything up to a device. Prepare for the situation where the tools fail.
The second example is more practical. We work with many partners in SMB, and we measure tons of engagement data — completion stats, program utilization, etc. When we applied AI to the data, it uncovered surprising insights: the partners scoring the highest weren’t the busiest; they were the most intentional. Those deeply engaged — not frequently engaged — had far better outcomes. We had been measuring frequency, not depth. That insight shifted our entire strategy.
Samuel
So you used AI for the analysis? Which tool?
Kree Govender
Yes, we used the Analyst agent in M365 Copilot.
Samuel
Great. I’ve heard good things about it. I should try it more.
Now, you took AI to improve your work process. What’s the biggest mistake you think leaders make when rolling AI into their workflow? You’re working with many customers, so I assume you’ve heard stories of mixed results.
Kree Govender
There are two.
The first is not knowing what a great use case for AI is. Because it’s topical and in the buzzword vernacular, there’s a fear of being left behind. Leaders jump in without doing the internal work to identify where AI genuinely makes a difference.
Related to that is clinging to traditional ROI. Traditional ROI is: what am I paying and how much revenue will it generate after costs? With AI, we need to rethink those metrics.
But the biggest mistake is treating this as a software rollout when it’s actually a cultural and mindset revolution. Too many leaders think it’s a technical challenge when it’s an enablement challenge. You can train an algorithm in days. Shifting people’s beliefs and habits takes months or years. We’ve seen how challenging adoption can be even inside Microsoft.
There’s a term I picked up — reflexive AI — where using AI becomes as natural as shifting gears in a manual car. Getting there requires cultural change.
Samuel
Since it’s a cultural shift, how do you spot cultural blockers that slow adoption? Adoption is a huge concern for organizations — you need to change habits, workflows, culture.
Kree Govender
Great question. Much of it comes back to leaders creating psychological safety.
Cultural blockers sound subtle, like:
“That’s not how we do it.”
Or, “I don’t trust what the AI says.”
What they’re really saying is:
“I’m afraid to look foolish,” or
“I’m afraid I’ll be replaceable.”
People will sabotage technology if they fear being replaced. Years ago, we were developing a biometric system for the Department of Home Affairs in South Africa. Devices kept breaking. We eventually realized employees feared losing their jobs, so they sabotaged the scanners.
So leaders must create a psychologically safe space where people understand AI will amplify them, not eliminate them. That takes a special kind of leader.
Samuel
Wow. And as a leader, you’re giving employees a gift: empowering them to use these tools builds their long-term career resilience. That Accenture report shows that the people losing jobs now are those unable to use these tools.
Kree Govender
Exactly. There’s a saying I love and share with customers:
“If you can’t waste minutes, you will lose hours.”
Meaning: if you don’t spend time experimenting and learning, you’ll lose out.
Samuel
I love that. When I train people on prompting, I tell them: you’ll spend more time creating a good prompt than a Google query, but you’ll save far more time in the output. You mentioned leadership skills — what new skills matter most in this AI-driven world?
Kree Govender
It’s about looking inward and being more human.
Leaders now must be interpreters — translating between what data shows and what the human heart feels. Skills like empathy, curiosity, adaptability. We did a course on adaptive leadership last year — it’s much more aligned with the world we’re going into.
When AI handles practical and predictable tasks, what remains for humans are emotional, moral, and creative aspects. Leaders must create safe spaces for experimentation.
We’re also going to see a flattening of hierarchy. We’ve already seen senior leadership layers removed in organizations to empower first-line managers and teams close to execution. Teams will form, achieve a task, dissolve, and reform — like SWAT teams. Humans become the organic component; AI agents become the synthetic component working in concert.
Samuel
You mentioned empathy, adaptability, creativity — all things AI cannot truly do. So it makes sense leaders must bring them.
I’ve heard Walmart introduced AI agent managers. Interesting concept.
I know trust is a big part of your approach. How do you build trust and keep people accountable when AI agents are part of the team? How can I trust that the work is really Kree’s, not AI’s?
Kree Govender
I think it was our CFO who said:
“Trust is earned in drops but lost in buckets.”
And inside Microsoft, we have a sign: “Microsoft runs on trust.”
Trust isn’t built in one meeting. It’s built day after day through genuine interest — personally and professionally — and through reciprocation. That’s how I’ve built trust, and it’s served me well.
Now, with synthetic employees joining our teams, trust begins with clarity. One of our leadership principles is: create clarity.
Start with conversations like:
What do we want AI to own?
What do we want it to inform?
What must humans still decide?
That clarity sets accountability. When people know AI agents are partners, not competitors, they stop fearing replacement and focus on their own accountability. It’s the human in the loop, and the human on the loop through observability.
Samuel
Great point. People forget the human in the loop. Some ask me: “Can I have an agent answer all my emails?” Then the receiver will have an agent reply, and agents will talk to each other — where are we in the loop?
Kree Govender
Exactly. And there was a recent Stanford study showing that people who outsource too much thinking to AI experience neural pruning — losing ability for critical thinking. You can often tell when something isn’t in someone’s voice. I’m very careful: I craft my ideas first, then involve Copilot. Never the other way around.
Before mobile phones, we remembered phone numbers. Kids today don’t need to. Another cautionary tale.
Samuel
I do the same. My son uses Copilot for school, and I teach him: try it yourself first, then ask Copilot how to improve it.
Kree Govender
Absolutely. I do that with my posts too.
Samuel
You’ve talked about agent-boss leadership on LinkedIn. For someone hearing that term for the first time, what does it mean and what does it look like?
Kree Govender
Microsoft’s Work Trend Index describes the journey toward becoming a frontier firm. The third phase is the agent-boss leader.
Leadership becomes not just leading humans, but leading teams that include synthetic or digital employees. The “boss” aspect is orchestrating human and machine capacity to create outcomes neither could achieve alone.
Think of agents not as one all-powerful entity but as specialists — like human specialists. You’ll spin up equivalents of writers, analysts, finance agents, etc. They scale in ways impossible for organic employees. You can create agent swarms to handle parallel tasks.
In practice, leading agents will look similar to leading humans: giving direction, evaluating work, offering feedback. Over time, you deprecate an agent and replace it with a better one. Every individual contributor will lead — if not humans, then synthetic teams.
Samuel
How do you see leaders balancing instructive agents, semi-autonomous agents, and autonomous ones? Won’t this overwhelm leaders?
Kree Govender
It’ll be a mix. We won’t fully index on autonomous agents. Leaders will still give instructions to some agents. Others will run largely autonomously but with guardrails. Like operator agents today that navigate websites but stop for credentials.
It will scale, but not replace leadership — it reshapes it.
Samuel
We’ve seen debates about responsibility: when AI makes a decision, who’s accountable? Where do ethics fit in AI leadership?
Kree Govender
This is tough — and crucial.
We cannot leave this solely to big tech. Government and nation-states must have strong points of view. Ethics must sit at the center of the table, not the edge.
Growing up in South Africa during apartheid, I saw what happens when systems lack moral accountability. We cannot let AI reinforce bias or inequality. Leaders must own the outcomes of algorithms as they own business outcomes.
Ethical leadership means asking:
Who benefits?
Who might be left out?
Big tech and governments must collaborate here.
Samuel
Agree completely. And right now, because we don’t have guidelines, responsibility falls on the user — deciding when to use AI. I’ve seen questionable uses of video generation tools like Sora — generating celebrities without consent.
Kree Govender
Yes. Just two days ago, I was disappointed to see platforms allowing explicit conversations even with age verification. I think we’re losing our ethical footing there.
Samuel
We’re almost at the end. My signature questions:
What’s your number-one productivity tip for working with AI?
Kree Govender
I’m a polymath — I love learning across many domains. The volume of information today is overwhelming. So I built an agent that goes out daily to the publications I care about, searches for specific keywords, and brings back a digest: headline, why it matters, key details, and a link. Sometimes it drafts a short LinkedIn post.
Recently, I added an audio version. When I’m training in the morning, I listen to the digest. That’s my favorite productivity hack.
Samuel
I have something similar, but yours sounds more advanced. I’ll need the instructions.
Kree Govender
Happy to share — it’s simple in Copilot.
Samuel
Last question: looking 10 years ahead, how do you see leadership evolving when humans and AI work side by side every day?
Kree Govender
Leadership will look very different. Even comparing today to when I started in 1996 — leadership used to be hierarchical, concentrated power, dictatorial. Words like empathy and care weren’t in the vocabulary.
In 10 years, leadership will look less like managing a team and more like conducting a symphony of intelligence. Leaders will blend human empathy with digital fluency. AI will handle tasks; leaders will handle trust. The more intelligent our tools, the more human our leaders must become.
It will force introspection — leaders must deepen parts of themselves they haven’t needed to explore before.
Samuel
That’s a very positive outlook. Thank you so much for your time, Kree. This has been super insightful. I think we have a few LinkedIn posts and blog articles coming out of it! As always, it’s a pleasure talking with you.
Kree Govender
Thank you, Samuel, for what you do. I learn so much from you. Your posts inspire and teach not just me, but countless others. Thank you.
Samuel
Thank you, my friend. Have a great day.
Kree Govender
You too.