Microsoft’s Future of Work Report 2025: What Leaders Need to Know | Dr. Jenna Butler, Microsoft

We gave people AI and no user manual. That’s the problem Microsoft researcher Dr. Jenna Butler sees playing out in every organization right now.

In this episode of Mastering AI with the Experts, I sit down with Dr. Jenna Butler, Principal Applied Research Scientist at Microsoft and one of the editors behind the New Future of Work Report 2025. Jenna has spent years studying what actually happens to humans when the tools around them change, from cancer biology simulations to developer productivity to the AI wave reshaping every workspace today.

We dig into what AI literacy really means for non-technical people, why the shift from creating content to evaluating it changes everything about how we work, and what the productivity pressure paradox is doing to teams that were told to “go be more productive” without a roadmap. Jenna also shares why she thinks technology never removes toil but only shifts it, and what leaders can do this week to change the conversation.

You’ll learn:

1. Why most people don’t understand that AI is a probability engine, not a search engine, and why that distinction matters

2. How the shift from content creation to content evaluation is redefining what “doing your job” looks like

3. What the productivity pressure paradox is and how it’s quietly backfiring inside organizations

4. Why AI models carry deep cultural and gender biases that most users never notice

5. How one researcher fell for a complete AI fabrication while writing a paper about AI hallucinations

6. What minimum viable AI literacy looks like for a non-technical employee in 2026

7. Why leaders should reframe AI adoption as a chance to experiment, not a mandate to perform

8. How Jenna builds her own custom tools with AI when existing software falls short


Whether you are a leader trying to drive AI adoption without burning out your team, or a professional figuring out how to use these tools effectively, this episode gives you a research-backed framework for navigating the shift.

My guest today is Dr. Jenna Butler, a principal applied research scientist at Microsoft and one of the editors behind the New Future of Work Report 2025. She spent years studying what actually happens to humans when the tools around them change, from cancer biology simulations to developer productivity to the AI wave reshaping every workspace right now. I’d like to mention that while she works at Microsoft, she’s here today sharing her own perspective and research insights. In this conversation, we dig into what AI literacy really means for non-technical people, why the shift from creating content to evaluating it changes everything about how we work, and what leaders should do to help their teams actually benefit from these tools.

Before we jump in, I have a small favor to ask. If you’re getting value from this show, hitting subscribe, dropping a comment, or leaving a like is one of the best ways you can support me. It helps more people find these conversations, and honestly, it means the world. Thank you. Now, let’s jump in. Hello, Jenna, and welcome. Thanks for being here. I’ve been looking forward to this one.

Jenna, you’re a principal applied research scientist at Microsoft. Your work sits right at the intersection of technology and human behavior: productivity, well-being, and how AI is changing the way people work. I think you specialize in developers and teams in hybrid and remote environments.

You’re also one of the editors on the Microsoft New Future of Work Report 2025, which is what we’re digging into today. But just before we start, can you please introduce yourself? Tell us a bit more about your background and your path as a scientist and as a Microsoft researcher.

Yeah, absolutely. Thank you so much for having me today. I’m excited to be here and have this fun conversation. To tell you a bit about my background: I have a PhD in bioinformatics, which not everyone has heard of, but that’s basically biology plus technology. So I’ve always enjoyed the intersection of multiple fields.

With the bioinformatics degree, I did computer simulations of cancer growth and looked at what types of drugs you could combine to halt early cancer.

Yeah, it was totally different. It was really great. My undergrad was biochemistry, then I did cancer biology, and I still work at Belby College as an adjunct professor in their radiation therapy program, teaching cancer biology, because I still love it and miss it. I think the solutions to today’s complicated problems are often at the interface of multiple disciplines, so I really enjoy being involved in more than just one thing.

After graduation I wanted something different, so I came to Microsoft as a developer and spent four or five years working on the installation and update platform for Office. Some people would think that was boring, but it was pretty fun. As I was doing that, I started having a lot of questions about how we wrote software and why we did it the way we did: was the way we assigned bugs efficient and optimal? What’s the impact on a developer of having this number of open bugs versus that number? How are the humans in the system impacting the system? When I annoyingly asked these questions repeatedly, I was pointed over to Microsoft Research, MSR, where they had a team called Research in Software Engineering, and this is what they do. They study the human factors of software engineering, because software is obviously a technology, but it’s written by humans, often written for humans, and humans have a lot of impact on it. So I moved into that research domain, spent a year at Microsoft Research learning under that team to do this particular style of research, and then came back to the product side of Microsoft, where I’ve been a researcher for the last six years or so, studying mostly developers and their productivity, but really all things humans at work. And of course, the last couple of years that’s been largely about AI.

Wow, that’s an incredible path and journey. Congratulations. And actually it’s very different from — well, not that different from what you’ve studied. But it’s not biotechnology, right? It’s more technology.

Yeah, it’s definitely just straight-up technology. But the only reason I did bioinformatics was because I wanted to help people with cancer, and I was afraid of being in the actual lab with the real cells. So I simulated it. And I feel like that’s what I like to do: help people. Now I’m looking at how people can be happier at work. It’s always a variation on the theme, but computers give us these interesting ways to solve the problems.

You’ve worked on the Future of Work Report; the 2025 edition was released last year, obviously. Mhm.

So to set the stage, can you tell us a bit more about what the report was actually trying to answer? What was the core question you were chasing from day one, and what surprised you most once you saw the results?

Yeah. So this is actually our fifth annual New Future of Work Report. I’ve been an editor or an author every year, and each year we’re basically trying to answer: how has work changed in the last year due to these massive shifts?

It originally started during the pandemic, so we were looking at how remote work had impacted work. Then people went back to the office, so we were looking at hybrid work. And then it so happened that there was another monumental shift in how we work: the introduction of generally available Gen AI. So this year we’re looking at what all the research over the course of 2025 says has changed about how we work and how these tools impact work. We use a fairly broad definition of work, so that includes education and different types of work: blue-collar work, white-collar work, all different things. And we limit the focus to areas where we have a researcher. Microsoft Research has labs all around the world, and we try very hard to only report on things where we know for sure, this is what we’re seeing, and we have someone with authority who can speak in that space. That’s why the report has over 50 authors. It’s really a collection of how work has changed over the last year, all around the world.

And you just mentioned that there are over 50 authors on this report, so it’s very extensive. I read through it, and it’s very interesting; there are a lot of citations to other reports that I’ve been reading through as well. It’s well worth it if you want to understand where we are and where we’re going. The report covers a lot, but AI literacy keeps coming up as the foundation.

So why did that rise to the top? And is this already a real gap inside organizations today, or more of a warning sign for what’s coming?

Yeah, I think you’re right. You can see AI literacy woven throughout the report. We talk about how education should change for young students to understand more about technology and AI. And then we talk about all the change management in massive organizations: how do we uplevel our employees to be AI-focused? And I think that is a really key thing, because people are now using this tool, and without a basic understanding they might not use it efficiently. They might overutilize it. They might underutilize it. They could even have some dangerous outcomes from not using it correctly. So I do believe that everyone should have at least a basic knowledge of what these tools are and how they work. And I do think it’s already a gap we’re seeing. Now, I also think that the people who build these technologies should make them easy enough to use, and safe enough, with the right guardrails, that you don’t have to understand everything about how they work. But having a basic understanding will really help people use them in the most productive, effective, and safe ways.

And where do you think the biggest knowledge gap around these tools is? What don’t people know?

Yeah. Well, I am always surprised when people don’t know that these tools can make things up. We call it hallucinating, which, for the general public, I don’t know if that’s a great term, because it sounds like the model is doing something on purpose or just coming up with some fanciful ideas. But in a very basic way, the way these large language models work is that they’re just pattern matchers. They’ve consumed huge amounts of text and video and audio from the internet, and they know: hey, when I see this sequence of words, the next most likely word is this. And then they can keep building that up into the next most likely sentence and paragraph. So all they’re ever doing is following a probability model of what’s likely to come next, and they are never necessarily grounded in accuracy or reality. I often hear people say, “Well, it didn’t tell me it didn’t know about this.” And it’s like, it doesn’t know about anything. It’s a probabilistic model, right? It is just giving you some probabilities. The idea that it is giving you a probability output, as opposed to a search engine, is just very different from what we’ve used in the past, and a lot of people who are using it don’t know that that’s how it operates. And when you know that’s how it operates, you can actually use it to your advantage, because you can use it for things that are creative, or where you know there isn’t a single right answer on the internet that you’re trying to find; you’re brainstorming or looking for different perspectives. Something that’s just generating content is great for that, because there isn’t a ground truth you need to find in the end. So you can use these tools more effectively and not risk relying on incorrect information.
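To make that “probability engine, not search engine” distinction concrete, here is a minimal sketch (not from the episode) that asks an open model to rank its candidate next tokens. The choice of GPT-2 via Hugging Face’s transformers library, and the top-five cutoff, are my own illustrative assumptions:

```python
# Minimal sketch: an LLM as a ranker of likely continuations, nothing more.
# Model choice (GPT-2) and top-k value are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Hey, how was your"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The last position holds a score for every possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

Whatever this prints, the model is only ranking plausible continuations; there is no lookup step that could fail and honestly report “I don’t know.”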

Yeah, I like the fact that you said it doesn’t know something; it’s basically guessing. And it doesn’t know what it doesn’t know. People think it’s withholding information or not admitting that it doesn’t know something, but it’s all just probabilities. Presumably the model has higher or lower probabilities, and where the probability is lower it “knows” less, but it’s not thinking at all. So we really need to get away from that way of viewing it. It sounds so much like a human being that we tend to anthropomorphize these models. Absolutely.

Exactly. Well, and it’s hard, because the engineering the builders put on top of the models is anthropomorphizing them. The way these models present is more human than it needs to be, I think. But it makes them easier for people to use, and it makes people like them more. Even myself: I work daily with these tools, I have a podcast on AI, and I still sometimes forget that it’s a machine, it’s an algorithm, it’s not conscious. Not to my knowledge, anyway.

Yeah, I read a great Substack recently about this. One of the top researchers in AI was working on a paper about how the cognitive defenses that would normally tell us when something is false or incorrect, the “don’t trust that, it seems sketchy” instinct, can fail us when we’re working with AI. And the funny thing he wrote about was how, while actively writing this paper, he got totally sucked into something made up by AI when he was asking it how to repair a hat. It told him that for thousands of years these types of hats had been repaired with wax, and he ordered wax from Amazon. Then finally he said, “Actually, can you just send me to a website or something?” and it said, “Well, okay, there’s no website, but in theory this should work.” And he was like, “I just fell for this total baloney while writing an article about not falling for AI hallucinations.” So yeah, it’s something everyone is susceptible to, just because of the way our brains have been trained. I think it’s something everyone should be aware of.

A really good point. And if you had to define the minimum viable AI literacy for a non-technical employee in 2026? We’re immersed in emerging tech, we’re surrounded by it, but most people who need to use these technologies are non-technical. So what do they need to understand, beyond what you just mentioned, to use AI safely, think clearly about the outputs, and get value on a day-to-day basis?

Yeah. So when I teach non-technical people about these tools, I think first they should understand the pattern matching, and that’s actually simpler than you might think, because as humans that’s what we do all the time. For instance, when I come into work on a Monday and see my colleague, I might say, “Hey, how was your — ” and what word do you think I’m probably going to say next? Weekend. That’s the most likely next word to come out of my mouth, because it’s a Monday and we’ve seen this pattern a hundred times. It’s possible I was going to ask how their bar mitzvah was, but it’s not as likely, right? And that’s how these tools work, so understanding how they work is pretty simple. People should have that basic understanding that they’re just prediction machines, pattern matchers. The other minimum thing to understand is that they were trained on a lot of data created by humans, so they have a lot of the biases humans have, and that’s not always obvious or clear. We can’t assume that we’re getting a neutral response out of them. There’s definitely bias built into the models, and even when we try really hard to fix that, it’s our own bias that we’re using to try to fix it. So they’re not a neutral, vanilla slate, which I think is important to recognize.
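Her Monday-morning example is essentially a frequency table. As a toy illustration (the counts below are invented, not from any real corpus), you could reproduce the intuition in a few lines:

```python
# Toy version of the "Hey, how was your ___?" example: prediction from
# counts alone, with no understanding. All counts here are made up.
from collections import Counter

continuations = Counter({
    "weekend": 100,     # the pattern we've all seen a hundred times on Mondays
    "day": 40,
    "meeting": 12,
    "bar mitzvah": 1,   # possible, just far less likely
})

total = sum(continuations.values())
for word, count in continuations.most_common():
    print(f"how was your {word}?  p = {count / total:.3f}")
```

A real LLM does the same ranking over a vocabulary of tens of thousands of tokens, conditioned on far longer contexts, but the principle is the same: likelihood, not knowledge.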

That’s interesting. I had another episode with the co-founder of Empress.ai, and she was mentioning how LLMs are biased toward men, just because most of the content they were trained on was about men.

Exactly. That’s right. Yeah. Same thing with the English language: most of what’s available on the web is in English, so for that reason the models will be biased toward English as well.

Right. And when a model is biased toward a language, that’s not just a problem for people who don’t speak the language. For a lot of people in the global south who don’t speak English, the languages they use also represent very different ways of thinking and different social norms that are not built into our models, because the models were trained on English-language documents. So when they go to use the model, it might be very different. If I ask the model, “Is it a red flag that the 30-year-old guy I’m dating is living with his parents?”, a model trained on Western cultural ideas would say, oh yeah, super red flag. But in totally different parts of the world that would be considered completely normal, and it might be odd if they weren’t helping their parents. So it’s not literally just the words that people can’t access. The way of thinking, the way of viewing life, is different, because the model was English-trained. Even if people can speak English, it doesn’t translate into their way of thinking and operating.

Switching gears a bit. In the report you frame the shift we’re living through right now as moving from creation to evaluation. Can you unpack that in practical terms? What does it look like in someone’s workflow when their job shifts from producing content to suddenly reviewing, refining, and approving what AI just generated?

That will totally change the way people work.

Yeah, absolutely. I think we’ve all faced the blank-page problem: maybe we have a report to write, or my kids have something for school, and you’re just sitting in front of a blank page needing to create from scratch. That’s not something we have to face anymore, for the most part. If I need to write a report, I would probably start with an LLM and give it some context: hey, here are a couple of meetings I had, here are a few documents, I have a few ideas, I think I want to go in this direction, write me something. And it’ll write the report. Now, there’s absolutely no way I’m going to send off that report without taking a good look at it. But my job becomes more about the overall architecture. What are the ideas I’m trying to get out? What’s the intent? Did it get that right? If not, how do I change my prompt, or the additional context I’m giving the model, so that it can get it right? Is it accurate? Does it reflect what I want it to reflect? So I’m looking more at evaluating the output of something else’s creation. We’re calling this an intent-driven way of working: I have an idea of what I’m trying to get out, I don’t necessarily have to make each of the building blocks along the way, but I need to know what I’m trying to do so I can articulate that to the model, and I need to know how to evaluate what comes back out of the model to see if it is indeed what I was trying to do.

I think when LLMs first became widespread, probably two and a half or three years ago, people felt that using them was cheating, and I think some people still feel like that. What would you tell those people?

You know, I know I’m in such a bubble, because if I didn’t use AI at all in a workday, I think I’d feel like, what did I even do today? So it’s a little different for me. But I think the intent really matters in whether you’re cheating or not. As an adjunct professor, I give a lecture to my students every year on Gen AI, which I’ve been doing for the last three years, telling them how to use it. The first year, they were looking at me like I was from outer space. I said, “Why are you looking at me like that?” And they said, “Well, our other teachers told us not to touch it with a ten-foot pole.” And I thought, well, that’s not going to help you. That’s not reality. You’re going to go type something into Google and you’ll get an LLM response instead of a link. That is what’s going to happen for you in the future, so you need to learn how to use these tools. That’s another reason I think the literacy is so important. So if you’re using one to write a report, and you’re not giving it context, and you’re not evaluating the output, and you send it to your boss, maybe that’s cheating, but you’ll probably also look kind of like an idiot, right? You’re still responsible for your work, for what you produce, and for what you put your name on. There are ways to use these tools to enhance learning and ways to use them to build critical-thinking skills. None of that’s cheating. When you use them as a shortcut because you don’t want to engage meaningfully with your work, that’s not great. But I think a lot of people want to learn, want to do things properly, want to succeed at their job, and care about their work, and so they’re using them as just another tool at their disposal rather than a shortcut.

And I think it’s creating a tension between the story that jobs are changing and the reality that some roles are completely disappearing. How do you hold both truths at once without minimizing the disruption people are feeling right now?

I think this is such a critical question right now, and as a builder of technology but also a consumer of technology, I ask myself this a lot. The introduction of Gen AI into society is not just a technical shift, right? We call it a sociotechnical shift, and what that means is that humans are involved. What humans choose to do with it, what we accept and what we don’t accept, will shape what happens with these tools. So I do see how work is drastically changing, and some work may not have to exist for very much longer. The story we’re being told by leaders (and I don’t mean Microsoft leaders, I mean leaders of technology in general) is, oh well, every major technology created more new jobs than it took away. And people forget the “but.” It’s true that every major technology shift created more jobs than it removed, but there was a long period of instability before that happened. There was often societal unrest, or things like child labor, because the factories came before we knew to form unions or put protections in place for people. So whenever we have these shifts, I think we should be asking ourselves: what are we building? How are we comfortable using it? What guidelines should we put in place? What am I as a consumer willing to give it, willing to pay for, not willing to pay for? Those will all shape what the big companies build and provide and sell. So both things are true, and I don’t really feel like it’s difficult to hold both. It’s just a difficult reality, and it’s going to be very interesting to see how work changes in the next decade or so.

Yeah. And even in the next few months, you’d say.

Oh yeah. I’m hiring some PhD interns this summer to work with us, and someone asked, “Oh, what are they going to work on?” And I was like, I have no idea; who knows what’s going to matter by June? We’ll cross that bridge when we get there. Whereas I used to be able to have their projects set up six months in advance. Not anymore.

You described it as a sociotechnical shift. So who’s responsible for shaping it? Let’s stay at the organization level (I don’t want to talk about governments and agencies): who in an organization is responsible for shaping it? Is it HR? The business leader? The executive?

That’s a great question. I think there are two ways to look at it: bottom-up and top-down.

Bottom-up, everyone has a part to play. One of the things I do is run a survey where people can anonymously tell me about their experiences using these tools at work. Sometimes they’ll say something like what you mentioned about cheating: “I think I’m being judged because I use this,” or “I don’t want to tell people I used it because they’re judging me.” So right there, your own co-workers are influencing how these tools operate inside your workforce: whether they’re used in secrecy, whether the results are shared, whether the productivity gains you experience from learning a new way to use them get shared or hidden. How we talk about the tools, what we expect from other people, and what we share back out will influence how they’re adopted and used. And it’s a widely known research finding that how a tool is rolled out shapes how people use it, and how people use it in turn informs how it’s rolled out. It’s this interesting cycle.

Then, from a top-down perspective, I think HR definitely has a role to play in protecting human jobs, or protecting our digital minds. If I’ve given a lot of what I know to agents in this workplace and I want to leave, does the digital version of me, the one without any proprietary information, get to come with me, or do you get to keep digital Jenna? In which case, how much longer would you have needed to employ me anyway? So there are going to be HR policies where, I think, companies that want to compete will have to treat digital labor and digital knowledge somewhat as a commodity. And of course there’s also a role to play in deciding what is allowed to go to models and what isn’t, what stays on your machine. Maybe I feel more comfortable with the AI Jenna living locally, where it doesn’t go to the cloud. There are going to be rules that need to be worked out. I think there will be a lot of different roles to play.

Very interesting, this concept of having a digital twin, a digital AI of ourselves. Because that’s what it is at the end of the day. If you look a couple of months or years into the future, that’s what will happen as we use these tools: we’ll be sharing with them our knowledge and what makes us the information workers we are right now.

Yeah. There’s a saying in software that software companies don’t ship code, they ship people, or they ship org charts. Basically, it’s saying that it’s all the knowledge in your developers’ heads that really makes the code, as opposed to the raw code itself. You could share the code, and that wouldn’t necessarily mean you could recreate the company. And now we’re getting to a point where we can put that knowledge into the computer as well. It’ll have the code plus all of the extra information that’s just floating around in my head, and that’s a really different world. So that’ll be interesting.

It’s a big paradigm change in how we’ve been building things.

Absolutely. Yeah, definitely.

The report also argues that the future of work isn’t predetermined, right? It’s shaped by decisions leaders make now. So what are the two or three decisions you think matter most? I’m not asking you to predict the far future: just right now, given the current trends and the state of AI, which decisions or choices are most important?

I think some skills are very important. For individuals and for organizations, making sure that employees, and ourselves, are able to adapt and learn quickly is going to be really critical. Things are going to change fast, and our ability to learn new technologies and adapt to changes in what we do (from creation to evaluation, or from mostly typing to maybe voice input, however it shifts) will matter a lot. Being flexible and giving yourself time to experiment and learn is going to be important, and for organizations, providing that time.

There’s an issue we’ve seen that we’ve dubbed the productivity pressure paradox. That’s where employees have been told, “Hey, you have these tools now. You can be ten times more productive. So go be more productive.” And they think, “I don’t know what to do with this tool. I don’t know how to be more productive with it. I’ll just double down on what I already did and work twice as hard, because apparently I have to be more productive.” So instead of the tool bringing about the productivity improvement, it’s really just fear about the tool that’s driving it. The way to get the real productivity gains is to give people time to learn how to use these tools and to experiment, so the tools can fundamentally shift how they work. Choices like that, whether it’s creating a safe space, giving people time, or learning Fridays, are going to be important for organizations. And for individuals, giving yourself time to learn and keep on top of this stuff is going to be important as well.

So it all comes back to experimenting, learning, and making these tools your own. I think we’re still at that stage, even if it feels like these tools have been here forever, because for some of us they’re so embedded in our daily workflow (you mentioned it at the beginning) that I don’t even know how I’d do my job without them. It’s still very recent, but it feels like they’ve been here forever.

Yeah, the acceleration is just wild. I know. I feel like every day at work is busier, and I think it’s just because my brain has so many more things to keep track of now: I can spin off an agent to do one task and have another do a different task, and we all have to become sort of multifunctional. It’s definitely different.

If a leader listening takes one idea from the report and acts on it this week, what should it be, in your opinion? Something concrete they can do immediately that will change the outcomes of how they’re driving AI in their organization.

I think, for something concrete, changing their language to frame this as a chance to learn and experiment would be very important. Communicating with their people: we know these tools are exciting and new, and you might have no idea what to do with them. This was basically the first technology where we put it in people’s hands, told them to be productive, and didn’t give them a user manual. It would be like handing people cars without ever showing them what it looks like to get in and drive. We released these tools to individual humans working in companies and told them, “Apparently now you can be a lot better at your job. Good luck with that.” They didn’t know how. We were being asked all the time, “Well, what should they do? What are the specific use cases? How do they do it?” And we were all just figuring it out. That’s really different from every other technology. So owning that message matters: hey, we’re all learning together, the pace of innovation is crazy fast right now, take some time, experiment. It’s okay if you try to use an agent to do something and it doesn’t work out, because these agents are the worst they’re ever going to be. They get better every single day; the models are improving all the time. And if people don’t feel comfortable trying things, seeing what works and what doesn’t, and trying again in a week, they’re not going to reap the benefits of the productivity improvements. So I think leaders really need to communicate to their teams: we know this is a big shift, we don’t have all the answers, but we want you to feel empowered to try things and take a little bit of your time. It might not work out, and that’s okay. I think that would really help people take advantage of these tools.

Agreed. That narrative will help, because I feel people try stuff, it doesn’t work on the first try, and then they just go back to the old way of doing things because they’re scared of losing more time.

Right: “I don’t want to waste any more time.” Exactly. And they’ll never be able to take advantage of these tools if that’s what they do, especially if they don’t return to them. Some people try a tool and think, “Oh, this was useless,” and they don’t realize that in a month it’s going to be vastly different and they should try again.

That’s another very good point. I’m in the field, right? I’m working with customers, and often I’m told, “Yeah, I know Copilot can’t do XYZ, because I tried it six months ago and it didn’t work.” And I’m like, “Yeah, but now it does. And even if it can’t do it today, it’ll do it tomorrow.” Exactly. Yeah.

We’re delivering faster than we ever did, and the technology as a whole has just evolved, but it’s a different paradigm. I think it’s hard for most people not in tech to understand how fast it’s going and how everything, from designing to delivering these products, has changed.

Yeah, I totally agree.

Jenna, we’re almost at the end of our time together, but I have my two signature questions. First one: on a personal level, what’s one practical way you use AI every day that makes you more productive? Something that might be repeatable and can actually stick.

So, one thing I’ve really enjoyed, and I don’t know if this is too broad, is that when I’m using a tool at work and it doesn’t do exactly what I need it to do, I just build a new tool now. And that’s been pretty fun. Something I do a lot is read surveys and verbatims from people about their experiences, and that’s not something I want to hand to a model and ask, “What are the themes?” I want to read what the human said; I have a contract in my mind with that human that a human is going to read it. As I read these things, I can label them: oh, they’re talking about learning, or productivity, or challenges. There’s a little tool I use to label them, but it doesn’t allow multiple labels, and it doesn’t let me flag quotes I want to remember for later. So I spun up Claude and said, “Hey, this is the type of work I do. These are the features the tool doesn’t have right now that I need. Build me a new one.” And now I have a tool that does exactly what I want. I don’t get hung up anymore on not being able to work the way I want to work, because I’ll just write something to do it for me. It’s a very empowering feeling. I’d been using the same tool for five years, and it’s only this year that I thought, I can just change it. I can make whatever I want. So that’s been exciting.

A simpler, easier way to get started: I regularly go to an LLM when I’m beginning something, to brainstorm. I might say, this is my idea, these are things I’m considering, and then ask: am I missing any major areas? Is there work in this space I should know about? Are there questions I’m not asking myself that I should consider? Am I looking at it from the right angles? I ask the LLM to prompt questions back to me so I can think more deeply and engage more deeply with my work. And I find that really helpful, because again, as probability engines, they’ll come up with lots of questions (they’ll just make stuff up), and that helps me think more broadly. There’s no right answer; I’m just looking to improve my own thinking. So that’s another way I really like to use them, at the start of any new project or endeavor.
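For a sense of what a tool like that might look like: Jenna’s actual tool isn’t public, so the file format, label set, and “+flag” command below are all hypothetical. Here is a minimal sketch of a verbatim labeler that supports multiple labels per entry and flagged quotes:

```python
# Hypothetical sketch of a verbatim-labeling tool like the one described
# in the episode. Label names, file format, and the "+flag" command are
# invented for illustration; they are not from Jenna's actual tool.
import json

LABELS = {"learning", "productivity", "challenges"}  # hypothetical label set

def review(verbatims_path: str, out_path: str) -> None:
    """Show each survey verbatim, accept multiple labels, and allow flagging quotes."""
    with open(verbatims_path) as f:
        verbatims = [line.strip() for line in f if line.strip()]

    results = []
    for text in verbatims:
        print(f"\n> {text}")
        raw = input(f"labels (comma-separated from {sorted(LABELS)}, add +flag to save quote): ")
        parts = {p.strip() for p in raw.split(",") if p.strip()}
        results.append({
            "text": text,
            "labels": sorted(parts & LABELS),  # multiple labels allowed
            "flagged": "+flag" in parts,       # remember this quote for later
        })

    with open(out_path, "w") as f:
        json.dump(results, f, indent=2)

if __name__ == "__main__":
    review("verbatims.txt", "labeled.json")
```

The point isn’t this particular script; it’s that when off-the-shelf software lacks a feature, describing the gap to a model and iterating on a small custom tool is now a realistic afternoon project.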

Those are really great tips to put in place quickly if you want to start embedding this into your workflow.

Looking ahead, how do you see AI reshaping how we live and work over the next ten years? Which changes are most significant, and what do you think most people are still underestimating? You’re seeing a lot from working on these reports and from your own research. And as you mentioned, you started the Future of Work reports at the beginning of the pandemic, when they weren’t about AI at all. How do you see the next ten years?

You know, I knew you were going to ask me this, and I still do not have a great answer, because I just do not think anyone can predict what it’s going to look like in ten years, and I wouldn’t listen to anyone who tells you they know. I will give you my best take on things I see, some of which contradict each other, but it’s all changing so rapidly that I don’t think we can give a good answer.

Some things I do feel more confident in. Humans are very resilient. It’s easy to get stuck in fears that everything’s going to change, we’re all going to be out of a job, it’ll be a bunch of robots. There may be discomfort and job loss, but humans tend to figure it out: we create new jobs, we go to space; whole new areas exist now that never existed before electricity. I don’t like that as a simple answer, though. When technology leaders say, “Don’t worry about job replacement, because we’ll have new jobs,” it’s like, okay, someone has to come up with those, and that’s effort. It’s not simple. But it gives me a lot of hope that humans do tend to solve problems and stay pretty resilient. So I think the workforce will adapt.

I also think, as we like to say in technology, people often believe new technologies will remove toil, when usually they just shift it. There’s a great link in last year’s New Future of Work Report, I believe, to an ad from a women’s magazine a hundred years ago that said: now that you can can your own tomatoes, toil in the kitchen is gone. And yet people are still trying to get a kitchen robot, right? There is still toil; it’s just vastly different. It shifts to something new. We like to think AI will get rid of all the boring parts of work we don’t like, but then there’ll be new boring parts. It will be interesting to see how that shifts and what kind of maintenance work we’ll have to do; I think there will always be that kind of work to a degree.

And then I also like to hope that in ten years science will be able to move a lot faster. I really hope these tools will unlock the discovery of new things. They let papers be written and shared faster, and people get access to data more quickly. There are so many brilliant people who have ideas but can’t write the code to go test them, and now they’ll be able to. So I’m hoping innovation moves more quickly, especially in areas like climate science, cancer biology, and medicine. I think that would be really great too. But am I confident enough to put a prediction on it? I am not.

I don’t think anybody is. But I really like your take on it. We’ve been talking almost only about LLMs, but the truth is AI as a field is moving fast and might finally find a cure for cancer. We don’t know, but that would be amazing. Just look at protein folding: what we’ve been able to do with AI has totally changed the way we develop vaccines.

Yeah, and the New Future of Work Report moving from the pandemic to AI isn’t really three different shifts; it was really a continuation. When we all went remote, companies had to massively scale their cloud infrastructure so people could work from home, right? It accelerated Teams and online document sharing and all of that. I remember Word was actually directly involved with the massive document on the vaccine. There were doctors all over the world who had to access the same document, but we didn’t have that scalable co-creation yet. And the Word team was literally told: you need to make all these people able to access the same document so that we can get a vaccine, basically. So it really accelerated the push to the cloud, which allowed AI to flourish. I think it’s all, to a degree, a continuation of what happened during that time. And we have these new vaccines now, the RNA vaccines that we didn’t have before, because of that. I’m hoping more really positive changes come from this rapid development we’re seeing.

I hope so as well. I’ll do a follow-up in ten years to tell you, if you want. Yeah, we’ll see.

Jenna, thank you so much for your time today. This was a very valuable conversation. Thanks so much for making the time.

Yeah, thank you again for having me. Have a great day.

What I love about this conversation with Jenna is how grounded it is. No predictions dressed up as certainty. Just a researcher who studies how humans actually respond to change telling us what she’s seeing right now and what we can do about it. Three things I want you to walk away with from this episode.

First, understand what these tools actually are. They’re pattern matchers generating probabilities, not search engines delivering facts. Once you internalize that, you’ll use them better and trust them more appropriately.

Second, watch out for the productivity pressure paradox. If your team has been told to be more productive with AI but hasn’t been given time to learn and experiment, the pressure is doing the opposite of what you want. Give people permission to try, fail, and try again.

Third and last, come back to tools that didn’t work before. These models improve constantly. What failed three months ago might work beautifully today. Don’t let one bad experience lock you out of real gains.

If this episode gave you something useful, subscribe to the podcast wherever you listen and sign up for the newsletter to stay sharp between episodes. We have some great conversations coming up and the pace of change means there’s always something new worth understanding. Thanks for listening. I’ll see you in the next one.

See you.
