How AI Is Failing Women (And How to Fix It Now) | Miri Rodriguez CEO Empressa.ai

Unlock the future of Inclusive AI: What does it truly take to build systems that are responsible and equitable from the ground up?

In this essential episode on Gender Bias in AI and Responsible AI, we sit down with Miri Rodriguez, former star storyteller at Microsoft and the groundbreaking CEO/Founder of Empressa.ai.

After 13 years crafting narratives within Big Tech, a personal health journey — including battling cancer — pushed Miri to pioneer a solution for one of technology’s biggest challenges: the AI gender gap.

Miri shares the emotional and strategic story behind Empressa — a pioneering AI platform built by women, for women. She exposes how reliance on traditional data sets has created deep-seated biases, consistently excluding female perspectives.

You’ll learn:

  • Why the legacy of male-designed technology (like the car airbag) proves that gender-neutral is often just male-default.
  • How Empressa.ai is rewriting this reality by implementing a royalty model that fairly compensates women for contributing their insights, stories, and lived experience, transforming silence into scalable knowledge.
  • Actionable design principles for building inclusive AI agents and foundational models, ensuring your next product is equitable.
  • Why the true barrier for women in tech and AI adoption is often confidence and the need for safe learning spaces—not capability.

If you are building with Generative AI, leading product teams, or focused on accelerating gender parity in the workforce, this episode is mandatory listening. It will fundamentally challenge how you think about data, inclusion, and the systems we are all shaping.

Samuel
Hello, Miri. Thank you so much for joining us on the show today.

Miri Rodriguez
Hi Samuel, it’s a pleasure to be here. Thank you so much for inviting me. I’m excited to have this conversation.

Samuel
Super exciting to have you today. Miri, you spent the last 13 years—actually, you’ve left Microsoft now—but you spent 13 years at Microsoft as a storyteller. And not so long ago, you decided to leave Microsoft and build Empressa.ai. Can you share what part of your journey made you realize this was the moment to create your own platform and leave Microsoft?

Miri Rodriguez
That’s right. You know, that’s a great question—the first question right off the bat. Thank you for such an insightful question. I knew at one point I would leave Microsoft; I just didn’t know when. In fact, I had been working on Empressa for about a year and a half before I left. And what was very interesting in my journey at Microsoft—and I want to call this out for people who may not know my story—is that I spent 13 years there, six of those as a storyteller. I did storytelling in engineering, in sales, in HR, and in operations.

So I covered a wide range of storytelling for the enablement of the digital age and the era of AI, which is exciting. I also went through a medical journey—a personal medical journey that is very specific to women. I went through breast cancer, and I had it in both breasts. I was also diagnosed with the BRCA gene mutation. BRCA stands for “breast cancer.” It’s a genetic mutation, and my journey was about two and a half years, involving five surgeries in total, including breast removal, breast reconstruction, and a total hysterectomy.

Each time I had a surgery during this period, I was at Microsoft. I was in and out, taking leave. And every time I was recovering, I had almost a “download” of insights—things you think about when you enter this space where you’re wondering what’s next, what happens if life ends, what your legacy is. I looked back at the 20 years in tech that I had compiled, the work that women had done—including myself—and wondered, where does that go if I’m no longer here?

And then I thought: we have to scale this. We have to bring a legacy that is tangible, no longer one-on-one. I’ve had many conversations with women; I’ve helped them in many ways with personal branding and workshops. How can we scale this? And it dawned on me: this was going to be an AI. And then I was scared. I thought, “No, Microsoft can take my IP. I don’t know if I can leave Microsoft and do this outside of Microsoft. How do I disclose this?”

Having all these conversations, it actually turned out wonderfully. Once I disclosed it, Microsoft invited me to join the Microsoft for Startups program. So we became a partner. They gave us a hundred thousand Azure credits to build our AI on Azure—so we did. We’re on the Azure stack. And it was just one of those moments when you know it’s time to do something beyond yourself.

So how did I know? I didn’t. I just followed intuition and the alignment of things that happened. And you realize, “Okay, this is beyond me. This is beyond my wildest dreams. This is something for the world and for the women of the world.” So we took the leap. We launched just a couple of weeks back, and it’s been an incredible journey. I have not looked back once—not once.

Samuel
Do you think you would’ve had the same inspiration if it hadn’t been for cancer? I mean, I assume…

Miri Rodriguez
I don’t know. You know, it’s a great question. I’ve talked to many women who reached out once I shared my story about breast cancer. They’ve said it is life-changing. It pauses you and makes you think beyond yourself. What is that legacy? What are we working for? And I love Microsoft. I love that it’s mission-driven, and I’ve always aligned with Microsoft’s mission to empower every person and organization on the planet to achieve more.

I think Empressa—well, I know Empressa—is a baby-girl extension of Microsoft. Everything I learned, I put into motion specifically for women, because that is important and there is a lot of work to be done. So I can’t say I would be here without having gone through something like that. It made me pause, reflect, and get courageous about what I would do with a second chance at life. Well, here I am—I’m doing this.

Samuel
This is such a great story. You’ve said it’s built for women, and you already mentioned that AI is trained on historical data and it’s not built for women. I can understand how biased an LLM can be, given that it’s trained on a crawl of the web, which is biased by nature. So from your experience, what are the most harmful gaps that you’ve found in AI systems today, specifically for women?

Miri Rodriguez
Yeah. We’re actually about to publish a report this week at Empressa from the data we’ve been collecting for a year and a half, and now through the launch. We have great insights from conversations, quantitative data, surveys, and gaps we’ve seen with women coming in.

One thing that was surprising—but shouldn’t have been—is that women are far more advanced in using AI tools than we tend to believe. And that’s not because the surveys are wrong; it’s because women historically downplay their abilities. So if you ask them how comfortable they are with AI tools, they may say “moderately comfortable” when they’re actually super-users. That skews data. It affects how women approach AI personally, which affects the results.

So the first harmful thing is ourselves—our own view of how we approach AI. We can be our own barrier. And the “why” behind that is compelling. We fear AI will be a bad actor toward us. We fear it will expand on malicious behaviors that historically have harmed us. And we’ve historically had to contend with technology that didn’t consider us biologically, mentally, or emotionally. So we approach it with caution, and that slows our adoption. We don’t trust the systems. We know they are biased.

Someone recently asked on LinkedIn: “How could Canva, founded by a woman, still return male-driven results when I search for certain elements?” I said: because it doesn’t matter who is at the top. It’s the data informing the machines. When 80% of internet authorship is by men, when 90% of GitHub engineers are men, and when only 12% of the AI workforce is women, the outputs won’t be safe for us—no matter how much we want them to be.

We know this intuitively. So the harm is in the established systems we must carefully undo. And it’s really up to us—we can’t just blame systems. At Empressa, I love that women are showing up with courage, asking, “How do I get my fingerprints on this line of code? How do I expand data that isn’t skewed anymore?” That’s the main factor.

The second is the next generation. Gen Alpha will be AI native. How will girls age 5, 6, and 7 develop a relationship with AI that empowers rather than harms them? It’s education—helping them not fear it, helping them code it, and helping them be part of it.

Samuel
You mentioned that women perceive LLMs as bad actors more than men. Why is that? Why don’t women trust AI as much as men?

Miri Rodriguez
I always give the example of the car as a technology. Cars were built over 100 years ago—built by men, for men. Today, 51% of drivers in the U.S. are women. When I get into my SUV, I’m small-framed, so I have to push the seat forward to reach the wheel. The airbag system—the machine designed to save my life—is still calibrated to the standard male body: 175 pounds, about 5’11” to 6 feet tall.

This means that if I’m in a crash, the airbag may not deploy in time because I’m too close to the wheel. That’s 100 years after the machine was built, while more than half of drivers are women. Yet the foundational design still isn’t built for me.

It’s the same across medical technology. It wasn’t until 1983 that medical research even started including female biology. Before that, women were given medicine tested only on male physiology.

So women know technology historically does not consider them. That’s why we approach AI with caution.

Samuel
I told you I have a daughter and a wife in tech. Can you give me a concrete example of how my daughter, using Copilot or ChatGPT, might encounter harmful behavior from an LLM?

Miri Rodriguez
Sure. Let’s say she’s using Copilot for a school research paper on animals in the zoo. She searches, “Help me find information about animals in the zoo.”

The machine sources reliable content—but 80–90% of that content is male-authored. So if she’s exploring animal depression in captivity, for example, a male author statistically may not have written about it the way a woman might have. She misses half of the insight that didn’t get published or wasn’t authored by women.

Women often bring more communal perspectives, more empathy, more context. If that isn’t in the dataset, then the output lacks that nuance. It’s not about good or bad—it’s simply incomplete. And incomplete information can lead to incomplete understanding.

Samuel
We tend to forget men and women don’t necessarily process information the same way.

Miri Rodriguez
Exactly—and that’s good. When you blend perspectives, you get a fuller picture. Women think communally; we consider safety, children, elderly people, disabled people. If a woman had invented the car, she might have thought differently about who would be inside it. Not better—just different. And when you blend those perspectives, everyone benefits.

Samuel
To help fill that gap, you built Empressa.ai. You’re using women’s experiences rather than traditional datasets. What made you realize this was needed?

Miri Rodriguez
Our idea is multifaceted. First, there’s a lack of accessibility to women’s knowledge. And historically, women have been conditioned to keep their “secrets” for survival. Even today, women either don’t share—or they share everything for free. Both extremes are harmful.

Empressa is a solution for experts. We invite women with 10+ years of experience in any industry to join and share insights in a digital library. These insights inform our AI. On the other side, we have subscribers—women early in career, building businesses, pivoting careers—who would never otherwise have access to these insights.

They can ask targeted questions, like “How do I scale my business with storytelling?” Then three other storytellers and I might respond with our lived experience.

It brings accessibility worldwide and helps the experts monetize their knowledge. Anytime the AI uses a woman’s insight, she gets paid a royalty. She can track every usage. It creates an economy where women’s expertise is valued.
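The pay-per-use royalty idea Miri describes — every usage is recorded, and the contributor can audit what she is owed — could be tracked with something as simple as an append-only usage ledger. The class, field names, and flat per-citation rate below are purely illustrative assumptions, not Empressa’s actual schema:

```python
from datetime import datetime, timezone

class RoyaltyLedger:
    """Hypothetical sketch of pay-per-use royalty tracking: every time the
    AI draws on a contributor's insight, log the event and accrue a fee."""

    def __init__(self, rate_per_use=0.10):
        self.rate = rate_per_use   # illustrative flat rate per citation
        self.events = []           # append-only usage log

    def record_use(self, contributor_id, insight_id):
        # One entry per usage, timestamped so contributors can audit it.
        self.events.append({
            "contributor": contributor_id,
            "insight": insight_id,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def earnings(self, contributor_id):
        # A contributor can see every usage and the total owed so far.
        uses = [e for e in self.events if e["contributor"] == contributor_id]
        return len(uses), len(uses) * self.rate

ledger = RoyaltyLedger()
ledger.record_use("miri", "storytelling-framework-1")
ledger.record_use("miri", "storytelling-framework-1")
count, owed = ledger.earnings("miri")
```

The append-only log is the important design choice: it makes every payout traceable back to a specific usage, which is what lets contributors “track every usage” rather than trust an opaque total.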

Samuel
You mentioned that historically women share less. I agree—I see it on LinkedIn. Could that contribute to the bias in knowledge?

Miri Rodriguez
Absolutely. This is multifaceted—women, men, history, systems. Now we all have to undo it. Everyone has a part to play in addressing this. Women need to step in courageously and share more.

Samuel
Not to get technical, but how do you take these stories and put them into an AI model without losing context or creating new bias?

Miri Rodriguez
We’ve created systems around reliability and fairness. For example: if 50 storytellers have insights in storytelling and one question comes in, how do we decide who gets paid?

Our system evaluates relevance and reliability. Women upload quality insights—books, frameworks, posts, anything they own. The system reads them and determines what’s most relevant to answer the question. Then we use a rotation-based justice system so that over time all contributors in that category have equal opportunity for usage and payment.
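The two-stage selection Miri describes — score insights for relevance, then rotate among qualifying contributors so everyone in a category gets usage and payment over time — can be sketched roughly as follows. The relevance function, threshold, and data shapes here are hypothetical stand-ins, not Empressa’s actual system:

```python
from collections import deque

class InsightRouter:
    """Hypothetical sketch: choose which contributor answers a question,
    balancing relevance with a rotation so every expert in a category
    gets an equal opportunity for usage (and payment) over time."""

    def __init__(self, contributors):
        # Rotation queue: the front of the queue has waited the longest.
        self.queue = deque(contributors)

    def pick(self, question, relevance_fn, threshold=0.5):
        # Walk the rotation order; take the first contributor whose
        # material is relevant enough, then send them to the back.
        for _ in range(len(self.queue)):
            contributor = self.queue[0]
            is_relevant = relevance_fn(question, contributor) >= threshold
            self.queue.rotate(-1)  # front of the line moves to the back
            if is_relevant:
                return contributor
        return None  # no contributor cleared the relevance bar

# Toy relevance: keyword overlap between the question and an expert's topics.
def overlap(question, contributor):
    words = set(question.lower().split())
    return len(words & contributor["topics"]) / max(len(contributor["topics"]), 1)

experts = [
    {"name": "A", "topics": {"storytelling", "branding"}},
    {"name": "B", "topics": {"storytelling", "scaling"}},
]
router = InsightRouter(experts)
first = router.pick("how do i scale my business with storytelling", overlap)
second = router.pick("how do i scale my business with storytelling", overlap)
```

With two equally relevant experts, the same question asked twice is answered by each of them once — the rotation is what turns a relevance ranking into the “justice system” of equal opportunity described above. A production version would presumably use semantic rather than keyword relevance.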

We built and tested this with 100 women before launch. We iterated to make sure it’s by women and for women—based on what they need, not what I assume they need.

Samuel
Today everything is AI. Bias is a huge topic. We’re heading into agentic AI. What design principles would you suggest for organizations building agents?

Miri Rodriguez
Our upcoming report outlines this. The first thing: enable women to learn their own way. Women learn differently. On-demand content doesn’t work as well because women still hold 65% more of the household workload. We’re exhausted. So we deprioritize our own upskilling.

We found cohort-based and live learning works far better. We launched AI Foundations for Women—it exploded. Women are eager to learn; they just need an environment tailored to them.

We created Empressa Playground because there was no safe environment for women to practice using AI tools without bias interrupting the experience. It became a whole product—it wasn’t planned.

Enterprises now want to partner with us because women will engage when they know the environment is built for them.

So first: create learning environments tailored to women.
Second: create hackathons or hands-on opportunities where women can build agents and bring solutions to the table.
Third: in leadership and hiring for AI-first roles, include women—because they naturally think about ethics, inclusion, sustainability, and bias.

Samuel
So it’s about giving women more space in the decision-making process when building agents.

Miri Rodriguez
Yes—and recognizing that women will often say, “I don’t know enough.” Women apply only when they meet 100% of requirements; men apply when they meet 60%. Women often undersell themselves.

So leaders must intentionally invite women in and reassure them they belong at the table.

Samuel
I love your royalty system. I think it could apply to everyone. How did you come up with it?

Miri Rodriguez
It came from personal experience. Years into speaking at conferences, I realized I wasn’t being paid—even when male speakers with smaller roles were. Simply because I didn’t ask. Many women don’t.

So royalties do two things:

  1. They break the psychological barrier—showing women their knowledge has monetary value.
  2. They model to the world that women’s insights deserve compensation.

We don’t want women to feel AI takes advantage of them. It should empower and enable them.

Samuel
Agents are becoming more autonomous. How do you design an inclusive agent when foundational models themselves are biased?

Miri Rodriguez
The bias isn’t usually against women—it’s the lack of women in the data. So I tell women: insert your own knowledge. Upload your own frameworks, tone, voice. I’ve created multiple “Miri” agents—marketing Miri, voice Miri, strategy Miri. They know who I am. I intentionally train them so that the system includes me in its universe.

I’m doing it openly on purpose: I want AI models to know there is a “Miri Rodriguez” voice. If billions of women do this, AI becomes naturally more inclusive.

We must treat AI as trainable. If it gives an output I disagree with, I correct it. We guide it toward inclusivity.

Samuel
That’s really insightful. We’re almost at the end. First signature question: one practical tip to be more productive using AI?

Miri Rodriguez
Train it the way you would train the next generation. Don’t think of AI as a tool for you—think of it as a foundation for future generations. Consider the ethical implications and be intentional about how your inputs shape the future system.

Samuel
Last question: how do you see AI evolving in the next 10 years? Will it be less biased? More inclusive?

Miri Rodriguez
I don’t know how fast it will evolve. When quantum and AI fully converge, we’ll see innovation we can’t yet imagine. What I hope is that we use this innovation for gender parity. We are 123 years away from global gender parity in the workforce. I may not see it in my lifetime.

So my hope is that AI accelerates this gap-closing—that our daughters won’t worry about making less for the same job.

Samuel
Thank you so much. It was super insightful. Thank you for the work you’re doing—for my daughter, my wife, and all the brilliant women around me. I wish you all the success with Empressa.ai. I’ll put the website link in the description. Thank you so much, Miri.

Miri Rodriguez
Thank you. Thank you so much, Samuel.
