Will AI hinder gender equality at work?
By Sandra Lewin | Sponsored by Howden Gender Balance Group
Is the future built on yesterday’s biases?
AI is reshaping the workplace, from how we hire to how we perform tasks, make decisions, and get promoted.
But here’s the catch 👇
“Technology may be neutral, but the data it learns from isn’t.”
When we train tomorrow’s tools on yesterday’s behaviours, we risk building a future that automates inequality instead of fixing it.
At a recent roundtable I hosted, a critical question emerged:
If AI is eliminating the very roles women stereotypically hold (admin, ops, support), what does a gender-equal future of work actually look like? (See the World Economic Forum Future of Jobs Report.)
Jobs are not disappearing; they are evolving. But not for everyone.
Let’s look at the data.
According to the World Economic Forum Future of Jobs Report (2025), 43% of administrative roles are projected to be replaced by AI within the next five years.
Research suggests that 70–90% of these roles are held by women, a result of long-standing social and structural patterns, and they are being automated faster than others.
So while AI brings efficiency and opportunity, it may also erase opportunities for the people who’ve historically held these jobs.
In the recent 100 Women in Insurance roundtable series sponsored by Howden's Gender Balance Network, one woman in the room put it simply:
“If we’re not careful, AI will move women out of the workplace, not up in it.”
Can we trust AI to be fair?
“If we rely on existing data, we’re just repeating the past,” said one of the roundtable attendees.
That insight, shared by a male participant, captured the room’s sentiment. Because here’s the truth: AI doesn’t think. It learns. And what it learns is our past behaviour.
So if an algorithm is trained on historic promotion data from a male-dominated business, it might “learn” that men make better leaders.
If it analyses job performance reviews that underrate assertive women, it may see assertiveness as a flaw in female candidates.
One roundtable participant shared a chilling example: an AI-based hiring tool that consistently recommended male candidates over women for technical roles, because it was replicating the company’s previous hiring patterns.
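The mechanism is simple enough to show in a few lines. Below is a minimal sketch with entirely made-up data: a naive “model” trained only on historic hiring decisions ends up recommending by gender, because gender is the strongest pattern in its training data. The records, names, and threshold are all hypothetical.

```python
# Hypothetical sketch: a model trained on biased decisions repeats them.
from collections import defaultdict

# Synthetic historic records: (gender, qualified, hired)
history = [
    ("M", True, True), ("M", True, True), ("M", False, True),
    ("F", True, False), ("F", True, True), ("F", True, False),
]

# "Training": learn the historic hire rate per gender.
counts = defaultdict(lambda: [0, 0])  # gender -> [hires, total]
for gender, qualified, hired in history:
    counts[gender][0] += int(hired)
    counts[gender][1] += 1

hire_rate = {g: hires / total for g, (hires, total) in counts.items()}

def recommend(gender, threshold=0.5):
    # The "model" never looks at qualifications; it only echoes
    # the historic pattern it was trained on.
    return hire_rate[gender] >= threshold

print(hire_rate)        # men favoured purely because of past decisions
print(recommend("M"))   # True
print(recommend("F"))   # False
```

Real hiring tools are far more complex, but the failure mode is the same: if past decisions correlate with gender, a model optimised to reproduce past decisions will too.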
The Risk: We lose the human filter.
When humans make decisions, we can spot a flawed process and course-correct. But when we hand that decision to a machine, and assume it’s “objective”, we risk embedding bias into systems that scale.
Automation doesn’t eliminate bias. It industrialises it.
So, will AI hurt gender equality?
It depends. Here’s how to avoid the worst outcomes.
The good news?
This isn’t inevitable. But it does require intentional leadership and ethical design.
Practical Tips: making AI work for everyone
🔸 Audit your tools. Regularly test AI systems used in hiring, promotions, and performance to see if outcomes differ by gender or ethnicity.
🔸 Upskill women for the future of work. Invest in training and mentoring programmes that help women move into AI-adjacent fields, not just admin replacements.
🔸 Involve diverse teams in design. From developers to decision-makers, diversity in the AI creation process is critical to avoid blind spots.
🔸 Retain human accountability. If an algorithm makes a biased decision, you’re still responsible. Don’t delegate ethical thinking to code.
🔸 Build AI literacy across the business. Equip HR teams, managers, and leaders to understand how AI works, and where it can go wrong.
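The first tip (auditing your tools) can start simpler than many teams assume. Below is a minimal sketch of one common check, the “four-fifths rule”: flag a tool if any group’s selection rate falls below 80% of the highest group’s rate. The outcome data, group labels, and threshold here are hypothetical; real audits should also look at ethnicity, intersections, and statistical significance.

```python
# Hypothetical bias audit sketch using the four-fifths rule.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) tuples -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Return (ratio of each group's rate vs. the best group, flagged groups)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < threshold]
    return ratios, flagged

# Hypothetical promotion outcomes exported from an AI screening tool:
# 40/100 men selected, 20/100 women selected.
outcomes = ([("men", True)] * 40 + [("men", False)] * 60
            + [("women", True)] * 20 + [("women", False)] * 80)

ratios, flagged = adverse_impact(outcomes)
print(ratios)   # women at half the men's selection rate
print(flagged)  # ['women'] -> investigate before trusting the tool
```

A flag here doesn’t prove the tool is biased, but it is exactly the kind of signal that should trigger the human accountability described above.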
AI has incredible potential to remove bias, if we design it that way. But without checks and accountability, it may simply mirror inequality back to us at scale.
Let’s not assume that progress is inevitable. It’s not. It’s designed.
And if we want a future of work that includes everyone, we need to build it, not just code it 🙌