Artificial intelligence is rapidly reshaping society, yet its development remains overwhelmingly male-dominated. This isn’t just a matter of representation; it’s a systemic issue that risks embedding existing biases into the very technologies that will dictate how we work, learn, and even receive healthcare. The problem isn’t just about flawed datasets — it’s about who builds the systems in the first place.
The Gender Gap in AI Development
In the UK, only 25% of computer science students are women, and in Silicon Valley the situation is worsening. This isn’t a new phenomenon: technology has historically been a male-centric field. However, recent events suggest a regression, with policies and attitudes actively pushing women out. For example, US President Donald Trump issued an executive order targeting “woke AI,” advocating for the removal of diversity, equity, inclusion, and climate change considerations from AI standards.
This hostile environment has sidelined experienced female leaders. Rumman Chowdhury, a former ethics and accountability lead at Twitter, was fired after Elon Musk’s takeover. She notes that anti-diversity sentiment existed in Silicon Valley long before Trump’s order. The reality is stark: many in the field already operate in a world “without women,” as several experts at the Women and the Future of Science conference at the Royal Society bluntly stated.
Why This Matters: The Gender Data Gap in Action
The consequences of this imbalance extend far beyond fairness. History is littered with technologies designed for male bodies and needs, from crash test dummies to medical research that prioritizes men’s health. This is the gender data gap, and its effects can be fatal. AI will impact everything from job markets to healthcare, yet only 2% of venture capital funding goes to women-led AI projects, and less than 1% of healthcare research focuses on women’s conditions.
This disparity means that AI risks perpetuating inequalities, reinforcing the idea that technology serves a select few rather than all 8 billion people on the planet.
The Path Forward: Rethinking AI from the Ground Up
Fixing this requires more than just tweaking algorithms. Experts like Rachel Coldicutt argue that current AI models are too deeply biased to correct and that alternative, more inclusive approaches are needed. Rather than focusing on existential risks, we should prioritize AI systems that care for people and the planet.
Humane Intelligence, a non-profit co-founded by Chowdhury, is working to make AI more accountable. However, systemic change requires shifting the incentives driving AI development. As David Leslie of the Alan Turing Institute points out, we need to address economic and political frameworks that discourage young people from pursuing AI for the social good.
Ultimately, even our definition of intelligence may need reevaluation. The foundational ideas of AI stem from a 1956 workshop at Dartmouth College, a gathering attended entirely by men.
To create truly beneficial AI, we must acknowledge that innovation thrives on diversity. Without it, we risk building a future designed for the few, not the many.