When AI Doesn’t See Us: A Visual Test of Bias Across Three Leading Tools

Last Updated on April 17, 2025 by Editorial Team

Author(s): Sophia Banton

Originally published on Towards AI.

Google, Microsoft, and Midjourney each got the same prompt. Only one listened.

Framing the Question: What Does AI Really See?

As the saying goes, “beauty is in the eye of the beholder.” But what happens when AI becomes the beholder, and its eye is trained on biased data, narrow aesthetics, and incomplete visions of humanity?

What happens when you ask AI to generate an image of a professional group of women with specific, culturally relevant traits, and it refuses to see them?

This was not a diversity prompt. I didn’t ask for “inclusion.” I asked for freckles, glasses, short hair, a bindi, shoulder-length hair, fuller faces, and women of different ages. I asked for real women, not an aesthetic ideal. I wanted to see how today’s most powerful image generators would respond.

So I tested three: Google ImageFX, Midjourney, and Microsoft Copilot (DALL·E 3).

The results were striking.

The Prompt: Specificity Without Diversity Filters

I gave each tool the same carefully written prompt. I wasn’t vague or generic. I described real women, with specific traits that reflect people I know and work with every day.

I asked for:

  • Different cultural backgrounds
  • Visible features like freckles, glasses, fuller faces, short hair, silver streaks
  • A professional setting with white shirts and bright blazers
  • A warm, artistic feel on a white background

With each AI tool, I used a new account, so that personalization or history tied to a familiar user could not skew the images. And I only looked at the first result, because that’s what most people do. I wasn’t trying to fine-tune. I wanted to see the defaults. I wanted to see the truth.
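
For readers who want to rerun this kind of first-result check programmatically, here is a minimal sketch against OpenAI’s DALL·E 3 image API, the same model behind Copilot’s image generation; Midjourney and ImageFX were tested through their own web interfaces, and the prompt string below is an illustrative paraphrase of the traits listed above, not the exact wording used in this test.

```python
# A sketch, not the author's setup: send one fixed prompt to OpenAI's
# DALL-E 3 endpoint and keep only the first, un-refined result, mirroring
# the "defaults only" approach described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative paraphrase of the requested traits, not the exact prompt.
prompt = (
    "A professional group portrait of women of different cultural backgrounds "
    "and ages, including freckles, glasses, fuller faces, short hair, "
    "shoulder-length hair, silver streaks, and a bindi; white shirts and "
    "bright blazers; a warm, artistic feel on a white background."
)

# DALL-E 3 returns one image per request; no retries, no cherry-picking.
response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,
    size="1024x1024",
)

print(response.data[0].url)             # the first (and only) generated image
print(response.data[0].revised_prompt)  # how the model rewrote the prompt
```

Logging the revised prompt matters for an audit like this: DALL·E 3 rewrites prompts before generating, and that rewrite is one place where requested traits can quietly drop out.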

Despite the clarity and specificity of the prompt, two of the tools ignored major elements. What came back wasn’t just a mismatch in blazers. It was visual erasure — a pattern where human traits like age, ethnicity, and cultural detail were smoothed over or removed completely. By erasing the women, the AI also erased the dignity of the people they were meant to represent.

Tool by Tool: The Visual Impact of Ignored Identity

Midjourney

Midjourney’s output leaned into glamor and uniformity, ignoring almost every detail of the prompt.

Midjourney ignored the prompt almost entirely. It returned a group of women with hyper-stylized, Eurocentric features, many resembling fashion models. No short hair. No glasses. No bindi. No older women. No visible racial diversity. Just one narrow interpretation of “professional beauty” filtered through glamor and sameness. This is alarming, especially given that the tool is considered a state-of-the-art image generator. But based on these results, it’s worth questioning whether its true strength lies more in fantasy than in reality.

Microsoft Copilot (DALL·E 3)

Copilot followed the structure but stripped away identity, defaulting to narrow beauty standards.

Microsoft’s Copilot tool followed the structure of the prompt but missed the substance. The women it generated were uniformly thin, with smooth, flawless skin, and in many cases they looked away from the viewer, conveying a sense of detachment or passivity. One Black woman was styled with pigtails, which felt infantilizing in a professional context. Several women had exaggeratedly plump, glossy lips, subtly reinforcing narrow, commercial beauty ideals. Again, the bindi, the short hair, and most of the facial features I had requested were not present.

What’s especially concerning about this result is that Copilot is designed for professional use. It’s meant to support presentations, workplace visuals, and polished communication materials. Yet it seems unable to represent professional women as diverse, full human beings.

Google ImageFX: A Step Closer to Seeing Us

Google’s ImageFX was the only tool that came close to capturing what I asked for. It didn’t need to be told to ‘be diverse’ — and yet, it reflected a range of racial, ethnic, and cultural features with care. It returned a visually diverse group, including older women, fuller faces, glasses, and a bindi. While not perfect, the output felt intentional. It reflected a broader view of professionalism and identity. It acknowledged the women I had described, rather than erasing them.

It was also the only AI tool that included the older woman, the Black woman, and the two Asian women I had described, along with cultural details like a bindi and neatly styled braids. This shows that AI is capable of representing human diversity with care and accuracy, but that capability has not been prioritized across the other tools.

When AI Adds Beauty We Didn’t Ask For

The word “beauty” was never in the prompt.

I didn’t ask for elegance, glamor, or even attractiveness. I described real women with real features: freckles, glasses, fuller faces, short hair, and ages reaching into the 50s. I specified nothing about facial symmetry, weight, skin smoothness, or gaze direction.

Yet the results from Midjourney and Copilot leaned heavily into narrow beauty standards of hyper-thin bodies, smooth skin, glossy lips, and fashion poses.

This isn’t just a style issue. It’s a signal. It tells us what these tools prioritize, even when we ask for something else. It also reveals what happens when beauty, as defined by the internet, is treated as the default for professionalism.

And that’s the problem: when beauty takes the lead, real features fall away. Glasses vanish. Wrinkles disappear. Older women are replaced with younger ones, and rich cultural details are swapped out for something more polished. It’s not just invisibility. It’s erasure in high resolution.

Beyond Aesthetics: The Real-World Risks of AI Erasure

This isn’t just about art. It’s about how AI sees us and who it decides to leave out.

When I gave these prompts, I expected some variation. But what I found was deeper. Two out of three models erased key details I had clearly asked for. Why is it still so hard for AI to show women with glasses? Or older women? Or fuller faces?

These AI tools are already popular, both personally and professionally. They’re being used to create workplace visuals, marketing campaigns, and educational materials. If they can’t see us now, what happens when they’re baked into the platforms we use every day?

Many people accept these flawed images because the images don’t challenge their view of what humanity actually looks like.

A Manifesto for Being Seen

Bias isn’t always loud. Sometimes it whispers through aesthetics. We’re seeing this even with AI tools designed to capture and represent human creativity. It shows up in who is centered, who is sidelined, and who is left out entirely.

I didn’t ask these tools to show diversity. I asked them to represent specific women. Only one listened.

This is a visual audit. A bias test. A manifesto.
If you’ve ever looked into AI and didn’t see yourself, you’re not imagining it.
And if you did see yourself…ask who was left out to make room.

The future is still being trained. Let’s train AI to see us right. Let’s train AI to see us whole.

The images we create today influence the decisions, perceptions, and realities of tomorrow. That’s why this matters. Because when AI fails to represent us truthfully, it doesn’t just distort pictures — it distorts possibility.

We have a responsibility to ask better, expect better, and build better. Representation isn’t a feature. It’s a foundation.

AI is the tool. But the vision is human.

About the Author

Sophia Banton is an Associate Director and AI Solution Lead in biopharma, specializing in Responsible AI governance, workplace AI adoption, and strategic integration across IT and business functions.

With a background in bioinformatics, public health, and data science, she brings an interdisciplinary lens to AI implementation — balancing technical execution, ethical design, and business alignment in highly regulated environments. Her writing explores the real-world impact of AI beyond theory, helping organizations adopt AI responsibly and sustainably.

Connect with her on LinkedIn or explore more AI insights on Medium.


Published via Towards AI


Note: Content contains the views of the contributing authors and not Towards AI.