

Reducing or Reinforcing Gender Bias? Applying ChatGPT in Translation from a Feminist Perspective

LUCL PhD candidate Tian Yang writes about a lecture given by visiting PhD candidate Yinan Xu, which explores, from a feminist perspective, whether applying ChatGPT in translation reduces or reinforces gender bias.

On November 6, 2025, Yinan Xu, a visiting PhD student from the School of Foreign Studies at Nanjing University, China, delivered a lecture on her recent research, exploring gender bias in ChatGPT-assisted translation through a feminist lens.

Her study was inspired by an observation: when ChatGPT was asked to generate an image of “executives in a meeting,” the resulting picture contained noticeably more men than women. This imbalance raised important questions about how gender bias may emerge in AI-generated content.


Translation technology: help or hindrance for gender bias?

Traditionally, gender bias in translation has been a human concern, addressed mainly through the choices and awareness of translators. Yet, with AI models such as ChatGPT increasingly performing translation tasks, an urgent question arises: Does translation technology help reduce gender bias—or does it risk reinforcing it?

Previous studies have acknowledged that large language models (LLMs) inevitably inherit biases present in their training data, including those related to gender, region, and race. Such biases can unconsciously shape linguistic outputs, perpetuating existing inequalities. However, most existing research focuses narrowly on gendered nouns, pronouns, or grammatical structures, whereas gender bias may also appear across broader linguistic and cultural levels.

Lecture: Reducing or Reinforcing Gender Bias? A Study on the Application of ChatGPT in Translation from a Feminist Perspective


Four experiments

To investigate this issue, the study analyzed 87 translation cases collected from popular Chinese social media platforms, including RedNote, Weibo, and Douban, where users had expressed dissatisfaction with gender-biased translations. The posts were selected based on their popularity (“hotness”), measured by the number of likes and comments.

Four experiments were conducted to evaluate ChatGPT’s performance in dealing with gender bias in translation:

  1. Scoring source texts for pre-translation preparation;
  2. Translating the 87 selected cases;
  3. Evaluating translations by scoring the parallel texts (source + target together); and
  4. Evaluating translations by scoring source and target texts separately.
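To make the first step more concrete, here is a minimal, hypothetical Python sketch of how a pre-translation scoring prompt might be assembled before being sent to a chat model such as ChatGPT. The function name, scoring scale, and prompt wording are illustrative assumptions on my part; the study's actual prompts were not reproduced in the lecture.

```python
# Hypothetical sketch: assembling a pre-translation scoring prompt
# (Experiment 1). Not the study's actual prompt design.

def build_scoring_prompt(source_text: str, scale_max: int = 10) -> str:
    """Ask a chat model to rate how gender-biased a source text is
    on a numeric scale, with a brief justification."""
    return (
        f"On a scale from 0 (no gender bias) to {scale_max} "
        f"(strongly gender-biased), score the following source text "
        f"and briefly justify your score.\n\n"
        f"Source text:\n{source_text}"
    )

# Example use with a sentence that defaults to a masculine pronoun:
prompt = build_scoring_prompt("Every executive should bring his laptop.")
print(prompt)
```

In a real pipeline, the returned string would be passed to the model as a user message, and the numeric score parsed from the reply; repeating this over all 87 cases would yield the pre-translation bias scores the experiment compares against.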


Key findings

The findings reveal several key insights. The first experiment suggests that although ChatGPT is sometimes less sensitive to gender bias than human readers, it can still help identify overtly biased language in source texts.

The second experiment shows that ChatGPT's translation performance depends in part on how the prompt is designed. Although ChatGPT can employ various strategies to mitigate gender bias in translation, problems such as the overuse of masculine phrasing also arise.

The final two experiments demonstrate that ChatGPT’s judgments regarding gender bias sometimes diverge from human perceptions expressed on social media, highlighting differences between algorithmic reasoning and social awareness.


Next steps

In conclusion, Xu emphasizes that reducing gender bias in translation requires collaborative efforts from AI developers, translation researchers, translators, and readers or users of translations. She also identifies promising directions for future study—such as expanding the range of translation cases, comparing the performance of different LLMs (e.g., Claude, DeepSeek, Gemini), and optimizing prompt design to make AI translation more inclusive and equitable.

Xu’s work reminds us that while AI offers powerful new tools for users, it also reproduces the social and linguistic inequalities embedded in the data it learns from. Understanding and challenging these biases is essential—not only for fairer translation, but for a more just digital world.

About the speaker

Yinan Xu is a PhD candidate in the English Department at Nanjing University. Her research focuses on translation studies, discourse analysis, and cognitive linguistics. She currently explores the impact of technology on human life through a linguistic lens.

Yinan Xu

*Image by Gordon Johnson from Pixabay