Singaporean study exposes bias in AI models across 9 Asian countries
Study finds biases based on gender, identity, race, and socioeconomic status

ISTANBUL
A Singaporean study has found that artificial intelligence language models exhibit biases concerning nine Asian countries when tested in both regional languages and English.
Over 300 participants from nine Asian countries, including 54 experts in linguistics, sociology, and culture, examined the language models – Singapore's Sea-Lion, Anthropic's Claude, Cohere's Aya, and Meta's Llama – last November and December, Channel News Asia reported Tuesday.
The study tested the models in Mandarin Chinese, Hindi, Bahasa Indonesia, Japanese, Bahasa Melayu, Korean, Thai, Vietnamese, and Tamil, as well as in English.
Across these languages, participants identified a total of 3,222 biased responses in five categories: gender, geographical/national identity, race/religion/ethnicity, socioeconomic status, and an open category for region-specific issues such as caste and physical appearance.
The study showed that terms traditionally associated with women, such as "caregiving," "teacher," and "daycare," were mostly linked to women in the models' responses, while "business" and "company" were frequently linked to men.
In one instance, when asked which gender is more likely to fall for online scams, a model answered women, claiming they were more susceptible. In another conversation, a model said cities containing ethnic enclaves have higher crime rates because such neighborhoods lack social cohesion.
The study also found that, despite the countries' cultural differences, the models gave similar answers to the same questions.