
Is algorithmic bias hurting Southeast Asia?

As more organisations adopt AI in decision-making, irresponsible development resulting in algorithmic bias could leave certain demographics marginalised.

Southeast Asia is taking the plunge into a future powered by artificial intelligence as more countries in the region incorporate the technology on a massive and unprecedented scale. But could such widespread adoption of AI tools lead to serious ethical issues?

In this analysis, Nuurrianti Jalli – an assistant professor at Oklahoma State University and expert in media and communications – explains how Southeast Asia’s headlong rush into automation leaves companies vulnerable to AI bias when standard ethical checks and balances are ignored.

How AI bias occurs

AI bias occurs when a system produces discriminatory results despite supposedly being designed for precision and objectivity. It is often caused by flaws in the training data, model design, or processing, or by other technical limitations.

Left unchecked, AI bias can skew screening processes in favour of, or against, a particular demographic.

Jalli cited facial recognition tools as an example of where AI bias can occur in the workplace. Systems trained predominantly on Caucasian faces may falter when identifying people of other races and ethnicities.

As more organisations in the region adopt AI in decision-making, Jalli warned irresponsible use of the technology could leave people in marginalised communities even further behind and locked out of opportunities.

AI bias in candidate screening

In Southeast Asia, companies often use AI systems such as speech and facial recognition for candidate screening and credit risk assessment. However, Jalli pointed out that these tools are prone to algorithmic biases that lead to unjust outcomes.

In Indonesia, an AI-based job recommendation system failed to provide a fair assessment after it unintentionally excluded female applicants from certain job opportunities. Experts believe the error was caused by historical biases in the data that the system used.
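
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python, using synthetic data rather than the Indonesian system in question: a classifier trained on historical hiring records in which equally skilled women were hired less often ends up recommending them at a lower rate.

```python
# Hypothetical sketch: a screening model can inherit historical hiring bias.
# Synthetic data only; this is not the Indonesian system described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)           # 0 = male, 1 = female
skill = rng.normal(0, 1, n)              # qualification, same distribution for both

# Historical decisions: same skill bar, but women were hired less often.
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * gender) > 0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Score a fresh pool of candidates with identical qualification scores.
test_skill = rng.normal(0, 1, 2000)
for g, label in [(0, "male"), (1, "female")]:
    X_test = np.column_stack([test_skill, np.full(2000, g)])
    print(f"recommended rate, {label}: {model.predict(X_test).mean():.2f}")
# The female rate comes out markedly lower despite identical skill scores,
# because the model has learned the historical penalty attached to gender.
```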

Jalli said AI models that rely primarily on Western-centric training data can overlook or misrepresent people in Southeast Asia, given the region's diversity of languages, skin tones, and cultures.

AI bias in credit application

There is also potential for bias when using AI for credit risk evaluation, as highlighted by digital media company KrASIA. Financial institutions, such as those in the Philippines, use AI algorithms to analyse large quantities of data and predict whether a loan applicant is likely to default.

However, KrASIA warned that AI can propagate socio-economic prejudice when fed biased training data. Systems that factor in residential location, for example, may unjustly disqualify people living in less affluent neighbourhoods from credit. This can result in digital redlining, where certain communities are denied access to financial services because of systemic disadvantage.
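
A rough sketch of the proxy effect, using made-up data rather than any real lender's model: even when the socio-economic group label is dropped from the inputs, a neighbourhood feature that correlates with it can reproduce the same disparity.

```python
# Hypothetical sketch of digital redlining: the model never sees the
# socio-economic group directly, but 'neighbourhood' acts as a proxy for it.
# Synthetic data only; not any particular lender's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 8000
low_income = rng.random(n) < 0.4                          # structural disadvantage
neighbourhood = np.where(low_income,                      # location tracks income group
                         rng.random(n) < 0.85,
                         rng.random(n) < 0.15)
repay_ability = rng.normal(0, 1, n)
# Historical defaults reflect disadvantage as well as actual repayment ability.
default = (rng.normal(0, 1, n) - repay_ability + 0.9 * low_income) > 0.5

# Train on ability and neighbourhood only; the group label is deliberately dropped.
X = np.column_stack([repay_ability, neighbourhood])
model = LogisticRegression().fit(X, ~default)             # predict "approve"

approved = model.predict(X)
for flag, label in [(False, "other neighbourhoods"), (True, "poorer neighbourhoods")]:
    print(f"approval rate, {label}: {approved[neighbourhood == flag].mean():.2f}")
# Approval rates still diverge by neighbourhood, because the proxy feature
# carries much of the same information as the dropped group label.
```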

Preventing AI bias and discrimination

To prevent bias in AI systems, Jalli called on governments, companies, and community groups to collaborate sustainably in building more inclusive technologies.

While there is no single solution to algorithmic bias, Jalli believes representation, accountability, and transparency in the use of automated systems can help address the issue. She said civil society groups and scholars in Southeast Asia are becoming more vocal in guarding against automated discrimination.

Likewise, KrASIA urged organisations to use diverse and representative data for their systems. They should identify and address historical biases, as well as conduct regular audits of their AI tools.
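
One simple form such an audit can take is a disparate-impact check. The sketch below is a generic illustration rather than KrASIA's own recommendation: it compares selection rates across groups and flags any group falling below the commonly used four-fifths threshold.

```python
# Hypothetical audit helper: compare selection rates across groups and flag any
# group whose rate falls below 80% of the highest rate (the four-fifths rule).
from collections import defaultdict

def disparate_impact_audit(decisions, threshold=0.8):
    """decisions: iterable of (group_label, selected) pairs, selected is a bool."""
    counts = defaultdict(lambda: [0, 0])              # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    flagged = {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}
    return rates, flagged

# Made-up screening log for illustration.
log = ([("group_a", True)] * 62 + [("group_a", False)] * 38
       + [("group_b", True)] * 41 + [("group_b", False)] * 59)
rates, flagged = disparate_impact_audit(log)
print("selection rates:", rates)       # group_a: 0.62, group_b: 0.41
print("below threshold:", flagged)     # group_b flagged at ~0.66 of group_a's rate
```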

The group also stressed the importance of acknowledging the potential risks of AI bias. By addressing these issues head-on, Southeast Asian countries can ensure such technologies are developed responsibly and ethically.


