
AI Ethics: Are HR leaders doing enough to foster ethical AI?

Keeping AI systems free of biases requires the concerted effort of HR leaders and AI developers.

Southeast Asia’s growing interest in artificial intelligence is causing concern among experts, who warn that blanket integration of the technology could lead to serious ethical issues.

Academic and media expert Nuurrianti Jalli urged companies in Southeast Asia to exercise caution when adopting AI in their business processes.

If not implemented responsibly, automation in areas such as talent recruitment and performance management could produce biased outcomes, especially when standard ethical checks and balances are ignored, Jalli said.

Ethical issues linked to AI

As an example, Jalli noted a case in Indonesia where an AI system meant to recommend jobs to users unintentionally excluded women from certain work opportunities. The failure was reportedly caused by historical biases ingrained in the data used to train the AI.

There is also growing concern about the potential for AI to invade users’ privacy. Digital media company KrASIA pointed to the popularity of AI-based learning apps among students in Singapore. These platforms offer personalised, interactive language lessons through the help of AI.

However, the amount of data these learning programs require from users creates a risk of privacy violations. To use the apps, students must grant access to their performance data, learning styles, and even personal details, increasing the potential for sensitive information to be leaked if the data is mishandled.

A similar dilemma exists for employees whose companies implement AI systems to track their work productivity. In 2022, PwC revealed that 95% of HR leaders have already adopted or are planning to adopt AI tools to monitor the performance of their remote workers.

In both cases, the use of AI leaves students and workers more vulnerable to privacy violations and the misuse of their personal data.


AI Ethics: How businesses are responding

The ethical issues associated with AI use are not new to companies. In fact, a recent Deloitte report showed that 88% of business leaders are taking an active role in communicating the ethical use of the technology to their employees.

Many executives believe publishing clear policies and guidelines about proper AI use is the most effective way of educating the workforce. Other popular policies include offering workshops as well as training for workers about using AI responsibly.

The same Deloitte study revealed that more than half (55%) of C-suite leaders believe it is very important to have ethical guidelines for the use of emerging technologies such as generative AI, given their relevance to the company’s revenue, brand reputation, and marketplace trust.

Top company leaders are also closely involved in implementing ethical AI practices, with 52% of board directors and chief ethics officers having had a say in the creation of policies and guidelines.

This concerted effort among business leaders may explain why nearly half (49%) of organisations already have policies or guidelines on ethical AI use. Meanwhile, another 37% of those surveyed by Deloitte said they plan to roll out such policies and guidelines.


Safeguarding against unethical AI use


Before implementing automation tools in the workplace, HR leaders need to communicate clear policies on how data will be collected and used. It is also best to have the consent of employees prior to adopting any AI processes.

Having regular human oversight over the use of AI ensures the information the AI provides is accurate and fair. It also minimises the risk that the data is used inappropriately.

Meanwhile, AI developers and technology providers should regularly audit their AI models to ensure biases do not creep into their systems. They should also undergo diversity training so they know how to handle information from different sources appropriately.

Before engaging an AI vendor, businesses should ask HR leaders and vendors how they address potential biases in their systems. Vendors need to explain clearly how data will be collected and used, and how they will safeguard employee and customer information against possible leaks.

HR leaders and AI vendors should also communicate how they will address issues with their AI tools if problems arise.

Veteran HR leader Amanda Halle offered valuable insights on how companies can navigate the ethical issues of AI. “It all begins with learning, education, and training,” Halle said in her advice to business leaders.

“Trust and support come when you clearly understand, define, and communicate the what, the why, and the how of AI at your organisation.”


