Claims that robots improve workplace diversity are harmful: new study
Artificial intelligence recruitment tools aim to screen a high volume of applicants while removing human bias and discrimination, but a new study suggests that such AI tools may be “spurious and dangerous.”
To reduce gender and racial discrimination, scientists created AI recruitment tools that depend on algorithms that read a person’s vocabulary, speech patterns, and even microchanges in facial expressions. These AI tools help Human Resources assess a massive pool of job applicants for the right “culture” fit and personality type.
However, a new study published in the journal Philosophy & Technology argues that this claim may be both false and harmful. The study contends that some uses of AI in recruitment are little more than an “automated pseudoscience” akin to phrenology or physiognomy – the discredited beliefs that character can be inferred from skull shape and facial features.
The study, conducted by a team from Cambridge’s Centre for Gender Studies, describes how the researchers worked with computer science undergraduates to demystify these hiring techniques by building their own AI tool modelled on the commercial technology.
This “Personality Machine” demonstrated how arbitrary changes in clothing, facial expressions, background, and lighting can produce radically different personality readings. Such readings could spell the difference between rejection and progression for job seekers vying for the same position.
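To see why this matters, consider a minimal sketch (purely hypothetical, not the researchers’ actual tool) of a naive “personality” scorer whose output correlates with a superficial image statistic such as brightness. Any model whose features pick up on lighting will shift its verdict in the same way, even when the applicant and their answers are identical:

```python
# Toy illustration only: a fake "openness" score driven by mean pixel
# brightness -- the kind of irrelevant confound the study demonstrated.

def personality_score(pixels):
    """Map mean pixel brightness (0-255) to a fake 'openness' score (0-100).

    Real video-interview tools are far more complex, but any model whose
    learned features correlate with lighting is vulnerable to this shift.
    """
    mean_brightness = sum(pixels) / len(pixels)
    return round(mean_brightness / 255 * 100, 1)

# Same applicant, same answers -- only the room lighting differs.
dim_room = [60] * 100                      # poorly lit webcam frame
bright_room = [p + 90 for p in dim_room]   # identical frame, brighter lamp

print(personality_score(dim_room))     # 23.5
print(personality_score(bright_room))  # 58.8
```

Two candidates with the same qualifications could thus receive very different “personality” verdicts purely because of their lamps, which is the study’s core objection to treating such scores as objective.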
The study argues that using AI recruitment tools to narrow candidate pools may ultimately promote uniformity rather than diversity in the workplace because the tool is calibrated to look for the company’s fantasy “ideal candidate.”
This means that applicants with the right background and training could game the algorithms by rehearsing the behaviours the AI is designed to detect, then carrying those performed attitudes into the workplace.
Dr Eleanor Drage, one of the study’s co-authors, believes that by claiming that sexism, racism, and other types of discrimination can be removed from the hiring process using AI tools, employers reduce gender and race to “insignificant data points” rather than systems of power.
The researchers conclude that these AI tools are dangerous examples of “techno-solutionism”: turning to technology for quick fixes to deep-rooted discrimination problems that actually require changes in company culture.