Artificially intelligent hiring tools do not reduce bias or improve diversity, a study has found.
"There is growing interest in new ways of solving problems such as interview bias," the Cambridge University researchers write in the journal Philosophy and Technology.
The use of AI in hiring is becoming widespread, they say - but its analysis of candidate videos or applications is "pseudoscience".
A 2020 international survey of 500 human-resources professionals suggested nearly a quarter were using AI for "talent acquisition, in the form of automation."
Some companies have found these tools problematic, the study notes.
In 2018, Amazon announced it had scrapped the development of an AI-powered recruitment engine because it could detect gender from CVs and discriminated against female applicants.
Of particular concern to the researchers were tools to "analyze the minutiae of a candidate's speech and bodily movements" to see how closely they resembled a company's supposed ideal employee.
The researchers built their own simplified AI recruitment tool, to rate candidates' photographs for the "big five" personality traits: agreeableness, conscientiousness, extroversion, neuroticism and openness.
But the ratings were skewed by many irrelevant variables.
"When you use our tool, you can see that your personality score changes when you alter the contrast/brightness/saturation," the researchers wrote.
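The effect they describe can be sketched with a toy image scorer - a minimal, hypothetical model (random weights, not the study's actual tool) that maps raw pixel values to a "trait" score. Because the score depends directly on pixel intensities, changing only the brightness or contrast of the same photo shifts the rating:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "personality scorer": random weights over a
# 32x32 grayscale image, standing in for an image-based rater.
weights = rng.normal(size=(32, 32))

def trait_score(image):
    """Return a scalar 'trait' score from a 32x32 grayscale image in [0, 1]."""
    return float((image * weights).sum() / image.size)

face = rng.uniform(0.2, 0.8, size=(32, 32))  # stand-in candidate photo

base = trait_score(face)
# Same candidate, brighter photo:
brighter = trait_score(np.clip(face + 0.2, 0.0, 1.0))
# Same candidate, higher contrast:
contrast = trait_score(np.clip((face - 0.5) * 1.5 + 0.5, 0.0, 1.0))

# The "personality" rating moves even though the candidate is unchanged.
print(base, brighter, contrast)
```

Any model whose input is the raw image inherits this sensitivity unless it is explicitly trained or normalised to ignore such photographic variables - which is the crux of the researchers' criticism.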
Other investigations had reached a similar conclusion.
A German public broadcaster found wearing glasses or a headscarf in a video changed a candidate's scores.
Research from the Chartered Institute of Personnel and Development suggested only 8% of employers used AI to select candidates.

"AI can efficiently help increase an organization's diversity by filtering from a larger candidate pool - but it can also miss out on lots of good candidates if the rules and training data are incomplete or inaccurate," a report concluded.

"AI software to analyze candidates' voice and body language in recruitment is in its infancy and therefore carries both opportunities and risks."