So-called emotion recognition technology is in its infancy. But artificial intelligence companies claim it has the power to transform recruitment.
Their algorithms, they say, can decipher how enthusiastic, bored or honest a job applicant may be — and help employers weed out candidates with undesirable characteristics. Employers, including Unilever, are already beginning to use the technology.
London-based Human, founded in 2016, is a start-up that analyses video-based job applications. The company claims it can spot the emotional expressions of prospective candidates and match them with personality traits — information its algorithms collect by deciphering subliminal facial expressions when the applicant answers questions.
Human sends a report to the recruiter detailing candidates’ emotional reactions to each interview question, with scores indicating how “honest” or “passionate” an applicant appears to be.
“If [the recruiter] says, ‘We are looking for the most curious candidate,’ they can find that person by comparing the candidates’ scores,” says Yi Xu, Human’s founder and chief executive.
Recruiters can still assess candidates at interview in the conventional way, but there is a limit to how many they can meet or the number of video applications they can watch. Ms Xu says her company’s emotion recognition technology helps employers screen a larger pool of candidates and shortlist people they may not have considered otherwise.
“An interviewer will have bias, but [with technology] they don’t judge the face but the personality of the applicant,” she says. One aim, she claims, is to overcome ethnic and gender discrimination in recruitment.
The algorithms of Affectiva and Human are based at least partially on Facs, the Facial Action Coding System. A specialist first labels the emotions in hundreds or thousands of images (videos are analysed frame by frame) before an algorithm processes them — the training phase.
During training, the algorithm’s predictions are compared with the manual labels assigned by the Facs specialist. Errors are fed back and the model adjusts itself. The process is repeated with further labelled images until the error is minimised.
Once the training is done, the algorithm can be introduced to images it has never seen and it makes predictions based on its training.
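The training loop described above — predict, compare with a human label, adjust, repeat — is the core of supervised learning. The sketch below illustrates it with a deliberately toy setup: two hypothetical per-frame features (say, mouth-corner lift and brow raise) and a binary “happy” label. It is a minimal illustration of error-driven training, not the model used by Human or Affectiva.

```python
import math

def predict(weights, bias, features):
    # Weighted sum of features, squashed to (0, 1) by a logistic function.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    # Start from an uninformed model and repeatedly reduce its error
    # against the manually labelled examples.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in zip(samples, labels):
            p = predict(weights, bias, features)
            error = p - label                  # compare with the human label
            for i in range(len(weights)):      # adjust the model
                weights[i] -= lr * error * features[i]
            bias -= lr * error
    return weights, bias

# Toy labelled frames: high feature values were labelled "happy" (1).
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels  = [1, 1, 0, 0]
weights, bias = train(samples, labels)

# Once trained, the model can score frames it has never seen.
score = predict(weights, bias, (0.85, 0.9))
```

After enough passes, the model separates the two groups of toy frames; a new frame’s score above 0.5 would be read as “happy”. Real systems differ in scale and architecture, but the label-predict-adjust cycle is the same.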
Frederike Kaltheuner, policy adviser on data innovation at Privacy International, a global campaigning organisation, agrees that human interviewers can be biased. But, she says, “new systems bring new problems”.