Improving AI Hiring: Behavioral Measures and Field Validation
Dargnies, Hakimov & Kübler, "Behavioral Measures Improve AI Hiring: A Field Experiment," CRC Discussion Paper No. 532
The expansion of artificial intelligence (AI) into hiring processes is often hindered by the scarcity of comprehensive employee data for training algorithms. Addressing this fundamental limitation, Marie-Pierre Dargnies (University of Paris Dauphine), Rustamdjan Hakimov (University of Lausanne), and Dorothea Kübler (WZB Berlin, Project A06) hypothesized that incorporating behavioral measures elicited directly from applicants could significantly boost an algorithm's predictive accuracy.
Studying microfinance loan officers, the authors trained a random-forest algorithm to predict productivity, specifically whether an employee qualified for a bonus within the first year of employment. An algorithm relying solely on the demographic data available to the firm achieved 65.1% accuracy in an out-of-sample test. Critically, adding non-incentivized behavioral measures significantly improved performance by almost five percentage points, to 69.7% accuracy. This finding is particularly important for implementation, since non-incentivized measures are easier to elicit than incentivized ones and proved substitutable for them. Analysis revealed that the predictive algorithm relied most heavily on the non-incentivized traits: the three most important variables were self-reported risk aversion, patience, and confidence in one's answers on a cognitive test.
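The mechanics of this comparison are straightforward to illustrate. Below is a minimal sketch in Python using scikit-learn, with synthetic data and hypothetical feature names standing in for the paper's variables; the study's actual features, preprocessing, and tuning are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic applicant data; column names are illustrative assumptions,
# not the paper's actual variables.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    # demographic features available to the firm
    "age": rng.integers(20, 55, n),
    "education_years": rng.integers(10, 18, n),
    # non-incentivized behavioral measures
    "risk_aversion": rng.uniform(0, 10, n),
    "patience": rng.uniform(0, 10, n),
    "confidence": rng.uniform(0, 1, n),
})
# Synthetic bonus outcome, loosely driven by the behavioral traits.
y = ((0.3 * X["patience"] - 0.2 * X["risk_aversion"]
      + 5 * X["confidence"] + rng.normal(0, 2, n)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

demographic = ["age", "education_years"]
behavioral = demographic + ["risk_aversion", "patience", "confidence"]

# Fit one forest on demographics alone, one with behavioral measures
# added, and compare out-of-sample accuracy.
for label, cols in [("demographics only", demographic),
                    ("plus behavioral", behavioral)]:
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train[cols], y_train)
    acc = accuracy_score(y_test, clf.predict(X_test[cols]))
    print(f"{label}: out-of-sample accuracy = {acc:.3f}")
```

On the fitted model, `clf.feature_importances_` yields a variable-importance ranking, the analogue of the ranking that, in the study, placed risk aversion, patience, and confidence at the top.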
A key contribution of this research is the validation of the algorithm in a real-world setting, testing its robustness both to the selective nature of the training sample and to strategic manipulation by applicants. A field experiment on hiring new employees confirmed the algorithm's strong predictive power for bonus eligibility. Although applicants made some attempts at strategic misreporting, such as reporting lower neuroticism, the predictive capacity remained intact, suggesting that economic measures may be more robust to manipulation than psychological ones.
In the field experiment, algorithmic hiring proved marginally more efficient than traditional managerial hiring: among employees still with the firm at the time of the bonus decision, those selected via the AI treatment were significantly more likely to receive a bonus. This study provides a constructive framework for building targeted hiring algorithms from measures rooted in behavioral economics and psychology, moving beyond the “one-size-fits-all” approaches commonly offered by third-party services.
Link (pdf): Behavioral Measures Improve AI Hiring: A Field Experiment


