Using algorithms to assess people’s potential or appraise their performance is controversial, because such systems are so easily skewed by bias.
 
Yet Susana Almeida Lopes says her own work on creating algorithms shows there is a way round the problem of AI systems reflecting the inherent bias of the people who program or commission them.
 
It’s about avoiding building black boxes, where we have no real idea of how a machine arrives at a decision.
 
“No one wants to be appraised by a machine and if you don’t know what is in the machine it is even worse. It’s not fair,” she says.
 
Simple predictive algorithms used in a law firm may well mimic the biases and prejudices of partners – that is how they arrive at their predictions.
 
“But you can go in a different direction,” she insists. “You can make an algorithm designed to avoid biases, directing it instead to be more objective.” 
 
Understanding the criteria used means the bias becomes transparent. “If it’s transparent, it’s not a black box.”
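The contrast Lopes draws can be illustrated with a small sketch. This is not her actual system; it is a hypothetical, purely illustrative scoring model in which the criteria and their weights are published up front, so any decision can be audited line by line rather than emerging from an opaque model.

```python
# Hypothetical, illustrative only: a transparent appraisal model whose
# criteria and weights are fully inspectable, unlike a black box.
CRITERIA_WEIGHTS = {
    "client_feedback": 0.40,
    "case_outcomes": 0.35,
    "peer_review": 0.25,
    # Deliberately excluded: proxy features that can smuggle in bias,
    # e.g. hours billed late at night, or which school someone attended.
}

def appraise(scores: dict) -> float:
    """Weighted average over the published criteria (each score 0-100)."""
    return sum(CRITERIA_WEIGHTS[k] * scores[k] for k in CRITERIA_WEIGHTS)

def explain(scores: dict) -> dict:
    """Per-criterion contribution, so the decision can be audited."""
    return {k: CRITERIA_WEIGHTS[k] * scores[k] for k in CRITERIA_WEIGHTS}

candidate = {"client_feedback": 80, "case_outcomes": 70, "peer_review": 90}
print(appraise(candidate))  # 79.0
print(explain(candidate))
```

Because every criterion and weight is declared rather than learned from historical (and possibly prejudiced) decisions, the model’s assumptions are open to challenge – which is the point of Lopes’s “if it’s transparent, it’s not a black box” remark.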
 
 
An interview with Susana Almeida Lopes,
Managing Partner, SHL Portugal