Bias in algorithms isn’t only a product of biased training data. Often the bias is embedded in the artefacts and design of the system itself.
“If the data is biased, the outcome may inevitably be biased, because AI learning algorithms adjust their parameters based on the data on which they are trained – that’s how it is,” says Manuela Veloso. “But the bias may also be present in the choices underlying the design of the algorithm, not just in the data.”
“The teams that develop AI algorithms need to be diverse too,” she says.
“An important way to address AI algorithm bias is to diversify the development team and, hopefully, reduce the problem to one of good data selection.”
Regulation does not currently address this need for diversity or its impact on the development of good AI. It may prove very hard to devise and enforce rules that penalise companies whose teams are not diverse enough, but policy should at least encourage the practice.
“I don’t know how we would impose this behaviour,” she says. “But it would be interesting if companies that are more diverse were also valued more highly than others.”


An interview with Manuela Veloso, Head of AI Research at J.P. Morgan and Professor at Carnegie Mellon University