Automated employment decision tools could harbor biases
(Washington News Service) A legal expert is warning that artificial intelligence tools could lead to discriminatory hiring and firing practices.
Businesses are increasingly using algorithms and AI in the form of Automated Employment Decision Tools.
But Hardeep Rekhi, partner at the Seattle law firm Rekhi &amp; Wolk, said these tools can be trained on data that makes them inherently biased.
He noted that Amazon, for example, developed one of these tools to review resumes, but abandoned it in 2018 after finding it was eliminating female candidates.
"This tool is only as good as the data that it's being trained on," said Rekhi, "and if that data is tainted by individuals that have bias, I worry that the tool itself will be mis-trained."
Rekhi said it is also hard to know how AI uses data to make its decisions, and claimed these tools are essentially "black boxes" that could be discriminating against certain classes of people without the user knowing.
Rekhi said there are ways to protect people from this. During this year's legislative session in Olympia, House Bill 1951 was introduced to prohibit "algorithmic discrimination" by automated tools.
The bill didn't pass, but Rekhi said it was on the right path.
"You have to put the onus on developers of the tool and users of the tool," said Rekhi, "to make sure that whatever tool they're using isn't discriminating, and that's what the Legislature has proposed."
Rekhi said automated tools for business decisions like hiring and firing threaten the many gains made in employment practices in recent decades.
"We've worked so hard, and we've made significant progress in the field of trying to eliminate discrimination in the workplace," said Rekhi, "and I don't want this to, kind of, undo that or to hide that."