
Delegating Bias and Discrimination to Computer Systems

Should 2022 be the year that we turn over decision-making to artificial intelligence? Writing in the Washington Post, Steven Zeitchik suggests it should. We could banish fears of making bad decisions, he says. But we beg to differ. A growing body of evidence tells us that computer systems can replicate the bad decisions we make – embedding systematic bias and discrimination. Outsourcing bad decisions doesn’t make them good ones. Rather, it simply hides them from plain view.

This is why New York, Illinois, and Maryland have put limits on the use of AI for hiring. Computer systems that screen job applicants might make the process more efficient, but they can also replicate patterns of bias and discrimination in hiring.

Coping with a Deluge of Job Applicants

In part, computer systems have created the need for this. They’ve made it easy for large numbers of people to learn about job openings and bury an employer in applications. AI can help narrow hundreds of applications down to a few that get close scrutiny.

The problem comes in the bias that finds its way into the algorithms that screen those applications, says Nicol Turner Lee at Brookings:

“Computers are programmed by humans, so they come with the same values, norms, and assumptions that humans hold.”

This is what led Amazon to scrap an AI recruiting system. In practice, it screened out women. That wasn’t the explicit intent. The system merely learned its biases from humans. Likewise, it’s not a stretch to think that systems using automated video interviews could embed anti-fat bias, as well as racial and ethnic bias.

Benefits Outside the Black Box

Of course, we should not dismiss the usefulness of AI, as Zeitchik explains:

“There are growing piles of evidence that deploying AI that can think faster and even differently will pay dividends in the real world. A Stanford study last month concluded that AI sped up discoveries on coronavirus antiviral drugs by as much as a month, potentially saving lives. Canadian researchers in September found that AI made consistently better choices than doctors in treating behavioral problems. Even a button-down institution like Deloitte has a staffer who has persuasively argued that we should use AI, not humans, to update government regulations.”

But we should recognize the pitfalls of outsourcing our biased thinking to black-box computer systems. If not, AI will simply help us make more bad decisions, and make them faster.

Click here and here for more perspective on AI in employment decisions. For Zeitchik’s essay in the Post, click here.

Robot, photograph by Alex Knight, licensed by Unsplash

Subscribe by email to follow the accumulating evidence and observations that shape our view of health, obesity, and policy.


January 1, 2022