The Bigot in the Machine

We live in an age of algorithms and machine learning, says Professor Barbara Fister. But we should be aware that a bigot can find its way into the machine. She explains:

“A provider of healthcare decision-making software that helps manage care for some 200 million people each year wanted to create an algorithm to flag patients who would likely need follow-up care after hospitalization. One set of data they used to determine need was how much money had been spent on a patient’s care in the past. The assumption was that everyone had equal access to healthcare. [Yeah . . . no.]

“Using this algorithm, fewer than twenty percent of the patients it flagged for follow-up care were Black. But when researchers examined the underlying data, they found that the percentage should have been closer to fifty. The algorithm failed millions of patients.

“Algorithmic bias is not inevitable, and it is not insurmountable. Algorithms are made by humans making human decisions. With effort, humans can work toward systems that are less biased. In the meantime, it’s good to be aware that algorithms aren’t entirely fair and dispassionate sorters of information.”
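To make the mechanism concrete, here is a minimal sketch in Python of how that proxy goes wrong. It is not the vendor’s actual system, and the numbers are illustrative assumptions, not data from the study: two simulated groups have identical underlying need, but one generates less spending per unit of need, so ranking patients by spending under-flags that group.

import numpy as np

# A toy simulation of proxy bias: spending stands in for need,
# but access to care differs between groups. All numbers are
# illustrative assumptions, not data from the study.
rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)                   # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # identical need in both groups

# Unequal access: group B generates less spending per unit of need.
access = np.where(group == 1, 0.6, 1.0)
spending = need * access * rng.lognormal(0.0, 0.2, n)

# The "algorithm": flag the top 20% of patients by spending.
flagged = spending >= np.quantile(spending, 0.80)

# Compare who gets flagged with who actually has the highest need.
high_need = need >= np.quantile(need, 0.80)
for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: flagged {flagged[mask].mean():.1%}, "
          f"truly high-need {high_need[mask].mean():.1%}")

Run it and both groups show roughly twenty percent true high need, but the group with less access to care gets flagged far less often. That is the pattern the researchers found.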

Ethics and Rigor for Machine Learning in Nutrition Research

Well before we had machine learning to amplify our biases in nutrition and obesity research, those biases were a problem. People are far too willing to apply stereotypes based on body size and cultural differences in diets. Academics, who should know better, fish around for correlations with criminality and dishonesty. Weight bias and culturally skewed ideas about nutrition wreak havoc on efforts to meet people’s real needs for health and nutrition care.

So a new paper on best practices for the ethical application of machine learning in nutrition research is most welcome. Writing in Nutrition & Diabetes, Diana Thomas and colleagues focus on seven potential concerns: measurement error, selection bias, sample size calculations, missing data, data imbalance, explainability, and data literacy. They tell us:

“The quality of artificial intelligence and machine learning modeling requires iterative and tailored processes to mitigate against potential ethical problems or to predict conclusions that are free of bias.”
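What do such processes look like in practice? Below is a minimal Python sketch of the kind of pre-modeling audit those concerns point toward, covering missing data, data imbalance, and outcome rates by subgroup. The function and column names are hypothetical illustrations, not taken from the paper.

import pandas as pd

def audit_dataset(df: pd.DataFrame, outcome: str, group: str) -> None:
    """Print simple diagnostics before any model is fit."""
    # Missing data: fraction of missing values per column.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Missing-value fraction by column:")
    print(missing[missing > 0])

    # Data imbalance: distribution of the outcome label.
    print("\nOutcome distribution:")
    print(df[outcome].value_counts(normalize=True))

    # A crude check on selection bias: outcome rates by subgroup.
    print("\nOutcome rate by subgroup:")
    print(df.groupby(group)[outcome].mean())

# Hypothetical usage:
# df = pd.read_csv("nutrition_cohort.csv")
# audit_dataset(df, outcome="needs_followup", group="race_ethnicity")

Checks like these don’t remove bias by themselves, but they make gaps and imbalances visible before a model can bake them in.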

The bigot in the machine comes from the biases within ourselves. Awareness is the first step toward excising it.

Click here for the new paper from Thomas et al. and here for more from Barbara Fister. For further perspective from the Irish Times, click here.

People of Chilmark, painting by Thomas Hart Benton / Wikipedia

December 7, 2022