Researchers Work to Make Artificial Intelligence Genuinely Fair
Artificial intelligence (AI) algorithms help make online shopping seamless, calculate credit scores, navigate vehicles and even offer judges criminal sentencing guidelines.
But as the use of AI rapidly expands, so does the concern that biased data can result in flawed decisions or prejudiced outcomes.
Now, backed by a combined $1.6 million in funding from the National Science Foundation (NSF) and Amazon, two teams of University of Maryland researchers are working to eliminate those biases by developing new algorithms and protocols that can improve the efficiency, reliability and trustworthiness of AI systems.
Out of 11 proposals that were accepted this year by the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon, two are led by UMD faculty.
The program’s goals are to increase accountability and transparency in AI algorithms and make them more accessible so that the benefits of AI are available to everyone. This includes machine learning algorithms—a subset of AI in which computerized systems are “trained” on large datasets so they can make appropriate decisions. Machine learning is used by some colleges around the country to rank applications for admission to graduate school or to allocate resources for faculty mentoring, teaching assistantships or coveted graduate fellowships.
“As these AI-based systems are increasingly used in higher education, we want to make sure they render representations that are accurate and fair, which will require developing models that are free of both human and machine biases,” said Furong Huang, an assistant professor of computer science who is leading one of the UMD teams.
That project, “Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion,” received $625,000 from NSF with an additional $375,000 from Amazon.
A key part of the research, Huang said, is to develop “dynamic fairness classifiers” that allow the system to train on constantly evolving data and then make multiple decisions over an extended period. This requires feeding the AI system historical admissions data, as is normally done now, and consistently adding student-performance data, something that is not currently done on a regular basis.
The researchers are also developing algorithms that can distinguish among notions of fairness as they relate to resource allocation. This is important for quickly identifying resources—additional mentoring, interventions or increased financial aid—for at-risk students who may already be underrepresented in the STEM disciplines.
Collaborating with Huang are Min Wu and Dana Dachman-Soled, a professor and an associate professor, respectively, in the Department of Electrical and Computer Engineering.
A second UMD team led by Marine Carpuat, an associate professor of computer science, is focused on improving machine learning models used in language translation systems—with particular focus on platforms that can accurately function in high-stakes situations like an emergency hospital visit or legal proceeding.
That project, “A Human-Centered Approach to Developing Accessible and Reliable Machine Translation,” is funded with $393,000 from NSF and $235,000 from Amazon.
Immigrants and others who don’t speak the dominant language can be hurt by poor translation, said Carpuat. “This is a fairness issue, because these are people who may not have any other choice but to use machine translation to make important decisions in their daily lives,” she said. “Yet they don’t have any way to assess whether the translations are correct or the risks that errors might pose.”
To address this, Carpuat’s team will design systems that are more intuitive and interactive to help the user recognize and recover from translation errors that are common in many systems today.
Central to this approach is a machine translation bot that will quickly recognize when a user is having difficulty. The bot will flag imperfect translations, and then help the user to craft alternate inputs—phrasing their query in a different way, for example—resulting in better outcomes.
Carpuat’s team includes Ge Gao, an assistant professor in UMD’s iSchool, and Niloufar Salehi, an assistant professor in the School of Information at UC Berkeley.
Of the six researchers involved in the Fairness in AI projects, five have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS).
“We’re tremendously encouraged that our faculty are active in advocating for fairness in AI and are developing new technologies to reduce biases on many levels,” said UMIACS Director Mihai Pop. “I’m particularly proud that the teams represent four different schools and colleges at two universities. This is interdisciplinary research at its best.”
—Story By Tom Ventsias, University of Maryland
This article was published in Maryland Today.