FAI: Advancing Fairness in AI with Human-Algorithm Collaborations

Project: Research project

Project Details

Description

Artificial intelligence (AI) systems are increasingly used to assist humans in making high-stakes decisions, such as online information curation, resume screening, mortgage lending, police surveillance, public resource allocation, and pretrial detention. While the hope is that algorithms will improve societal outcomes and economic efficiency, concerns have been raised that algorithmic systems may inherit human biases from historical data, perpetuate discrimination against already vulnerable populations, and generally fail to embody a given community's important values. Recent work on algorithmic fairness has characterized how unfairness can arise at different steps along the development pipeline, produced dozens of quantitative notions of fairness, and provided methods for enforcing these notions. However, there is a significant gap between these oversimplified algorithmic objectives and the complexities of real-world decision-making contexts. This project aims to close that gap by explicitly accounting for the context-specific fairness principles of actual stakeholders, the fairness-utility trade-offs they find acceptable, and the cognitive strengths and limitations of human decision-makers throughout the development and deployment of the algorithmic system.

To meet these goals, this project enables close human-algorithm collaborations that combine innovative machine learning methods with approaches from human-computer interaction (HCI) for eliciting feedback and preferences from human experts and stakeholders. Three main research activities correspond naturally to three stages of a human-in-the-loop AI system. First, the project will develop novel fairness elicitation mechanisms that allow stakeholders to effectively express their perceptions of fairness; going beyond the traditional approach of statistical group fairness, the investigators will formulate new individual fairness measures based on the elicited feedback. Second, the project will develop algorithms and mechanisms to manage the trade-offs between these new fairness measures and multiple existing fairness and accuracy measures. Finally, the project will develop algorithms to detect and mitigate human operators' biases, along with methods that rely on human feedback to correct and de-bias existing models during deployment of the AI system.
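To make these ideas concrete, the sketch below contrasts a statistical group fairness measure (demographic parity) with an individual fairness measure derived from elicited stakeholder judgments, and shows one naive way to trade fairness off against accuracy. This is an illustrative sketch only: the function names, the pairwise-feedback format, and the fixed-weight trade-off are assumptions made for exposition, not the project's actual mechanisms.

    # Illustrative sketch (not the project's actual methods): group fairness,
    # elicited individual fairness, and a naive fairness-accuracy trade-off.
    import numpy as np

    def demographic_parity_gap(y_hat, group):
        """Absolute difference in positive-prediction rates between two groups."""
        return abs(y_hat[group == 0].mean() - y_hat[group == 1].mean())

    def individual_fairness_violations(scores, elicited_pairs, tol=0.1):
        """Count elicited 'these two individuals are similar' pairs whose model
        scores differ by more than tol -- one simple way to turn stakeholder
        feedback into an individual fairness measure."""
        return sum(1 for i, j in elicited_pairs
                   if abs(scores[i] - scores[j]) > tol)

    def pick_threshold(scores, y_true, group, lam=1.0):
        """Toy trade-off: choose the decision threshold that minimizes
        error rate + lam * demographic parity gap."""
        best_t, best_cost = 0.0, float("inf")
        for t in np.unique(scores):
            y_hat = (scores >= t).astype(int)
            cost = ((y_hat != y_true).mean()
                    + lam * demographic_parity_gap(y_hat, group))
            if cost < best_cost:
                best_t, best_cost = t, cost
        return best_t

    # Toy data: six individuals, two groups, one elicited "similar" pair.
    scores = np.array([0.9, 0.2, 0.7, 0.8, 0.3, 0.1])
    y_true = np.array([1, 0, 1, 1, 0, 0])
    group  = np.array([0, 0, 0, 1, 1, 1])
    y_hat  = (scores >= 0.5).astype(int)

    print(demographic_parity_gap(y_hat, group))              # 0.333...
    print(individual_fairness_violations(scores, [(0, 4)]))  # 1
    print(pick_threshold(scores, y_true, group, lam=1.0))    # 0.3 on this toy data

In the project itself, such trade-offs would be navigated using elicited stakeholder preferences rather than a fixed weight like lam.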

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Finished
Effective start/end date: 10/1/17 – 4/30/21

Funding

  • National Science Foundation: $581,013.00
