ATD: Robustness, Privacy, and Fairness in Threat Detection

Project: Research project

Project Details

Description

The project focuses on three important topics within the general context of threat detection: robustness, privacy, and fairness. Robust procedures are needed for dealing with highly corrupted and noisy data. The project will consider robust generation of images and its important application to synthetic data augmentation in the presence of outliers and noise. Private procedures are needed to maintain the confidentiality of individuals when collecting aggregate data that aims to serve the public interest. Differential privacy has emerged as the predominant theoretical framework for addressing such issues. The project will develop effective differentially private algorithms for dimension reduction that are relevant to threat detection. Fair procedures are needed to avoid the influence of social biases against minorities, as well as any prejudice or favoritism toward an individual or a group based on their characteristics. This is crucial in many threat detection applications that focus on anomalies, where under-represented individuals may be unfairly profiled as anomalous and consequently threat-prone. The project will focus on fair dimension reduction algorithms that are relevant to threat detection. The resulting algorithms will be tested on different geospatial, human-dynamics, and imaging datasets.

This project will support one graduate student in the first and third years and one postdoc in the second year. The project aims to develop effective algorithms and mathematical foundations relevant to the three themes stated above. The main focus of the study of robustness will be on the generation of realistic images when the training set for the generative task is corrupted, either by noise or by outliers (e.g., images drawn from a different class, or images corrupted so severely that their typical structure cannot be recognized). The project will explore an end-to-end deep neural network to robustly generate high-fidelity images under corruption without knowing the labels of the corrupted, or anomalous, data points. The main focus of the study of privacy will be on theory and algorithms for differentially private dimension reduction. Differential privacy will be obtained by incorporating a noise mechanism. The expected theory will highlight the interaction between nonconvexity, smoothness, and robustness in relation to differential privacy. The main focus of the study of fairness will be on theory and algorithms for fair dimension reduction, where fairness will be obtained by minimizing a special non-convex energy function. The theory will highlight the interaction between nonconvexity and fairness.
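To illustrate the noise-mechanism idea behind differentially private dimension reduction, here is a minimal sketch in the spirit of the standard Gaussian-mechanism approach to private PCA (perturb the Gram matrix with symmetric Gaussian noise, then take the top eigenvectors). The function name `dp_pca`, the unit-row-norm assumption, and the noise calibration are illustrative choices, not the project's actual algorithm.

```python
import numpy as np

def dp_pca(X, k, epsilon, delta, rng=None):
    """Sketch of differentially private PCA via the Gaussian mechanism.

    Assumes each row of X has L2 norm <= 1, so adding or removing one
    row changes A = X^T X by at most 1 in Frobenius norm (sensitivity 1).
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    A = X.T @ X
    # Noise scale calibrated for (epsilon, delta)-DP at sensitivity 1.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Symmetric Gaussian noise: draw the upper triangle i.i.d., mirror it.
    U = rng.normal(scale=sigma, size=(d, d))
    E = np.triu(U) + np.triu(U, 1).T
    A_priv = A + E
    # Top-k eigenvectors of the perturbed matrix are the private directions.
    eigvals, eigvecs = np.linalg.eigh(A_priv)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]]
```

The returned directions are orthonormal regardless of the noise, since they are eigenvectors of a symmetric matrix; the privacy cost shows up only as a perturbation of the subspace they span.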

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Effective start/end date: 9/1/21 - 8/31/24

Funding

  • National Science Foundation: $300,000.00
