Learning Optimal Features via Partial Invariance

Moulik Choraria, Ibtihal Ferwana, Ankur Mani, Lav R. Varshney

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Learning models that are robust to distribution shifts is a key concern for their real-world applicability. Invariant Risk Minimization (IRM) is a popular framework that aims to learn robust models from multiple environments. Its success rests on an important assumption: that the underlying causal mechanisms/features remain invariant across environments. When this assumption fails, we show that IRM can over-constrain the predictor; to remedy this, we propose a relaxation via partial invariance. We theoretically highlight the sub-optimality of IRM and then demonstrate how learning from a partition of the training domains can improve invariant models. Experiments in linear settings as well as with deep neural networks, on tasks over both language and image data, verify our conclusions.
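To make the setup concrete, below is a minimal PyTorch sketch, not code from the paper: it pairs the standard IRMv1 penalty of Arjovsky et al. (2019) with a hypothetical partial-invariance objective that fits one invariant predictor per cell of a given partition of the training environments. The names `partial_invariance_loss`, `models`, `partition`, and `lam` are illustrative, and how the partition is chosen is left out.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 penalty: squared norm of the gradient of the risk with
    # respect to a fixed scalar multiplier on the logits.
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def partial_invariance_loss(models, env_batches, partition, lam=1.0):
    # Partial invariance (sketch): instead of forcing one predictor to be
    # invariant across ALL environments, each cell of the partition gets
    # its own predictor, which only needs to be invariant within its cell.
    total = 0.0
    for e, (x, y) in enumerate(env_batches):
        model = models[partition[e]]      # predictor for this env's cell
        logits = model(x).squeeze(-1)
        erm = F.binary_cross_entropy_with_logits(logits, y)
        total = total + erm + lam * irm_penalty(logits, y)
    return total / len(env_batches)
```

With a single-cell partition (and hence one shared model) this reduces to standard IRMv1 training; a finer partition relaxes the invariance constraint, which is the remedy the abstract proposes for settings where features are only partially invariant across environments.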

Original language: English (US)
Title of host publication: AAAI-23 Technical Tracks 6
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Pages: 7175-7183
Number of pages: 9
ISBN (Electronic): 9781577358800
State: Published - Jun 27 2023
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: Feb 7 2023 - Feb 14 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume: 37

Conference

Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States
City: Washington
Period: 2/7/23 - 2/14/23

Bibliographical note

Publisher Copyright:
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
