Research talk: Enhancing the robustness of massive language models via invariant risk minimization
- Robert West | EPFL
- Microsoft Research Summit 2021 | Causal Machine Learning
Despite the dramatic recent progress in natural language processing (NLP) afforded by large pretrained language models, important limitations remain. A growing body of work demonstrates that such models are easily fooled by adversarial attacks and generalize poorly out of distribution, as they tend to learn spurious, non-causal correlations. This talk explores how to reduce the impact of spurious correlations in large language models using the so-called invariance principle, which states that only relationships invariant across training environments should be learned. It presents results showing that language models trained via invariant risk minimization (IRM), rather than traditional empirical risk minimization (ERM), achieve better out-of-distribution generalization.
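For context, below is a minimal sketch of the IRMv1 objective from Arjovsky et al. (2019), the general formulation that IRM training builds on. The talk's exact setup for language models is not specified here; the binary-classification loss, the function names, and the `environments` structure are illustrative assumptions, not the speaker's implementation.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1 penalty: squared gradient of the per-environment risk with
    # respect to a fixed scalar "dummy" classifier w = 1.0. A gradient near
    # zero means the same classifier is (locally) optimal in this environment.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, environments, penalty_weight=1e2):
    # Average per-environment risk plus the invariance penalty.
    # `environments` is assumed to be a list of (inputs, float_labels)
    # batches, one per training environment (a hypothetical structure).
    risk = penalty = 0.0
    for x, y in environments:
        logits = model(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    n = len(environments)
    return risk / n + penalty_weight * (penalty / n)
```

Setting `penalty_weight = 0` recovers plain ERM pooled over environments; larger weights increasingly force the model to rely only on features whose predictive relationship is stable across environments.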
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
Causal Machine Learning

Opening remarks: Causal Machine Learning
- Cheng Zhang

Research talk: Challenges and opportunities in causal machine learning
- Amit Sharma
- Cheng Zhang
- Emre Kiciman

Research talk: Causal ML and business
- Jacob LaRiviere

Research talk: Causality for medical image analysis
- Daniel Coelho de Castro

Research talk: Causal ML and fairness
- Allison Koenecke

Panel: Causal ML Research at Microsoft
- Adith Swaminathan
- Javier González Hernández
- Justin Ding

Research talk: Post-contextual-bandit inference
- Nathan Kallus

Demo: Enabling end-to-end causal inference at scale
- Eleanor Dillon
- Amit Sharma

Panel: Causal ML at Microsoft
- Juan Lavista Ferres
- Mingqi Wu
- Sonia Jaffe

Panel: Causal ML in industry
- Greg Lewis
- Ya Xu
- Totte Harinen

Closing remarks: Causal Machine Learning
- Emre Kiciman