Interactive Grounded Language Understanding in a Collaborative Environment: IGLU 2021

Open Access
Authors
  • Julia Kiseleva
  • Maartje ter Hoeve
  • Mikhail Burtsev
  • Alexey Skrynnik
  • Artem Zholus
  • Aleksandr Panov
  • Kavya Srinet
  • Arthur Szlam
  • Yuxuan Sun
  • Marc-Alexandre Côté
  • Katja Hofmann
  • Ahmed Awadallah
  • Linar Abdrazakov
  • Igor Churin
  • Putra Manggala
  • Kata Naszadi
  • Michiel van der Meer
  • Taewoon Kim
Publication date 2022
Journal Proceedings of Machine Learning Research
Event NeurIPS 2021
Volume | Issue number 176
Pages (from-to) 146-161
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Human intelligence has the remarkable ability to quickly adapt to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment. The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants. This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring two communities together to approach one of the crucial challenges in AI.
Document type Article
Note Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, 6-14 December 2021, Online.
Language English
Published at https://doi.org/10.48550/arXiv.2205.02388
Published at https://proceedings.mlr.press/v176/kiseleva22a.html
Downloads
kiseleva22a (Final published version)