Exploiting Inferential Structure in Neural Processes

Open Access
Publication date 2023
Journal Proceedings of Machine Learning Research
Event 39th Conference on Uncertainty in Artificial Intelligence, UAI 2023
Volume 216
Pages 2089-2098
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Neural Processes (NPs) are appealing due to their ability to perform fast adaptation based on a context set. This set is encoded by a latent variable, which is often assumed to follow a simple distribution. However, in real-world settings, the context set may be drawn from richer distributions with multiple modes, heavy tails, etc. In this work, we provide a framework that allows NPs' latent variable to be given a rich prior defined by a graphical model. These distributional assumptions directly translate into an appropriate aggregation strategy for the context set. Moreover, we describe a message-passing procedure that still allows for end-to-end optimization with stochastic gradients. We demonstrate the generality of our framework by using mixture and Student-t assumptions that yield improvements in function modelling and test-time robustness.
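
For orientation, the sketch below shows the standard NP latent pathway that the abstract builds on: context pairs are embedded independently and mean-aggregated into the parameters of a single Gaussian over the latent variable. This simple-distribution baseline is what the paper generalises with richer graphical-model priors. The module, dimension names, and sizes here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    # Baseline NP latent path (illustrative sketch): embed each (x, y)
    # context pair, mean-aggregate, and map to a Gaussian over the latent z.
    def __init__(self, x_dim=1, y_dim=1, hidden_dim=128, z_dim=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.to_mu = nn.Linear(hidden_dim, z_dim)
        self.to_logvar = nn.Linear(hidden_dim, z_dim)

    def forward(self, x_ctx, y_ctx):
        # x_ctx: (batch, num_context, x_dim); y_ctx: (batch, num_context, y_dim)
        r = self.embed(torch.cat([x_ctx, y_ctx], dim=-1))
        r = r.mean(dim=1)  # permutation-invariant aggregation of the context set
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        # Simple-distribution assumption: a diagonal Gaussian over z. The paper
        # instead equips z with richer priors (e.g. mixtures, Student-t) whose
        # structure dictates the aggregation, handled via message passing.
        return torch.distributions.Normal(mu, torch.exp(0.5 * logvar))

encoder = LatentEncoder()
q_z = encoder(torch.randn(8, 10, 1), torch.randn(8, 10, 1))
z = q_z.rsample()  # (8, 64) latent samples to condition the decoder on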
Document type Article
Note Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, 31 July - 4 August 2023, Pittsburgh, PA, USA. With supplementary material.
Language English
Published at https://proceedings.mlr.press/v216/tailor23a.html
Other links https://openreview.net/forum?id=MbQKovZFHIH