A Variational Perspective on Generative Flow Networks

Open Access
Authors
Publication date 04-2023
Journal Transactions on Machine Learning Research
Article number 612
Volume | Issue number 2023
Number of pages 16
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Generative flow networks (GFNs) are a class of probabilistic models for sequential sampling of composite objects, proportional to a target distribution that is defined in terms of an energy function or a reward. GFNs are typically trained using a flow matching or trajectory balance objective, which matches forward and backward transition models over trajectories. In this work we introduce a variational objective for training GFNs, which is a convex combination of the reverse and forward KL divergences, and compare it to the trajectory balance objective when sampling from the forward and backward models, respectively. We show that, in certain settings, variational inference for GFNs is equivalent to minimizing the trajectory balance objective, in the sense that both methods compute the same score-function gradient. This insight suggests that in these settings, control variates, which are commonly used to reduce the variance of score-function gradient estimates, can also be used with the trajectory balance objective. We evaluate our findings and the performance of the proposed variational objective numerically by comparing it to the trajectory balance objective on two synthetic tasks.
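To illustrate the variance-reduction idea from the abstract, the sketch below shows a generic score-function (REINFORCE) estimator for the gradient of a reverse KL divergence KL(q_theta || p) over a toy categorical model, with a constant baseline as the control variate. This is a minimal, self-contained illustration of the general technique, not the paper's GFN training procedure; all function names and the toy setup are assumptions for the example.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_score_function_grad(theta, log_p, n_samples=10_000,
                           use_baseline=True, rng=None):
    """Score-function estimate of d/dtheta KL(q_theta || p) for a
    categorical q_theta over K states:
        E_q[ grad log q(x) * (log q(x) - log p(x)) ].
    A constant baseline b subtracted from the per-sample term is a
    simple control variate: E_q[grad log q(x) * b] = 0, so it does not
    change the expectation but can reduce the estimator's variance.
    (Estimating b from the same batch, as here, is the common practice
    and introduces only a small bias.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    q = softmax(theta)
    K = len(theta)
    xs = rng.choice(K, size=n_samples, p=q)      # samples from q_theta
    f = np.log(q[xs]) - log_p[xs]                # per-sample KL term
    b = f.mean() if use_baseline else 0.0        # baseline control variate
    grad_log_q = np.eye(K)[xs] - q               # grad log q for a categorical
    return (grad_log_q * (f - b)[:, None]).mean(axis=0)

# Toy usage: uniform target over 3 states.
log_p = np.log(np.full(3, 1.0 / 3.0))
grad_at_opt = kl_score_function_grad(np.zeros(3), log_p)   # q == p here
grad_off = kl_score_function_grad(np.array([1.0, 0.0, 0.0]), log_p)
```

At the optimum (q equal to p) every per-sample term is zero, so the estimated gradient vanishes; away from it, the gradient pushes the over-weighted logit down under gradient descent. The forward KL direction would instead require samples from p (or importance weighting), which is one reason the paper considers a convex combination of both directions.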
Document type Article
Language English
Published at https://openreview.net/forum?id=AZ4GobeSLq
Other links
  • https://github.com/zmheiko/variational-perspective-on-gflownets
  • http://jmlr.org/tmlr/papers/