(See the GFlowNet tutorial and paper list here.)

I have rarely been as enthusiastic about a new research direction. We call them GFlowNets, for Generative Flow Networks. They live somewhere at the intersection of reinforcement learning, deep generative models and energy-based probabilistic modelling. They are also related to variational models and inference, and I believe they open new doors for non-parametric Bayesian modelling, generative active learning, and unsupervised or self-supervised learning of abstract representations that disentangle both the explanatory causal factors and the mechanisms that relate them.

What I find exciting is that they open so many doors, in particular for implementing the system-2 inductive biases I have been discussing in many of my papers and talks since 2017, which I argue are important for incorporating causality and for handling out-of-distribution generalization in a rational way. GFlowNets allow neural nets to model distributions over data structures like graphs (for example molecules, as in the NeurIPS paper, or explanatory and causal graphs, in current and upcoming work), to sample from them, and to estimate all kinds of probabilistic quantities (such as free energies, conditional probabilities over arbitrary subsets of variables, or partition functions) which would otherwise look intractable.
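To make that last point concrete, here is a minimal sketch of the core training idea in PyTorch: a sequential sampler (the forward policy) is trained so that complete objects end up being sampled with probability proportional to a given reward, while a learned scalar log Z simultaneously estimates the log partition function. The toy environment (fixed-length bit strings built one bit at a time), the reward, and the squared-error trajectory-balance-style objective are illustrative simplifications of my own choosing here, not the exact flow-matching formulation used in the NeurIPS paper.

```python
import torch
import torch.nn as nn

# Toy environment: build a binary string of length N one bit at a time.
# Each state has exactly one parent (we only ever append a bit), so the
# backward policy is deterministic and drops out of the objective.
N = 8

def reward(bits):
    # Hypothetical reward: prefer strings with many 1s (always > 0).
    return 1.0 + float(sum(bits))

def encode(bits):
    # Fixed-size encoding of a partial string: -1 marks unfilled positions.
    x = torch.full((N,), -1.0)
    for i, b in enumerate(bits):
        x[i] = float(b)
    return x

policy = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, 2))
log_Z = nn.Parameter(torch.zeros(()))  # learned log partition function
opt = torch.optim.Adam([*policy.parameters(), log_Z], lr=1e-3)

for step in range(5000):
    bits, log_pf = [], 0.0
    for _ in range(N):  # sample one trajectory from the forward policy
        dist = torch.distributions.Categorical(logits=policy(encode(bits)))
        a = dist.sample()
        log_pf = log_pf + dist.log_prob(a)
        bits.append(int(a))
    # Trajectory-balance-style loss: (log Z + sum log P_F - log R(x))^2
    loss = (log_Z + log_pf - torch.log(torch.tensor(reward(bits)))) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, complete strings are sampled approximately in proportion
# to R(x), and exp(log_Z) estimates the partition function sum_x R(x).
```

In this toy, every state has a single parent, so no backward policy needs to be learned; in the general case, where objects such as graphs can be constructed along many different action sequences (a DAG of states rather than a tree), one also has to fix or learn a backward policy, and estimating quantities like free energies or conditional probabilities builds on the same learned flows.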