9 November 2021

Jan-Willem van de Meent

Compositional Inference in Probabilistic Programs


Speaker: Jan-Willem van de Meent, Associate Professor (UHD), AMLab, University of Amsterdam

Abstract:

Deep probabilistic programming systems combine the principles of deep learning with the principles of probabilistic modeling. The user programmatically specifies a deep generative model (a neural mapping from latent variables to data), along with a corresponding inference model (a neural mapping from data to latent variables), which together can be trained using stochastic gradient descent with little or no supervision.
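To make the setup above concrete, here is a minimal sketch (not taken from the talk; all layer sizes, names, and the binarized-image data are illustrative assumptions): a generative network maps latent variables z to data x, an inference network maps x to a distribution over z, and both are trained jointly by stochastic gradient descent on a variational objective.

import torch
import torch.nn as nn

class Generative(nn.Module):
    """Neural mapping from latent variables z to data x (Bernoulli means)."""
    def __init__(self, z_dim=16, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Inference(nn.Module):
    """Neural mapping from data x to a Gaussian over latent variables z."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.loc = nn.Linear(256, z_dim)
        self.log_scale = nn.Linear(256, z_dim)
    def forward(self, x):
        h = self.hidden(x)
        return self.loc(h), self.log_scale(h).exp()

gen, inf = Generative(), Inference()
opt = torch.optim.Adam(list(gen.parameters()) + list(inf.parameters()), lr=1e-3)
prior = torch.distributions.Normal(0.0, 1.0)

def elbo(x):
    # Reparameterized lower bound: E_q[log p(x|z)] - KL(q(z|x) || p(z)).
    loc, scale = inf(x)
    q = torch.distributions.Normal(loc, scale)
    z = q.rsample()                                  # reparameterized sample of z
    log_px = torch.distributions.Bernoulli(probs=gen(z)).log_prob(x).sum(-1)
    kl = torch.distributions.kl_divergence(q, prior).sum(-1)
    return (log_px - kl).mean()

x = torch.bernoulli(torch.rand(32, 784))             # stand-in batch of binary data
opt.zero_grad()
loss = -elbo(x)                                      # maximize the bound = minimize its negation
loss.backward()
opt.step()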

In this talk, I will discuss recent innovations in training deep probabilistic programs by combining techniques from variational inference and importance sampling. For many years, deep generative models were typically trained by maximizing a reparameterized lower bound, as is done in variational autoencoders. However, this approach can fail to converge to a meaningful representation in more structured problems, such as tasks that involve reasoning about shared features for a small batch of inputs. I will discuss how we can overcome these difficulties, using variational methods that learn proposals for importance samplers, as well as programming abstractions for high-level specification of such methods in probabilistic programming systems.
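One familiar reference point for combining variational inference with importance sampling is an importance-weighted bound in the style of importance-weighted autoencoders: the inference network acts as a learned proposal for an importance sampler, and the bound tightens as the number of samples K grows. The sketch below is an illustration of that general idea, not the specific methods presented in the talk, and assumes networks and a prior shaped like those in the previous sketch.

import math
import torch

def importance_weighted_bound(x, inference, generative, prior, K=8):
    # Inference network as a learned proposal q(z | x) for an importance sampler.
    loc, scale = inference(x)
    q = torch.distributions.Normal(loc, scale)
    z = q.rsample((K,))                        # K reparameterized samples: (K, batch, z_dim)
    log_q = q.log_prob(z).sum(-1)              # log q(z | x)
    log_prior = prior.log_prob(z).sum(-1)      # log p(z)
    log_lik = torch.distributions.Bernoulli(
        probs=generative(z)).log_prob(x).sum(-1)   # log p(x | z), x assumed binary
    log_w = log_lik + log_prior - log_q        # log importance weights
    # Averaging the weights inside the log gives a tighter bound than the single-sample ELBO.
    return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()

# usage with the networks from the previous sketch:
# loss = -importance_weighted_bound(x, inf, gen, prior, K=8)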


COLLOQUIUM