
Perspectives on Probabilistic Graphical Models

Time: Mon 2020-11-16 13.30

Location: F3, Lindstedtsvägen 26, Stockholm (English)

Subject area: Electrical Engineering

Doctoral student: Dong Liu, Teknisk informationsvetenskap

Opponent: Associate Professor Harri Lähdesmäki, Aalto University, Espoo, Finland

Supervisor: Ragnar Thobaben, Teknisk informationsvetenskap



Probabilistic graphical models provide a natural framework for representing complex systems and offer a straightforward abstraction for the interactions within them. Reasoning with probabilistic graphical models allows us to answer inference queries under uncertainty within the framework of probability theory. Typical inference tasks are to compute marginal probabilities, conditional probabilities of states of a system, or the partition function of the underlying distribution of a Markov random field (undirected graphical model). Critically, the success of graphical models in practice relies largely on efficient approximate inference methods that deliver fast and accurate results. Closely related to inference in graphical models, another fundamental problem is how to determine the parameters of a candidate graphical model from empirical observations, i.e., parameter learning. The two essential topics (inference and learning) interact with and facilitate each other: the learning of a graphical model usually uses an inference method as a subroutine, while the learned graphical model is then employed for inference tasks in the presence of new evidence.
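As a toy illustration of the inference tasks mentioned above, the partition function and a marginal of a small Markov random field can be computed by brute-force enumeration; the chain structure, the potentials, and the coupling strength `J` below are illustrative assumptions, not taken from the dissertation:

```python
import itertools
import math

# Hypothetical pairwise Markov random field on a binary chain x1 - x2 - x3,
# with states in {-1, +1} and pairwise potentials phi(xi, xj) = exp(J * xi * xj).
J = 0.5
states = [-1, 1]
edges = [(0, 1), (1, 2)]

def unnormalized(x):
    # Product of pairwise potentials along the chain (unnormalized weight).
    return math.exp(sum(J * x[i] * x[j] for i, j in edges))

# Partition function Z: sum of unnormalized weights over all joint states.
Z = sum(unnormalized(x) for x in itertools.product(states, repeat=3))

# Marginal probability P(x2 = +1): sum over consistent joint states, divided by Z.
p_x2_pos = sum(unnormalized(x)
               for x in itertools.product(states, repeat=3)
               if x[1] == 1) / Z

print(round(Z, 4), round(p_x2_pos, 4))  # by symmetry, P(x2 = +1) = 0.5
```

Exact enumeration like this scales exponentially in the number of variables, which is precisely why the approximate inference methods discussed in the abstract matter.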

In this dissertation, we develop new algorithms and models for generic inference in Markov random fields. We first present an alternative view of belief propagation as divergence minimization, in contrast to the usual intuition of free energy minimization. This alternative view leads to a variant of belief propagation that turns out to generalize the standard algorithm. Beyond the intuition guiding its development, we provide insights into the convergence behavior of the algorithm for binary state spaces. As a step beyond approximate inference with message passing, we develop a region-based energy network model that performs generic inference via region-based free energy minimization, turning inference in Markov random fields into an optimization problem. This model combines our essential understanding of inference with modern, computationally efficient neural network models.
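For readers unfamiliar with the message-passing baseline, a minimal sum-product belief propagation sketch on the same kind of binary chain (a tree, where belief propagation is exact) might look as follows; the potentials and coupling `J` are again illustrative assumptions, and this is the standard algorithm, not the dissertation's variant:

```python
import math

# Binary chain MRF x1 - x2 - x3 with pairwise potentials psi(xi, xj) = exp(J * xi * xj).
J = 0.5
states = [-1, 1]

def psi(xi, xj):
    # Pairwise potential between neighboring variables.
    return math.exp(J * xi * xj)

# Sum-product messages from the leaf nodes toward the middle node x2:
# m_{1->2}(x2) = sum_{x1} psi(x1, x2), and similarly m_{3->2}(x2).
m12 = {x2: sum(psi(x1, x2) for x1 in states) for x2 in states}
m32 = {x2: sum(psi(x3, x2) for x3 in states) for x2 in states}

# The belief at x2 is the normalized product of its incoming messages.
b2 = {x2: m12[x2] * m32[x2] for x2 in states}
norm = sum(b2.values())
belief = {x2: b2[x2] / norm for x2 in states}
print(belief)  # on a tree this equals the exact marginal; here 0.5 for each state
```

On loopy graphs the same updates are iterated to a fixed point and become approximate, which is where questions about convergence behavior, like those studied in the dissertation, arise.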

The latter part of the dissertation focuses on parameter learning for probabilistic graphical models. It starts with a discussion of parameter learning for undirected graphical models and explains the role of an (approximate) inference method in this routine. For directed graphical models, new finite mixture models incorporating normalizing flows in neural network implementations are presented for more expressive and flexible modeling. Because of the hidden (or latent) variables, the developed generic models are learned within the expectation-maximization framework. The expressive modeling and learning methods are further extended to dynamic systems via a reduced dynamic Bayesian network, i.e., a hidden Markov model. The dissertation closes with a chapter on likelihood-free learning for a class of directed graphical models, where (directed) generative models induce implicit probability distributions and are learned via the optimal transport distance.
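To make the latent-variable learning step concrete, here is a minimal expectation-maximization sketch for a two-component 1-D Gaussian mixture; plain Gaussian components stand in for the dissertation's flow-based components purely to keep the example self-contained, and the data and initial values are synthetic assumptions:

```python
import math
import random

# Synthetic data from two well-separated Gaussians (illustrative only).
random.seed(0)
data = ([random.gauss(-2.0, 1.0) for _ in range(200)]
        + [random.gauss(3.0, 1.0) for _ in range(200)])

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Initial mixture weights, means, and standard deviations.
pi = [0.5, 0.5]
mu = [-1.0, 1.0]
sigma = [1.0, 1.0]

for _ in range(50):
    # E-step: posterior responsibility of each component for each data point.
    resp = []
    for x in data:
        w = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate parameters from the responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)

print([round(m, 1) for m in sorted(mu)])  # roughly [-2.0, 3.0]
```

The E-step here plays the role of the inference subroutine mentioned earlier: it computes posteriors over the latent component assignments, which the M-step then uses to update the model parameters.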