An Information-Theoretic Approach to Generalization Theory
Time: Tue 2024-04-23 13.00
Location: F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm
Language: English
Subject area: Electrical Engineering
Doctoral student: Borja Rodríguez Gálvez, Teknisk informationsvetenskap
Opponent: Associate Professor Benjamin Guedj, University College London
Supervisor: Professor Mikael Skoglund, Teknisk informationsvetenskap; Professor Ragnar Thobaben, Teknisk informationsvetenskap
QC 20240402
Abstract
In this thesis, we investigate the in-distribution generalization of machine learning algorithms, focusing on establishing rigorous upper bounds on the generalization error. We depart from traditional complexity-based approaches by introducing and analyzing information-theoretic bounds that quantify the dependence between a learning algorithm and the training data.
We consider two categories of generalization guarantees:
- Guarantees in expectation. These bounds measure performance in the average case. Here, the dependence between the algorithm and the data is often captured by the mutual information or other information measures based on f-divergences. While these measures offer an intuitive interpretation, they might overlook the geometry of the algorithm's hypothesis class. To address this limitation, we introduce bounds using the Wasserstein distance, which incorporates geometric considerations at the cost of being mathematically more involved. Furthermore, we propose a structured, systematic method to derive bounds capturing the dependence between the algorithm and an individual datum, and between the algorithm and subsets of the training data, conditioned on knowing the rest of the data. These types of bounds provide deeper insights, as we demonstrate by applying them to derive generalization error bounds for the stochastic gradient Langevin dynamics algorithm.
- PAC-Bayesian guarantees. These bounds measure performance with high probability. Here, the dependence between the algorithm and the data is often measured by the relative entropy. We establish connections between the Seeger--Langford and Catoni bounds, revealing that the former is optimized by the Gibbs posterior. Additionally, we introduce novel, tighter bounds for various types of loss functions, including those with a bounded range, cumulant generating function, moment, or variance. To achieve this, we introduce a new technique to optimize parameters in probabilistic statements.
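To fix ideas, a canonical example of each kind of guarantee from the prior literature (stated here as standard background, not as the thesis's sharpest results) can be written as follows. The first is the mutual-information bound of Xu and Raginsky for losses that are σ-subgaussian under the data distribution; the second is the Seeger--Langford PAC-Bayesian bound for losses with range [0, 1].

```latex
% Guarantee in expectation (Xu--Raginsky): W is the hypothesis returned by
% the algorithm, S the n training samples, I(W;S) their mutual information,
% and gen(W,S) the generalization error (population minus empirical risk).
\mathbb{E}\bigl[\operatorname{gen}(W, S)\bigr]
  \le \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}

% PAC-Bayesian guarantee (Seeger--Langford): with probability at least
% 1 - \delta over S, simultaneously for all "posteriors" \rho on the
% hypothesis class and a fixed data-free "prior" \pi,
\operatorname{kl}\Bigl(\mathbb{E}_{\rho}\bigl[\hat{R}_{S}(h)\bigr]
  \,\Big\|\, \mathbb{E}_{\rho}\bigl[R(h)\bigr]\Bigr)
  \le \frac{\operatorname{KL}(\rho \,\|\, \pi) + \log \frac{2\sqrt{n}}{\delta}}{n}
% where \hat{R}_S is the empirical risk, R the population risk, and
% kl(q||p) the relative entropy between Bernoulli distributions q and p.
```

The first inequality illustrates how the dependence between algorithm and data, measured by I(W; S), directly controls average-case generalization; the second is the bound for which, as noted above, the optimizing posterior is the Gibbs posterior.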
We also study the limitations of these approaches. We present a counterexample where most of the existing (relative entropy-based) information-theoretic bounds fail, but where traditional approaches do not. Finally, we explore the relationship between privacy and generalization. We show that algorithms with a bounded maximal leakage generalize. Moreover, for discrete data, we derive new bounds for differentially private algorithms that vanish as the number of samples increases, thus guaranteeing their generalization even with a constant privacy parameter. This contrasts with previous bounds in the literature, which require the privacy parameter to decrease with the number of samples to ensure generalization.