Program

Wednesday, April 2, 2025

All talks take place in Room 1.A.12, Puerta de Toledo Campus, Universidad Carlos III de Madrid.

9:30  Pick up at Hotel Catalonia Atocha. Walk to the Puerta de Toledo Campus of the Universidad Carlos III de Madrid.
10:00 – 10:55  Talk by Albert Guillén i Fàbregas (University of Cambridge).

Title: Reliable Communication over Gilbert-Elliott-type Channels
10:55 – 11:10  Coffee Break
11:10 – 12:05  Talk by Hamdi Joudeh (Eindhoven University of Technology).

Title: Error exponents for joint communication and sensing with list estimation

Abstract: In this talk, I will discuss a basic model for joint communication and sensing, where a transmitter simultaneously communicates with a receiver over a state-dependent discrete memoryless channel and senses the channel state through a generalized feedback channel. First, I will discuss a list estimation approach to the sensing problem and establish a suitable notion of sensing capacity. Then, I will focus on reliability and discuss error exponents for sensing and communication.
12:05 – 12:20  Coffee Break
12:20 – 13:15  Talk by Samir Perlaza (INRIA).

Title: The Generalization Error of Machine Learning Algorithms

Abstract: In this talk, the method of gaps, a technique for deriving closed-form expressions for the generalization error of machine learning algorithms in terms of information measures, is introduced. The method relies on two central observations: (a) the generalization error is an average of the variation of the expected empirical risk with respect to changes in the probability measure (used for the expectation); and (b) these variations, also referred to as gaps, exhibit closed-form expressions in terms of information measures. The expectation of the empirical risk can be taken either with respect to a measure on the models (with a fixed dataset) or with respect to a measure on the datasets (with a fixed model), which results in two variants of the method of gaps. The first variant, which focuses on the gaps of the expected empirical risk with respect to a measure on the models, appears to be the most general, as no assumptions are made on the distribution of the datasets. The second variant develops under the assumption that datasets consist of independent and identically distributed data points. All existing exact expressions for the generalization error of machine learning algorithms can be obtained with the proposed method. The method also yields numerous new exact expressions, which improve the understanding of the generalization error, establish connections with other areas of statistics (e.g., hypothesis testing), and might guide algorithm design.

Manuscript:
https://arxiv.org/pdf/2411.12030
13:30 – 15:00  Lunch at Bipolar Casa de Comidas 2.0.
15:30 – 16:25  Talk by Alfonso Martinez (UPF).

Title: Correct-decoding exponent of bad codes under Minimum-Likelihood decoding

Abstract: This talk deals with the channel unreliability function for discrete memoryless channels at rate R. This function describes the exponential rate of decay of the correct-decoding probability for the worst possible codes under Minimum Likelihood decoding. The channel unreliability function seems to be characterized by an optimization problem involving Gallager’s channel-coding function E0(ρ) with ρ < −1. This representation also gives operational meaning to Rényi divergences of negative order.
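For context, Gallager's channel-coding function mentioned above is commonly written, for a discrete memoryless channel W and input distribution P, as follows (this is the standard textbook definition, not taken from the talk; the talk considers the unusual range ρ < −1, where the exponent 1/(1+ρ) is negative):

```latex
% Gallager's E_0 function for a DMC W with input distribution P.
% The talk considers rho < -1, where 1/(1+rho) < 0, which connects
% to Renyi divergences of negative order.
E_0(\rho, P) = -\log \sum_{y} \left( \sum_{x} P(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \right)^{1+\rho}
```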
20:30 –  Dinner at Los Porfiados.

Thursday, April 3, 2025

All talks take place in Room 1.A.12, Puerta de Toledo Campus, Universidad Carlos III de Madrid.

9:30  Pick up at Hotel Catalonia Atocha. Walk to the Puerta de Toledo Campus of the Universidad Carlos III de Madrid.
10:00 – 10:55  Talk by Iñaki Esnaola (University of Sheffield).

Title: Out-of-Distribution Statistical Learning and f-Divergence Regularization

Abstract: We consider the statistical learning setting in which the test data-generating source differs from the training data-generating source; that is, the learning algorithm aims to achieve out-of-distribution generalization. To study this setting, we introduce the worst-case data-generating (WCDG) probability measure as a tool for characterizing the generalization capabilities of learning algorithms. We present closed-form expressions for the sensitivity of the expected loss for a fixed model. These results lead to a novel expression for the generalization error of arbitrary machine learning algorithms. Equipped with these tools, we then explore f-divergence regularization as a means to provide robust generalization capabilities and show that any f-divergence regularization is equivalent to a different f-divergence regularization with an appropriate transformation of the empirical risk function.
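For readers less familiar with the quantity underlying the regularizer: an f-divergence between discrete distributions P and Q is D_f(P‖Q) = Σ_x Q(x) f(P(x)/Q(x)) for a convex f with f(1) = 0. A minimal sketch (my own illustration, not code from the talk) showing that different choices of f recover familiar divergences:

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x Q(x) * f(P(x)/Q(x)) for discrete distributions
    given as dicts; assumes q(x) > 0 on the support of p."""
    return sum(q[x] * f(p[x] / q[x]) for x in q if q[x] > 0)

kl_f = lambda t: t * math.log(t) if t > 0 else 0.0  # f(t) = t log t recovers KL divergence
tv_f = lambda t: 0.5 * abs(t - 1)                   # f(t) = |t - 1| / 2 gives total variation

P = {0: 0.7, 1: 0.3}
Q = {0: 0.5, 1: 0.5}
print(f_divergence(P, Q, kl_f))  # KL(P || Q) in nats
print(f_divergence(P, Q, tv_f))  # total variation distance
```

The equivalence result in the abstract says that swapping one such f for another can be absorbed into a transformation of the empirical risk.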
10:55 – 11:10  Coffee Break
11:10 – 12:05  Talk by Jossy Sayir (University of Cambridge).

Title: Slepian-Wolf with Feedback: Bounds and Methods
12:05 – 12:20  Coffee Break
12:20 – 13:15  Talk by Stefan Moser (ETH Zurich).

Title: MISO Wireless Optical Communication under a First-Moment and a Peak-Power Constraint

Abstract: We study the multiple-input single-output free-space optical intensity channel with signal-independent additive Gaussian noise and subject to both an average- and a peak-intensity constraint. We present closed-form expressions for the asymptotic high-signal-to-noise-ratio (high-SNR) capacity and for the corresponding capacity-achieving input distribution. Moreover, we propose several practical modulation schemes that approach the capacity over a wide range of practical SNR values.
13:30 – 15:00  Lunch at Bipolar Casa de Comidas 2.0.
15:30 – 16:25  Talk by Ramji Venkataramanan (University of Cambridge).

Title: Efficient communication over many-user multiple access channels

Abstract: We consider communication over the Gaussian multiple-access channel in the “many-user” regime where the number of users grows linearly with the code length. Recent works have studied optimal tradeoffs in this regime, e.g., between energy-per-bit and achievable user density for a fixed target error rate. In this talk, we will discuss efficient coding schemes based on random linear models and Approximate Message Passing (AMP) decoding. These schemes provably approach the optimal tradeoff for small per-user payloads. We will show how the AMP decoder can be adapted to take advantage of an outer code to tackle large user payloads. Time permitting, I will also discuss bounds and efficient schemes for multiple-access with random user activity, where the receiver does not know the identities of the active users.

This is joint work with Shirley Liu, Pablo Pascual Cobo, and Kuan Hsieh.
20:30 –  Dinner at Ticuí.

Friday, April 4, 2025

All talks take place in Room 1.A.12, Puerta de Toledo Campus, Universidad Carlos III de Madrid.

9:30  Pick up at Hotel Catalonia Atocha. Walk to the Puerta de Toledo Campus of the Universidad Carlos III de Madrid.
10:00 – 10:55  Talk by Sidharth (Sid) Jaggi (University of Bristol).

Title: Zero-error superposition codes
Sub-title: On trying to beat the Gilbert-Varshamov bound (and failing) and what we’ve learned along the way

Abstract: The classic Gilbert-Varshamov (GV) bound from the 1950s gives packings of radius-nδ Hamming balls in q-ary Hamming space of dimension n with rate 1 − H_q(δ); for all q < 49 this is essentially the densest known. In this talk I'll outline a deep rabbit-hole I've gone down trying to beat this bound. I'll show a (to my knowledge) heretofore unanalysed class of random codes (superposition codes) that, when combined with careful expurgation, achieves the GV bound for a wide range of parameters (but never exceeds it!). There's still some hope: if a certain class of information inequalities holds, then these codes can be refined to beat the GV bound. This is an invitation for folks to come with me further down the rabbit-hole. (The material in this talk was submitted to ISIT 2025, and is joint work with Chenyu Wang, Ayesha Irfan, Yihan Zhang, and Michael Langberg.)
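As a quick numerical illustration of the rate in the abstract (my own sketch, not material from the talk): the GV rate is 1 − H_q(δ), where H_q is the q-ary entropy function.

```python
import math

def h_q(delta: float, q: int) -> float:
    """q-ary entropy: H_q(d) = d*log_q(q-1) - d*log_q(d) - (1-d)*log_q(1-d)."""
    if delta == 0.0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_rate(delta: float, q: int) -> float:
    """Rate guaranteed by the Gilbert-Varshamov bound at relative distance delta."""
    return max(0.0, 1.0 - h_q(delta, q))

# Binary example: at relative distance 0.11, the GV bound guarantees rate ~0.5.
print(gv_rate(0.11, 2))
```

For q ≥ 49, algebraic-geometry codes are known to exceed this rate, which is why the abstract restricts the "densest known" claim to q < 49.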
10:55 – 11:10  Coffee Break
11:10 – 12:05  Talk by Yiannis Kontoyiannis (University of Cambridge).

Title: Sumsets, doubling, and capacity bounds

Abstract: We discuss a number of recent information-theoretic inequalities on the entropy and mutual information of sums of random variables, their origins in additive combinatorics, and their implications for channel capacity. Joint work with Mokshay Madiman (Delaware) and Lampros Gavalakis (Cambridge).
12:05 – 12:20  Coffee Break
12:20 – 13:15  Talk by Tobias Koch (Universidad Carlos III de Madrid).

Title: The Divergence Test for Universal Hypothesis Testing: Beating the Generalized Likelihood Ratio Test

Abstract: Universal hypothesis testing refers to the binary statistical hypothesis testing problem where n independent and identically distributed random variables are either distributed according to the null hypothesis P or the alternative hypothesis Q, and only P is known. The generalized likelihood ratio test (GLRT) for this problem is the Hoeffding test, which accepts the null hypothesis if the Kullback-Leibler (KL) divergence between the empirical distribution of the observed random variables and the null hypothesis is below some threshold. In this work, we propose the divergence test, which replaces the KL divergence in the Hoeffding test by an arbitrary divergence. For this test, we derive a second-order asymptotic expansion for the probability of a false negative when the probability of a false positive is bounded by some fixed value (Stein's regime). We demonstrate that when the divergence belongs to the class of invariant divergences (which includes the Rényi divergence and the f-divergence), the divergence test has the same second-order performance as the Hoeffding test. In contrast, when the divergence is non-invariant, the divergence test may outperform the Hoeffding test for some alternative hypotheses. In particular, we observe this phenomenon when the divergence is the squared Mahalanobis distance. We conclude that a hypothesis test that accepts the null hypothesis if the squared Mahalanobis distance between the empirical distribution of the observed random variables and the null hypothesis is below some threshold may outperform the GLRT. Potentially, this behavior could be exploited by a composite hypothesis test with partial knowledge of the alternative hypothesis by tailoring the squared Mahalanobis distance to the set of possible alternative hypotheses.

This is joint work with K.V. Harsha (Gandhi Institute of Technology and Management, Hyderabad, India) and Jithin Ravi (Indian Institute of Technology Kharagpur, India).
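A minimal sketch of the Hoeffding test described in the abstract (my own illustration with an arbitrarily chosen threshold, not code from the talk): form the empirical distribution of the samples and accept the null hypothesis when its KL divergence from P falls below a threshold.

```python
import math
from collections import Counter

def empirical_distribution(samples, alphabet):
    """Empirical distribution (type) of the observed samples."""
    counts = Counter(samples)
    n = len(samples)
    return {a: counts.get(a, 0) / n for a in alphabet}

def kl_divergence(p, q):
    """KL divergence D(p || q) in nats; assumes q(a) > 0 wherever p(a) > 0."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p if p[a] > 0)

def hoeffding_test(samples, null_dist, threshold):
    """Accept the null hypothesis P iff D(empirical || P) < threshold."""
    emp = empirical_distribution(samples, null_dist.keys())
    return kl_divergence(emp, null_dist) < threshold

# Example: fair-coin null hypothesis; the threshold 0.1 is arbitrary.
P = {"H": 0.5, "T": 0.5}
print(hoeffding_test(["H", "T", "H", "H", "T", "T"], P, threshold=0.1))
```

The divergence test proposed in the talk replaces `kl_divergence` here with an arbitrary divergence, e.g., the squared Mahalanobis distance.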
13:30 – 15:00  Lunch at Restaurante Mandarosso.