Drmota, M., Jacquet, P., Wu, C., & Szpankowski, W. (2024). Minimax Regret with Unbounded Weights. In 2024 IEEE International Symposium on Information Theory (ISIT) (pp. 2305–2310). IEEE. https://doi.org/10.1109/ISIT57864.2024.10619590
2024 IEEE International Symposium on Information Theory (ISIT)
Event date:
7-Jul-2024 - 12-Jul-2024
-
Event place:
Athens, Greece
-
Number of Pages:
6
-
Publisher:
IEEE
-
Peer reviewed:
Yes
-
Keywords:
Minimax regret; Unbounded weights
Abstract:
In online learning, a learner receives data in rounds 1 ≤ t ≤ T and at each round predicts a label, which is then compared to the true label, resulting in a loss. The total loss over T rounds, compared to the loss of the best expert from a class of experts, is called the regret. This paper focuses on logarithmic loss over a class of experts represented by a probability distribution p parameterized by a d-dimensional weight vector w. Unlike previous work, which studied bounded weights, we assume that the norm of the weights can be unbounded. This unboundedness poses a challenging problem that leads to unexpected results. For such a class of weighted experts we analyze the (fixed design) minimax regret for the best predictor and worst label sequence. This minimax regret turns out to be a universal lower bound for most regrets analyzed in the literature. For bounded weights it is known that the minimax regret can grow like (d/2) log(T R²/d), where R is an upper bound on the weight norm. In contrast, we show in this paper that for unbounded norm (R = ∞) the minimax regret is asymptotically (d−1) log T for a logistic-like expert class, a result we also extend to R growing with T. We prove our findings by introducing so-called splittable label sequences, which partition the weight space into regions with maximum sequence probability equal to 1. Finally, for a general class of monotone experts we present an upper bound of 2d log T on the regret.
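A note for orientation (our sketch, using standard notation not taken from the record): under logarithmic loss with a fixed design x^T and binary labels, the fixed-design minimax regret described above admits the classical Shtarkov-sum form

\[
  r_T^*(x^T)
  \;=\; \min_{\hat p}\,\max_{y^T \in \{0,1\}^T}
        \log\frac{\sup_{w} p_w(y^T \mid x^T)}{\hat p(y^T \mid x^T)}
  \;=\; \log \sum_{y^T \in \{0,1\}^T} \sup_{w} p_w(y^T \mid x^T).
\]

In this form the role of the splittable label sequences is concrete: any label sequence whose maximum sequence probability sup_w p_w(y^T | x^T) equals 1 contributes a full unit to the sum, so the regret is at least the logarithm of the number of such sequences.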