Author
Blanchet, J
Xu, R
Zhou, Z
Last updated
2020-05-22T14:40:44.63+01:00
Abstract
In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards are available to the decision-maker only after some delay, which is unknown and stochastic. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds and the other based on Thompson sampling. We describe how these two algorithms should be modified to handle delays and give regret characterizations for both. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms can be made robust to delays, thereby helping to clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.
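The sketch below illustrates the delayed-feedback mechanism the abstract describes for the UCB-based variant: rewards are generated when an arm is pulled but only folded into the estimator once their stochastic delay has elapsed. This is a minimal illustration, not the paper's exact algorithm; the linear (rather than generalized linear) reward model, the geometric delay distribution, and all constants are assumptions made for the example.

```python
import numpy as np

# Minimal sketch: UCB-style contextual bandit with stochastically
# delayed rewards. The linear reward model, geometric delays, and
# all constants are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)
d, K, T, lam, alpha = 5, 10, 2000, 1.0, 1.0
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown true parameter

V = lam * np.eye(d)   # regularized design matrix (arrived rewards only)
b = np.zeros(d)       # running sum of context * reward for arrived rewards
pending = []          # (arrival_time, context, reward) awaiting delivery

for t in range(T):
    # Fold in rewards whose random delay has elapsed by round t.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, x, r in arrived:
        V += np.outer(x, x)
        b += r * x

    theta_hat = np.linalg.solve(V, b)
    V_inv = np.linalg.inv(V)

    # Draw K candidate contexts; pick the arm maximizing the UCB index.
    contexts = rng.normal(size=(K, d)) / np.sqrt(d)
    widths = np.sqrt(np.einsum("kd,de,ke->k", contexts, V_inv, contexts))
    x = contexts[np.argmax(contexts @ theta_hat + alpha * widths)]

    # Reward is realized now but only revealed after a stochastic delay.
    reward = x @ theta_star + 0.1 * rng.normal()
    delay = rng.geometric(0.1)   # unknown delay, assumed geometric here
    pending.append((t + delay, x, reward))
```

The key point the example makes concrete is that the confidence set is built only from rewards that have actually arrived, so the estimator simply lags behind the action sequence rather than requiring any structural change to the UCB index.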
Symplectic ID
1106095
Download URL
https://renyuanxu.github.io/research.html