Delay-Adaptive Learning in Generalized Linear Contextual Bandits

Author: 

Blanchet, J
Xu, R
Zhou, Z

Last Updated: 

2020-05-22T14:40:44.63+01:00

abstract: 

In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards become available to the decision-maker only after some delay, which is unknown and stochastic. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds, and the other based on Thompson sampling. We describe how these two algorithms should be modified to handle delays and give regret characterizations for both. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms can be made robust to delays, thereby helping clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.
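To illustrate the delayed-feedback setting the abstract describes, the following is a minimal sketch of a linear-reward UCB contextual bandit (a special case of the generalized linear model) that defers each parameter update until that reward's stochastic delay has elapsed. This is an illustrative toy, not the paper's algorithm; the delay distribution, exploration width `alpha`, and all problem sizes are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 3, 2000                       # context dim, arms per round, horizon
theta_true = rng.normal(size=d)
theta_true /= np.linalg.norm(theta_true)   # unknown parameter (unit norm)

# Ridge-regression statistics, updated only when a delayed reward arrives.
A = np.eye(d)        # regularized Gram matrix
b = np.zeros(d)      # response vector
alpha = 1.0          # exploration width (assumed, not tuned)
pending = []         # (arrival_round, context, reward) not yet revealed

for t in range(T):
    # Incorporate rewards whose delay has elapsed by round t.
    ready = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, x, r in ready:
        A += np.outer(x, x)
        b += r * x

    theta_hat = np.linalg.solve(A, b)
    A_inv = np.linalg.inv(A)

    # Draw this round's contexts; pick the arm maximizing the UCB index.
    X = rng.normal(size=(K, d))
    ucb = X @ theta_hat + alpha * np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
    x = X[np.argmax(ucb)]

    # Reward is generated now but only revealed after a stochastic delay
    # whose distribution is unknown to the learner.
    r = x @ theta_true + 0.1 * rng.normal()
    delay = rng.geometric(0.2)
    pending.append((t + delay, x, r))
```

Despite updating only on delayed observations, the estimate `theta_hat` still converges toward `theta_true`; the delays slow information accrual but do not bias the ridge-regression update, which is the intuition behind the robustness results summarized above.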

Symplectic id: 

1106095

Download URL: 

Favourite: 

Submitted to ORA: 

Not Submitted

Publication Type: 

59