Multi-player Multi-armed Bandits: Decentralized Learning with IID Rewards Conference Paper

abstract

  • We consider the decentralized multi-armed bandit problem with distinct arms for each player. Each player can pick one arm at each time instant and gets a random reward drawn from an unknown distribution with an unknown mean. The arms give different rewards to different players. If more than one player selects the same arm, everyone gets a zero reward. There is no dedicated control channel for communication or coordination among the users. We propose an online learning algorithm called dUCB4 which achieves near-O(log² T) regret. The motivation comes from opportunistic spectrum access by multiple secondary users in cognitive radio networks, wherein they must pick among various wireless channels that look different to different users. © 2012 IEEE.
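
  The following is a minimal, illustrative sketch of the reward model described in the abstract, under assumptions not stated in this record: player-specific Bernoulli arm means, a collision (more than one player choosing the same arm) yielding zero reward, and each player independently running a generic single-player UCB1 index. This is not the paper's dUCB4 algorithm; the names N, K, T, mu, and ucb_index are hypothetical placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical instance (not from the paper): N players, K arms, and
    # player-specific Bernoulli mean rewards mu[i, k]. A collision gives
    # zero reward to every colliding player, as in the abstract above.
    N, K, T = 2, 4, 5000
    mu = rng.uniform(0.1, 0.9, size=(N, K))

    counts = np.zeros((N, K))   # times player i has pulled arm k
    means = np.zeros((N, K))    # empirical mean reward of player i on arm k

    def ucb_index(i, t):
        # Classic UCB1 index for player i at round t; a stand-in for the
        # paper's dUCB4 index, whose exact form is not given in this record.
        bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts[i], 1.0))
        idx = means[i] + bonus
        idx[counts[i] == 0] = np.inf    # sample every arm at least once
        return idx

    for t in range(T):
        # Each player picks its highest-index arm independently; the tiny
        # random perturbation only breaks ties and does not coordinate players.
        choices = [int(np.argmax(ucb_index(i, t) + 1e-9 * rng.random(K)))
                   for i in range(N)]
        for i, k in enumerate(choices):
            collided = choices.count(k) > 1
            reward = 0.0 if collided else float(rng.random() < mu[i, k])
            counts[i, k] += 1
            means[i, k] += (reward - means[i, k]) / counts[i, k]

    print("empirical means:\n", np.round(means, 2))
    print("true means:\n", np.round(mu, 2))

  Note that independent UCB1 players have no collision-avoidance mechanism and can repeatedly contend for the same arm; a decentralized coordination scheme such as the paper's dUCB4 is what achieves the stated near-O(log² T) regret.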

name of conference

  • 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)

published proceedings

  • 2012 50TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON)

author list (cited authors)

  • Kalathil, D., Nayyar, N., & Jain, R.

citation count

  • 3

complete list of authors

  • Kalathil, Dileep||Nayyar, Naumaan||Jain, Rahul

publication date

  • January 2012