Learning to Coordinate in Social Networks

abstract

  • We study a dynamic game in which short-run players repeatedly play a symmetric, strictly supermodular game whose payoffs depend on a fixed unknown state of nature. Each short-run player inherits the beliefs of his immediate predecessor in addition to observing the actions of the players in his social neighborhood in the previous stage. Because of the strategic complementarity between their actions, players have an incentive to coordinate with others and learn from them. We show that in any Markov Bayesian equilibrium of the game, players eventually reach consensus in their actions. They also asymptotically receive similar payoffs despite initial differences in their access to information. We further show that, if the players' payoffs can be represented by a quadratic function, then the private observations are optimally aggregated in the limit for generic specifications of the game. Therefore, players asymptotically coordinate on choosing the best action given the aggregate information available throughout the network. We provide extensions of our results to the case of changing networks and endogenous private signals.
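
  The abstract describes agents who repeatedly trade off matching the unknown state against matching their neighbors' actions, and who eventually reach consensus on an action that aggregates the network's private information. The Python sketch below is a purely illustrative simulation of that kind of dynamic, not the Markov Bayesian equilibrium analyzed in the paper: the ring network, the coordination weight `lam`, the Gaussian signals, and the naive averaging belief update are all assumptions made for illustration.

  ```python
  import numpy as np

  # Illustrative sketch (not the paper's equilibrium computation): each agent
  # myopically best responds to a quadratic payoff
  #   u_i = -(1 - lam) * (a_i - E_i[theta])**2 - lam * (a_i - m_i)**2,
  # where E_i[theta] is the agent's current point estimate of the state and
  # m_i is the average of its neighbors' previous actions.
  rng = np.random.default_rng(0)
  n, T, lam = 6, 50, 0.5
  theta = 1.0                                      # fixed unknown state
  signals = theta + rng.normal(0.0, 1.0, size=n)   # one private signal per agent

  # Ring network: each agent observes its two neighbors' actions.
  A = np.zeros((n, n))
  for i in range(n):
      A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1

  estimates = signals.copy()   # each agent's point estimate of theta
  actions = estimates.copy()   # first-stage actions based on own signal only

  for t in range(T):
      neighbor_avg = A @ actions / A.sum(axis=1)
      # Naive stand-in for belief inheritance: blend own estimate with
      # the information revealed by neighbors' actions.
      estimates = 0.5 * estimates + 0.5 * neighbor_avg
      # Best response to the quadratic payoff above.
      actions = (1 - lam) * estimates + lam * neighbor_avg

  print("spread of final actions:", actions.max() - actions.min())
  print("gap to average of all private signals:", abs(actions.mean() - signals.mean()))
  ```

  In this toy run the action spread shrinks toward zero and the common action approaches the average of the private signals, mirroring (informally) the consensus and information-aggregation results stated in the abstract.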

published proceedings

  • OPERATIONS RESEARCH

altmetric score

  • 0.75

author list (cited authors)

  • Molavi, P., Eksin, C., Ribeiro, A., & Jadbabaie, A.

citation count

  • 8

complete list of authors

  • Molavi, Pooya; Eksin, Ceyhun; Ribeiro, Alejandro; Jadbabaie, Ali

publication date

  • June 2016