Learning Without Recall by Random Walks on Directed Graphs
Conference Paper

Overview
abstract

© 2015 IEEE. We consider a network of agents that aim to learn some unknown state of the world using private observations and exchange of beliefs. At each time, agents observe private signals generated based on the true unknown state. Each agent might not be able to distinguish the true state based only on her private observations. This occurs when some other states are observationally equivalent to the true state from the agent's perspective. To overcome this shortcoming, agents must communicate with each other to benefit from local observations. We propose a model where each agent selects one of her neighbors randomly at each time. Then, she refines her opinion using her private signal and the prior of that particular neighbor. The proposed rule can be thought of as a Bayesian agent who cannot recall the priors based on which other agents make inferences. This learning without recall approach preserves some aspects of the Bayesian inference while being computationally tractable. By establishing a correspondence with a random walk on the network graph, we prove that under the described protocol, agents learn the truth exponentially fast in the almost sure sense. The asymptotic rate is expressed as the sum of the relative entropies between the signal structures of every agent weighted by the stationary distribution of the random walk.
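To make the protocol concrete, here is a minimal simulation sketch of the update rule described above, under illustrative assumptions not taken from the paper: three agents on a directed cycle, two possible states, and Bernoulli private signals. Agent 0's signal structure is chosen so the two states are observationally equivalent to her alone, yet the group still learns. All numbers (graph, signal probabilities, horizon) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 agents, 2 candidate states, TRUE_STATE is state 0.
# signal_prob[i, k] = probability that agent i observes signal "1" when
# the world state is k. Agent 0's two columns are identical, so she cannot
# distinguish the states from her own signals alone.
TRUE_STATE = 0
signal_prob = np.array([
    [0.5, 0.5],   # agent 0: states observationally equivalent
    [0.8, 0.3],   # agent 1
    [0.4, 0.7],   # agent 2
])
neighbors = {0: [2], 1: [0], 2: [1]}  # directed cycle

beliefs = np.full((3, 2), 0.5)  # uniform priors over the two states

for _ in range(300):
    new_beliefs = beliefs.copy()
    for i in range(3):
        j = rng.choice(neighbors[i])                    # pick one neighbor at random
        s = rng.random() < signal_prob[i, TRUE_STATE]   # draw private signal
        like = signal_prob[i] if s else 1.0 - signal_prob[i]
        post = like * beliefs[j]   # own likelihood times that neighbor's prior
        new_beliefs[i] = post / post.sum()
    beliefs = new_beliefs

print(beliefs[:, TRUE_STATE])  # each agent's belief on the true state
```

After a few hundred rounds every agent's belief concentrates on the true state, consistent with the exponentially fast learning the abstract claims; here the random walk's stationary distribution is uniform over the cycle, so the asymptotic rate would be the average of the per-agent relative entropies between the two signal structures.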
author list (cited authors)

Rahimian, M. A., Shahrampour, S., & Jadbabaie, A.
citation count
publication date
publisher
published in
Research
Identity
Digital Object Identifier (DOI)
International Standard Book Number (ISBN) 13
Additional Document Info
start page
end page
volume

54th IEEE Conference on Decision and Control