Reinforcement learning with analogue memristor arrays Academic Article

abstract

  • © 2019, The Author(s), under exclusive licence to Springer Nature Limited. Reinforcement learning algorithms that use deep neural networks are a promising approach for the development of machines that can acquire knowledge and solve problems without human input or supervision. At present, however, these algorithms are implemented in software running on relatively standard complementary metal-oxide-semiconductor digital platforms, where performance will be constrained by the limits of Moore's law and von Neumann architecture. Here, we report an experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for our hybrid analogue-digital platform. To illustrate the capabilities of our approach in robust in situ training without the need for a model, we performed two classic control problems: the cart-pole and mountain car simulations. We also show that, compared with conventional digital systems in real-world reinforcement learning tasks, our hybrid analogue-digital computing system has the potential to achieve a significant boost in speed and energy efficiency.
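  The abstract refers to encoding network weights in analogue 1T1R memristor conductances and training them in situ. The Python sketch below illustrates the general idea only, under assumed device parameters (G_MAX, G_MIN, DELTA_G) and a hypothetical AnalogueCrossbar class; it is not the authors' implementation or the algorithm reported in the paper. It shows a signed weight stored as the difference of two conductance maps, a crossbar-style read-out via Ohm's and Kirchhoff's laws, and incremental conductance updates driven by an externally computed gradient.

  import numpy as np

  # Assumed, illustrative device parameters (not from the paper).
  G_MAX = 100e-6   # maximum device conductance (S)
  G_MIN = 1e-6     # minimum device conductance (S)
  DELTA_G = 1e-6   # conductance change per programming pulse (S)

  class AnalogueCrossbar:
      """A signed weight matrix encoded as the difference of two conductance maps."""
      def __init__(self, n_in, n_out, seed=0):
          rng = np.random.default_rng(seed)
          self.g_pos = rng.uniform(G_MIN, G_MAX, size=(n_in, n_out))
          self.g_neg = rng.uniform(G_MIN, G_MAX, size=(n_in, n_out))

      def forward(self, v_in):
          # Ohm's law per device plus Kirchhoff's current law along each column
          # gives the matrix-vector product in the analogue domain.
          i_pos = v_in @ self.g_pos
          i_neg = v_in @ self.g_neg
          return i_pos - i_neg  # differential read-out yields signed weights

      def program(self, grad):
          # In-situ update: apply incremental pulses whose sign follows the
          # gradient, clipped to the device's assumed dynamic range.
          self.g_pos = np.clip(self.g_pos - DELTA_G * np.sign(grad), G_MIN, G_MAX)
          self.g_neg = np.clip(self.g_neg + DELTA_G * np.sign(grad), G_MIN, G_MAX)

  # Toy usage: fit a 4-to-2 linear map, a stand-in for one layer of a Q-network.
  rng = np.random.default_rng(1)
  target = rng.uniform(-50e-6, 50e-6, size=(4, 2))
  layer = AnalogueCrossbar(4, 2)
  for _ in range(200):
      x = rng.uniform(-1, 1, size=(1, 4))
      err = layer.forward(x) - x @ target
      layer.program(x.T @ err)  # outer-product weight gradient for a linear layer
  print("remaining error (S):", np.abs(layer.forward(np.eye(4)) - target).max())

  In a full reinforcement-learning setting, the gradient passed to program() would come from the temporal-difference error of the Q-network rather than a supervised target, but the read and programming steps follow the same pattern.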

published proceedings

  • NATURE ELECTRONICS

altmetric score

  • 7.95

author list (cited authors)

  • Wang, Z., Li, C., Song, W., Rao, M., Belkin, D., Li, Y., ... Yang, J. J.

citation count

  • 190

complete list of authors

  • Wang, Zhongrui; Li, Can; Song, Wenhao; Rao, Mingyi; Belkin, Daniel; Li, Yunning; Yan, Peng; Jiang, Hao; Lin, Peng; Hu, Miao; Strachan, John Paul; Ge, Ning; Barnell, Mark; Wu, Qing; Bartos, Andrew G.; Qiu, Qinru; Williams, R. Stanley; Xia, Qiangfei; Yang, J. Joshua

publication date

  • March 2019