On reinforcement learning in genetic regulatory networks (Conference Paper)

abstract

  • The control of probabilistic Boolean networks, as a model of genetic regulatory networks, has been formulated as an optimal stochastic control problem and solved using dynamic programming; however, the proposed methods fail once the number of genes in the network exceeds a small value. Their complexity grows exponentially with the number of genes, owing both to the estimation of model-dependent probability distributions and to the curse of dimensionality inherent in the dynamic programming algorithm. We propose a model-free approximate stochastic control method based on reinforcement learning that mitigates these twin curses of dimensionality and has polynomial time complexity. By using a simulator, the proposed method avoids the cost of estimating the probability distributions, and it can be applied to networks for which dynamic programming is computationally infeasible. Experimental results demonstrate that the performance of the method is close to that of optimal stochastic control. © 2007 IEEE.
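To illustrate the model-free idea the abstract describes, here is a minimal sketch of tabular Q-learning controlling a toy 3-gene probabilistic Boolean network. The network structure, predictor probabilities, cost function, and hyperparameters are all illustrative assumptions, not the paper's actual experimental setup; the paper's approximate method would also use function approximation rather than a full table, since a table over all 2^n states reintroduces the exponential blow-up on large networks. The point shown here is only that the controller learns from simulator samples without ever estimating the transition probability distributions.

```python
import random

N_GENES = 3
ACTIONS = range(N_GENES + 1)  # flip one gene, or do nothing (action N_GENES)

def pbn_step(state, action, rng):
    """One simulator transition of the toy PBN under a control action."""
    s = list(state)
    if action < N_GENES:       # the control input flips one gene
        s[action] ^= 1
    # Each gene updates via one of two simple Boolean predictors chosen at
    # random; this random predictor selection is what makes the network
    # probabilistic. Both predictors here are made-up examples.
    nxt = []
    for i in range(N_GENES):
        if rng.random() < 0.7:
            nxt.append(s[(i + 1) % N_GENES])  # predictor 1: copy neighbor
        else:
            nxt.append(1 - s[i])              # predictor 2: negate self
    return tuple(nxt)

def cost(state, action):
    """Per-step cost: penalize gene-0-ON states, plus a small control cost."""
    return 5.0 * state[0] + (1.0 if action < N_GENES else 0.0)

def q_learning(episodes=2000, horizon=15, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Model-free Q-learning: updates use only sampled transitions."""
    rng = random.Random(seed)
    Q = {}  # (state, action) -> estimated discounted cost-to-go
    for _ in range(episodes):
        state = tuple(rng.randint(0, 1) for _ in range(N_GENES))
        for _ in range(horizon):
            # epsilon-greedy over costs: we minimize, so take the argmin
            if rng.random() < eps:
                a = rng.randrange(N_GENES + 1)
            else:
                a = min(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
            nxt = pbn_step(state, a, rng)
            target = cost(state, a) + gamma * min(
                Q.get((nxt, x), 0.0) for x in ACTIONS)
            Q[(state, a)] = (1 - alpha) * Q.get((state, a), 0.0) + alpha * target
            state = nxt
    return Q

Q = q_learning()
```

After training, the greedy policy at a state is `min(ACTIONS, key=lambda a: Q[(state, a)])`; replacing the dictionary with a parametric approximator is what keeps the per-step cost polynomial in the number of genes on networks too large for dynamic programming.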

name of conference

  • 2007 IEEE/SP 14th Workshop on Statistical Signal Processing

published proceedings

  • 2007 IEEE/SP 14TH WORKSHOP ON STATISTICAL SIGNAL PROCESSING, VOLS 1 AND 2

author list (cited authors)

  • Faryabi, B., Datta, A., & Dougherty, E. R.

citation count

  • 8

complete list of authors

  • Faryabi, Babak||Datta, Aniruddha||Dougherty, Edward R

publication date

  • August 2007