On approximate stochastic control in genetic regulatory networks.
Academic Article
Abstract
The control of probabilistic Boolean networks, used as a model of genetic regulatory networks, can be formulated as an optimal stochastic control problem and solved with dynamic programming; however, the resulting methods fail once the number of genes in the network grows beyond a small value. Two dimensionality problems arise. First, the complexity of optimal stochastic control increases exponentially with the number of genes. Second, the complexity of estimating the probability distributions that specify the model also increases exponentially with the number of genes. We propose an approximate stochastic control method based on reinforcement learning that mitigates these curses of dimensionality and has polynomial time complexity. Because the method is model-free and interacts only with a simulator, it avoids estimating the probability distributions and thereby removes the impediment of model estimation. The method can be applied to networks for which dynamic programming is computationally infeasible. Experimental results demonstrate that its performance is close to that of optimal stochastic control.
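The abstract describes a model-free, simulator-based reinforcement-learning approach. As a rough illustration of that interaction pattern only (not the authors' implementation), the sketch below runs tabular Q-learning against a toy three-gene probabilistic Boolean network simulator; the Boolean update rules, intervention action, cost function, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: model-free Q-learning on a toy probabilistic Boolean network (PBN).
# All network details below are assumptions for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

N_GENES = 3                      # toy network; realistic networks are larger
N_STATES = 2 ** N_GENES          # a state is a binary gene-activity vector
ACTIONS = [None, 0]              # action: do nothing, or intervene on (flip) gene 0

def step(state, action):
    """Simulate one PBN transition; only sampled transitions are needed,
    never an explicit transition-probability matrix (the model-free ingredient)."""
    genes = [(state >> i) & 1 for i in range(N_GENES)]
    if action is not None:
        genes[action] ^= 1                       # external intervention
    # Randomly select one of two candidate Boolean rule sets (the "probabilistic" part).
    if rng.random() < 0.7:
        nxt = [genes[1] & genes[2], genes[0], 1 - genes[0]]
    else:
        nxt = [genes[1] | genes[2], 1 - genes[2], genes[1]]
    # Small random gene perturbation.
    nxt = [g ^ (rng.random() < 0.01) for g in nxt]
    next_state = sum(g << i for i, g in enumerate(nxt))
    # Illustrative cost: penalize states where gene 2 (an undesirable marker) is ON,
    # plus a cost for intervening.
    cost = 5.0 * nxt[2] + (1.0 if action is not None else 0.0)
    return next_state, -cost                     # reward = negative cost

# Tabular Q-learning; larger networks would require function approximation,
# but the simulator-only interaction pattern stays the same.
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
state = int(rng.integers(N_STATES))
for t in range(50_000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt, r = step(state, ACTIONS[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

policy = Q.argmax(axis=1)        # greedy intervention policy per state
print("greedy action index per state:", policy)
```

The key point the sketch is meant to show is that the controller is learned purely from simulated state transitions and costs, so the exponential cost of estimating the model's probability distributions never enters the procedure.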