Reinforcement Learning Approach to Feedback Stabilization Problem of Probabilistic Boolean Control Networks
Acernese, Antonio; Yerudkar, Amol; Glielmo, Luigi; Del Vecchio, Carmen
2020-01-01
Abstract
In this letter, we study the control of probabilistic Boolean control networks (PBCNs) by leveraging a model-free reinforcement learning (RL) technique. In particular, we propose a Q-learning (QL) based approach to address the feedback stabilization problem of PBCNs, and we design optimal state feedback controllers such that the PBCN is stabilized at a given equilibrium point. The optimal controllers are designed for both finite-time stability and asymptotic stability of PBCNs. To verify the convergence of the proposed QL algorithm, the obtained optimal policy is compared with the optimal solutions of two model-based techniques, namely the value iteration (VI) and semi-tensor product (STP) methods. Finally, some PBCN models of gene regulatory networks (GRNs) are considered to verify the obtained results.
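As a concrete illustration of the kind of approach the abstract describes, the sketch below runs tabular Q-learning on a small, hypothetical two-gene PBCN with a single Boolean control input. The network dynamics (RULES), the target equilibrium (TARGET), the reward shaping, and all learning parameters are illustrative assumptions made for this sketch; they are not the models or settings used in the letter.

```python
import random
from itertools import product

# Hypothetical toy PBCN (illustrative only, not from the letter):
# two genes x = (x1, x2) and one Boolean control input u.
# Each node's update rule is drawn at random from a small set of
# Boolean functions, mimicking the probabilistic switching of a PBCN.
RULES = {
    0: [(0.7, lambda x, u: x[1] and u), (0.3, lambda x, u: x[1])],
    1: [(1.0, lambda x, u: x[0] and u)],
}
TARGET = (0, 0)  # desired equilibrium point

def step(state, u):
    """Sample the next PBCN state given the current state and input."""
    nxt = []
    for options in RULES.values():
        r = random.random()
        for p, f in options:
            r -= p
            if r < 0:
                nxt.append(int(f(state, u)))
                break
        else:  # guard against floating-point rounding of the probabilities
            nxt.append(int(options[-1][1](state, u)))
    return tuple(nxt)

STATES = list(product((0, 1), repeat=2))
ACTIONS = (0, 1)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.2  # learning rate, discount, exploration

for episode in range(2000):
    s = random.choice(STATES)
    for _ in range(20):
        # epsilon-greedy action selection over the Boolean input
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s_next = step(s, a)
        r = 1.0 if s_next == TARGET else 0.0  # reward reaching the equilibrium
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# Extract a greedy state-feedback law from the learned Q-table.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # for this toy net, u = 0 in every state drives it to (0, 0)
```

Note that the learner only ever draws sampled transitions from step() and never reads the switching probabilities themselves, which is the model-free property the abstract emphasizes; the same tabular scheme extends in principle to larger GRN models by enumerating their 2^n Boolean states.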