Self-triggered control of probabilistic Boolean control networks: A reinforcement learning approach

Yerudkar A.; Glielmo L.; Del Vecchio C.
2022-01-01

Abstract

In this work, strategies to devise optimal feedback control of probabilistic Boolean control networks (PBCNs) are discussed. Reinforcement learning (RL) based control is explored to minimize model design effort and to regulate highly complex PBCNs. A Q-learning random forest (QLRF) algorithm is proposed; using this algorithm, state-feedback controllers are designed to stabilize PBCNs at a given equilibrium point. Further, for closed-loop PBCNs stabilized by QLRF, a Lyapunov function is defined and a method to construct it is presented. Using such Lyapunov functions, a novel self-triggered control (STC) strategy is proposed, whereby the controller is recomputed according to a triggering schedule, yielding an optimal control strategy while retaining closed-loop PBCN stability. Finally, the results are verified through computer simulations.
Keywords: Optimal feedback control; Boolean network
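
To make the controller-synthesis step summarized in the abstract concrete, the sketch below runs a plain tabular Q-learning loop on a hypothetical two-node PBCN with one Boolean input, extracts a state-feedback law, and then applies it with a crude value-based triggering rule. Everything here (the toy network and its transition probabilities, the reward, the hyperparameters, and the use of V(x) = -max_a Q(x, a) as a Lyapunov-like certificate) is an illustrative assumption: the paper's QLRF algorithm approximates the Q-function with random forests and derives the Lyapunov function and self-triggering schedule from the stabilized closed-loop PBCN, which this minimal example does not reproduce.

```python
# Illustrative sketch only: a tabular Q-learning stand-in for the paper's QLRF
# controller synthesis, on a hypothetical 2-node PBCN with one Boolean input.
# The network, reward, and hyperparameters below are assumptions for
# illustration, not the example used in the paper.
import random

random.seed(0)

TARGET = (1, 1)          # desired equilibrium point (assumed)
ACTIONS = (0, 1)         # Boolean control input u

def step(x, u):
    """One probabilistic transition of the toy PBCN.

    Each node picks one of two candidate Boolean update rules at random,
    which is what makes the network probabilistic."""
    x1, x2 = x
    # Node 1: rule A with prob. 0.7, rule B with prob. 0.3 (assumed)
    n1 = (x2 | u) if random.random() < 0.7 else (x1 & u)
    # Node 2: rule A with prob. 0.6, rule B with prob. 0.4 (assumed)
    n2 = (x1 & x2) if random.random() < 0.6 else u
    return (n1, n2)

def reward(x):
    # Encourage reaching and staying at the target equilibrium.
    return 1.0 if x == TARGET else -1.0

# Tabular Q-function over the 4 states x 2 actions of the toy network.
Q = {(x1, x2): {u: 0.0 for u in ACTIONS} for x1 in (0, 1) for x2 in (0, 1)}
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    x = (random.randint(0, 1), random.randint(0, 1))
    for t in range(20):
        # epsilon-greedy exploration
        if random.random() < eps:
            u = random.choice(ACTIONS)
        else:
            u = max(ACTIONS, key=lambda a: Q[x][a])
        x_next = step(x, u)
        # standard Q-learning update
        Q[x][u] += alpha * (reward(x_next) + gamma * max(Q[x_next].values()) - Q[x][u])
        x = x_next

# Extract a state-feedback law u = pi(x) from the learned Q-function.
pi = {x: max(ACTIONS, key=lambda a: Q[x][a]) for x in Q}
print("learned state-feedback controller:", pi)

# Crude self-trigger illustration (assumption): treat V(x) = -max_a Q(x, a) as a
# Lyapunov-like certificate, hold the previously applied input between triggers,
# and recompute the input only when V fails to decrease.
V = {x: -max(Q[x].values()) for x in Q}
x = (0, 0)
u = pi[x]
for t in range(10):
    x_new = step(x, u)
    if V[x_new] >= V[x]:      # triggering condition fires: recompute control
        u = pi[x_new]
        print(f"t={t}: trigger fired, input recomputed to u={u}")
    x = x_new
    print(f"t={t}: state={x}, held input u={u}")
```

With the assumed transition probabilities, holding u = 1 makes the target state absorbing, so the learned feedback drives the toy network to (1, 1) while the input is only recomputed when the value-based condition fires; none of these quantitative details are taken from the paper.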

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12070/55879
Citations
  • Scopus: 14
  • ISI (Web of Science): 13