Reinforcement learning to boost molecular docking upon protein conformational ensemble.

Phys Chem Chem Phys

College of Chemistry and Molecular Engineering, and Beijing National Laboratory for Molecular Sciences (BNLMS), Peking University, Beijing 100871, China.

Published: March 2021

Intrinsically disordered proteins (IDPs) are widely involved in human diseases and are thus attractive therapeutic targets. In practice, however, it is computationally prohibitive to dock large ligand libraries against the thousands to tens of thousands of conformations that represent an IDP ensemble. Here, we propose a reversible upper confidence bound (UCB) algorithm for the virtual screening of IDPs that addresses the influence of the conformational ensemble. The docking process is arranged dynamically so that docking attempts concentrate near the score boundary that separates the top ligands from the bulk. Using the transcription factor c-Myc as an example, we demonstrate that the average number of docking runs per ligand can be greatly reduced with only a slight effect on screening performance. This study suggests that reinforcement learning can efficiently relieve the conformational-ensemble bottleneck of virtual screening in the rational drug design of IDPs.
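The abstract does not spell out the authors' "reversible UCB" variant; as a rough illustration of the underlying idea (a bandit that spends docking attempts on ligands whose ranking is still uncertain), here is a plain UCB1 sketch. The function `dock_score`, the ligand indexing, and the budget handling are assumptions for illustration, not the paper's implementation.

```python
import math

def ucb1_screen(dock_score, n_ligands, budget, c=2.0):
    """Allocate a fixed docking budget across ligands with UCB1.

    dock_score(i) returns one noisy docking score for ligand i
    (e.g. against a randomly sampled conformation); higher is better.
    Returns per-ligand mean scores and docking counts.
    """
    counts = [0] * n_ligands
    sums = [0.0] * n_ligands
    # Initialise: dock every ligand once.
    for i in range(n_ligands):
        sums[i] += dock_score(i)
        counts[i] = 1
    for t in range(n_ligands, budget):
        # Pick the ligand with the largest upper confidence bound:
        # empirical mean plus an exploration bonus that shrinks as
        # a ligand accumulates docking attempts.
        i = max(range(n_ligands),
                key=lambda j: sums[j] / counts[j]
                + math.sqrt(c * math.log(t) / counts[j]))
        sums[i] += dock_score(i)
        counts[i] += 1
    means = [s / n for s, n in zip(sums, counts)]
    return means, counts
```

The returned means and counts can then be used to pick top candidates while docking far fewer conformations per ligand than exhaustive enumeration over the ensemble.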

DOI: http://dx.doi.org/10.1039/d0cp06378a


Similar Publications

Two mechanisms that have been used to study the evolution of cooperative behavior are altruistic punishment, in which cooperative individuals pay additional costs to punish defection, and multilevel selection, in which competition between groups can help to counteract individual-level incentives to cheat. Boyd, Gintis, Bowles, and Richerson have used simulation models of cultural evolution to suggest that altruistic punishment and pairwise group-level competition can work in concert to promote cooperation, even when neither mechanism can do so on its own. In this paper, we formulate a PDE model for multilevel selection motivated by the approach of Boyd and coauthors, modeling individual-level birth-death competition with a replicator equation based on individual payoffs and describing group-level competition with pairwise conflicts based on differences in the average payoffs of the competing groups.
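The individual-level and group-level terms described above can be written schematically as a transport-type PDE. The following generic form is an assumption for illustration (symbols chosen here, not necessarily the authors' exact equation): let f(t, x) be the density of groups whose cooperator fraction is x, with cooperator and defector payoffs π_C(x), π_D(x), average group payoff G(x), and relative rate λ of pairwise group conflicts.

```latex
\partial_t f(t,x) =
  \underbrace{-\,\partial_x\!\Bigl[ x(1-x)\bigl(\pi_C(x)-\pi_D(x)\bigr)\, f(t,x) \Bigr]}_{\text{within-group replicator dynamics}}
  \;+\;
  \underbrace{\lambda\, f(t,x)\!\left[ G(x) - \int_0^1 G(y)\, f(t,y)\, dy \right]}_{\text{pairwise group-level competition}}
```

The advection term pushes each group toward defection when π_D exceeds π_C, while the nonlocal term lets groups with above-average payoff replace below-average ones, capturing the tension between the two levels of selection.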


Of rats and robots: A mutual learning paradigm.

J Exp Anal Behav

March 2025

Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, Istanbul, Turkey.

Robots are increasingly used alongside Skinner boxes to train animals in operant conditioning tasks. Similarly, animals are being employed in artificial intelligence research to train various algorithms. However, both types of experiments rely on unidirectional learning, where one partner (the animal or the robot) acts as the teacher and the other as the student.


Maintaining Baby-Friendly Hospital Initiative (BFHI) standards within a complex healthcare system presents unique challenges. This case study from a regional perinatal center in the northeast United States details the design and implementation of a program to address BFHI Step 2, which requires ongoing competency assessment and team member training to ensure breastfeeding support. The shift of BFHI competencies to continuous professional development introduced logistical challenges, compounded by staff turnover and budget constraints.


Machine learning techniques have emerged as a promising tool for efficient cache management, helping optimize cache performance and fortify against security threats. The range of machine learning is vast, from reinforcement learning-based cache replacement policies to Long Short-Term Memory (LSTM) models predicting content characteristics for caching decisions. Diverse techniques such as imitation learning, reinforcement learning, and neural networks are extensively useful in cache-based attack detection, dynamic cache management, and content caching in edge networks.
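The snippet above mentions reinforcement-learning-based cache replacement. A minimal illustration of that idea is an online bandit that learns whether LRU or LFU eviction suits the current workload, in the spirit of published learned policies such as LeCaR; the class, its parameters, and the reward scheme below are invented for illustration only.

```python
import random
from collections import OrderedDict

class BanditCache:
    """Toy cache that learns, via an epsilon-greedy bandit, whether
    LRU or LFU eviction serves the workload better. Reward: +1 for a
    hit, 0 for a miss, credited to the policy that made the most
    recent eviction. A sketch, not a production design."""

    def __init__(self, capacity, eps=0.1):
        self.capacity = capacity
        self.eps = eps
        self.store = OrderedDict()             # key -> value, order = recency
        self.freq = {}                         # key -> access count
        self.value = {"LRU": 0.0, "LFU": 0.0}  # running reward estimates
        self.n = {"LRU": 1, "LFU": 1}
        self.last_policy = None

    def _evict(self):
        # Epsilon-greedy choice between the two hand-written policies.
        if random.random() < self.eps:
            policy = random.choice(["LRU", "LFU"])
        else:
            policy = max(self.value, key=self.value.get)
        if policy == "LRU":
            victim = next(iter(self.store))                 # least recently used
        else:
            victim = min(self.store, key=lambda k: self.freq[k])  # least frequent
        del self.store[victim]
        self.last_policy = policy

    def _reward(self, r):
        if self.last_policy is None:
            return
        p = self.last_policy
        self.n[p] += 1
        self.value[p] += (r - self.value[p]) / self.n[p]    # incremental mean

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            self.freq[key] = self.freq.get(key, 0) + 1
            self._reward(1.0)
            return self.store[key]
        self._reward(0.0)
        return None

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = value
        self.store.move_to_end(key)
        self.freq[key] = self.freq.get(key, 0) + 1
```

Over many requests, whichever eviction rule yields more subsequent hits accumulates a higher value estimate and is chosen more often.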


Control-Informed Reinforcement Learning for Chemical Processes.

Ind Eng Chem Res

March 2025

Department of Chemical Engineering, Imperial College London, South Kensington, London SW7 2AZ, U.K.

This work proposes a control-informed reinforcement learning (CIRL) framework that integrates proportional-integral-derivative (PID) control components into the architecture of deep reinforcement learning (RL) policies, incorporating prior knowledge from control theory into the learning process. CIRL improves performance and robustness by combining the best of both worlds: the disturbance-rejection and set point-tracking capabilities of PID control and the nonlinear modeling capacity of deep RL. Simulation studies conducted on a continuously stirred tank reactor system demonstrate the improved performance of CIRL compared to both conventional model-free deep RL and static PID controllers.
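The PID component embedded in a CIRL-style policy reduces, at its core, to the standard control law u = Kp·e + Ki·∫e dt + Kd·de/dt with the gains exposed as tunable parameters. The sketch below, including the class name and the toy first-order plant in the usage, is an assumption for illustration, not the authors' implementation.

```python
class PIDPolicy:
    """Policy whose action is a PID control law with exposed gains.
    In a CIRL-style setup the gains (and optionally a learned
    nonlinear residual term) would be tuned by a deep-RL algorithm;
    here they are plain attributes."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def act(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                     # I term state
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)   # D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Letting an RL algorithm adjust (kp, ki, kd) while a neural network adds a nonlinear correction on top is the essence of combining PID structure with deep RL described above: the PID part supplies set-point tracking and disturbance rejection, the learned part the nonlinear modeling capacity.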

