Metal nanoparticles are widely used as heterogeneous catalysts that activate adsorbed molecules and lower the reaction energy barrier. The product yield depends on the interplay between elementary processes: adsorption, activation, desorption, and reaction. These processes, in turn, depend on the inlet gas composition, temperature, and pressure. At steady state, active surface sites may be blocked by adsorbed reagents. A periodic regime may therefore improve the yield, but the appropriate period and waveform are not known in advance. Dynamic control should account for modifications of the surface and the gas atmosphere and adjust the reaction parameters according to the current state of the system and its history. In this work, we applied a reinforcement learning algorithm to control CO oxidation on a palladium catalyst. A policy gradient algorithm was trained in a theoretical environment parametrized from experimental data. The algorithm learned to maximize the CO2 formation rate based on the CO and O2 partial pressures over several successive time steps. Within a unified approach, we found optimal stationary, periodic, and nonperiodic regimes for different problem formulations and gained insight into why a dynamic regime can be preferable. More broadly, this work contributes to popularizing the reinforcement learning approach in catalytic science.
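
The abstract describes a policy gradient agent that chooses CO and O2 partial pressures to maximize the CO2 formation rate in a simulated catalytic environment. The sketch below is a minimal, hypothetical illustration of that kind of control loop: the ToyCOOxidationEnv surrogate kinetics, the linear-Gaussian policy, and all rate constants are assumptions made for illustration only and are not the experimentally parametrized model or the specific algorithm used in the paper.

```python
# REINFORCE-style policy-gradient sketch: pick CO/O2 partial pressures to
# maximize a CO2 formation rate. The surrogate kinetics are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class ToyCOOxidationEnv:
    """Hypothetical Langmuir-Hinshelwood-like surrogate: CO* and O* coverages
    evolve with the chosen partial pressures; reward is the CO2 formation rate."""
    def __init__(self, dt=0.1):
        self.dt = dt
        self.reset()

    def reset(self):
        self.theta_co, self.theta_o = 0.0, 0.0  # surface coverages
        return np.array([self.theta_co, self.theta_o])

    def step(self, action):
        p_co, p_o2 = action                       # partial pressures in [0, 1]
        free = max(0.0, 1.0 - self.theta_co - self.theta_o)
        ads_co = 1.0 * p_co * free                # CO adsorption
        ads_o = 0.5 * p_o2 * free**2              # dissociative O2 adsorption
        des_co = 0.1 * self.theta_co              # CO desorption
        rxn = 2.0 * self.theta_co * self.theta_o  # CO* + O* -> CO2
        self.theta_co = np.clip(self.theta_co + self.dt * (ads_co - des_co - rxn), 0, 1)
        self.theta_o = np.clip(self.theta_o + self.dt * (ads_o - rxn), 0, 1)
        return np.array([self.theta_co, self.theta_o]), rxn

def policy_mean(params, state):
    """Linear policy squashed to [0, 1] by a sigmoid; input is coverages + bias."""
    return 1.0 / (1.0 + np.exp(-(params @ np.append(state, 1.0))))

def run_episode(env, params, horizon=50, sigma=0.1):
    state = env.reset()
    grads, rewards = [], []
    for _ in range(horizon):
        mean = policy_mean(params, state)
        action = np.clip(mean + sigma * rng.standard_normal(2), 0.0, 1.0)
        # grad of log N(action | mean, sigma) w.r.t. params, via the sigmoid
        dmean = (action - mean) / sigma**2 * mean * (1.0 - mean)
        grads.append(np.outer(dmean, np.append(state, 1.0)))
        state, reward = env.step(action)
        rewards.append(reward)
    return grads, rewards

env = ToyCOOxidationEnv()
params = np.zeros((2, 3))   # 2 actions (p_CO, p_O2); 2 coverages + bias as features
lr = 0.05
for episode in range(300):
    grads, rewards = run_episode(env, params)
    returns = np.cumsum(rewards[::-1])[::-1]      # reward-to-go
    baseline = returns.mean()
    for g, G in zip(grads, returns):
        params += lr * g * (G - baseline) / len(grads)

print("learned mean pressures at a clean surface:", policy_mean(params, np.zeros(2)))
```

Because the policy observes the evolving surface coverages, a scheme like this can in principle settle on either a constant or a time-varying pressure program, which mirrors the stationary versus dynamic regimes discussed in the abstract.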

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223201
DOI: http://dx.doi.org/10.1021/acsomega.3c10422