A Multiarmed Bandit Approach to Adaptive Water Quality Management.

Integr Environ Assess Manag

Department of Bioscience, Aarhus University, Rønde, Denmark.

Published: November 2020

Nonpoint source water quality management is challenged with allocating uncertain management actions and monitoring their performance in the absence of state-dependent decision making. This adaptive management context can be expressed as a multiarmed bandit problem. Multiarmed bandit strategies attempt to balance the exploitation of actions that appear to maximize performance with the exploration of uncertain, but potentially better, actions. We performed a test of multiarmed bandit strategies to inform adaptive water quality management in Massachusetts, USA. Conservation and restoration practitioners were tasked with allocating household wastewater treatments to minimize N inputs to impaired waters. We obtained time series of N monitoring data from 3 wastewater treatment types and organized them chronologically and randomly. The chronological data set represented nonstationary performance based on recent monitoring data, whereas the random data set represented stationary performance. We tested 2 multiarmed bandit strategies in hypothetical experiments to sample from the treatment data through 20 sequential decisions. A deterministic probability-matching strategy allocated treatments with the highest probability of success regarding their performance at each decision. A randomized probability-matching strategy randomly allocated treatments according to their probability of success at each decision. The strategies were compared with a nonadaptive strategy that equally allocated treatments at each decision. Results indicated that equal allocation is useful for learning in nonstationary situations but tended to overexplore inferior treatments and thus did not maximize performance when compared with the other strategies. Deterministic probability matching maximized performance in many stationary situations, but the strategy did not adequately explore treatments and converged on inferior treatments in nonstationary situations. Randomized probability matching balanced performance and learning in stationary situations, but the strategy could converge on inferior treatments in nonstationary situations. These findings provide evidence that probability-matching strategies are useful for adaptive management. Integr Environ Assess Manag 2020;16:841-852. © 2020 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals LLC on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
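The three allocation strategies compared in the abstract can be prototyped in a few lines. The sketch below is illustrative only: it assumes Bernoulli success/failure monitoring outcomes, Beta(1, 1) priors, and stationary treatment performance, none of which are specified in the abstract. Thompson sampling stands in for randomized probability matching, a greedy argmax over posterior means stands in for the deterministic "highest probability of success" rule, and all names and success rates are hypothetical.

    # Minimal sketch of the three strategies, assuming Bernoulli outcomes
    # and Beta(1, 1) priors (illustrative; not specified in the abstract).
    import random

    N_TREATMENTS = 3   # three wastewater treatment types
    N_DECISIONS = 20   # twenty sequential decisions, as in the experiments

    def update(counts, arm, success):
        """Record one monitoring outcome for a treatment (arm)."""
        wins, trials = counts[arm]
        counts[arm] = (wins + int(success), trials + 1)

    def equal_allocation(counts, t):
        """Nonadaptive baseline: cycle through treatments uniformly."""
        return t % N_TREATMENTS

    def deterministic_probability_matching(counts, t):
        """Greedy stand-in: pick the treatment with the highest posterior mean."""
        def posterior_mean(arm):
            wins, trials = counts[arm]
            return (wins + 1) / (trials + 2)  # Beta(1, 1) prior
        return max(range(N_TREATMENTS), key=posterior_mean)

    def randomized_probability_matching(counts, t):
        """Thompson sampling: draw one sample per posterior, pick the max."""
        draws = [random.betavariate(counts[a][0] + 1,
                                    counts[a][1] - counts[a][0] + 1)
                 for a in range(N_TREATMENTS)]
        return max(range(N_TREATMENTS), key=lambda a: draws[a])

    def run(strategy, true_rates, seed=0):
        """Simulate 20 decisions against fixed (stationary) success rates."""
        random.seed(seed)
        counts = {a: (0, 0) for a in range(N_TREATMENTS)}
        total = 0
        for t in range(N_DECISIONS):
            arm = strategy(counts, t)
            success = random.random() < true_rates[arm]
            update(counts, arm, success)
            total += success
        return total

    if __name__ == "__main__":
        rates = [0.3, 0.5, 0.7]  # hypothetical treatment success rates
        for s in (equal_allocation,
                  deterministic_probability_matching,
                  randomized_probability_matching):
            print(s.__name__, run(s, rates))

Under these assumptions the sketch reproduces the qualitative trade-off the paper describes: equal allocation keeps sampling the 0.3 arm, the greedy rule can lock onto whichever arm starts well, and Thompson sampling concentrates on the 0.7 arm while still occasionally exploring.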

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7689691
DOI: http://dx.doi.org/10.1002/ieam.4302

Publication Analysis

Top Keywords

multiarmed bandit (20)
water quality (12)
quality management (12)
bandit strategies (12)
allocated treatments (12)
nonstationary situations (12)
inferior treatments (12)
adaptive water (8)
performance (8)
adaptive management (8)

Similar Publications

In-band full-duplex communication has the potential to double wireless channel capacity. However, efficiently translating the full-duplex gain at the physical layer into network throughput improvement remains a challenge, especially in dynamic communication environments. This paper presents a reinforcement-learning-based full-duplex (RLFD) medium access control (MAC) protocol for wireless local-area networks (WLANs) with full-duplex access points.

This paper addresses beam scheduling for tracking multiple smart targets in phased array radar networks, aiming to mitigate the performance degradation in previous myopic scheduling methods and enhance the tracking performance, which is measured by a discounted cost objective related to the tracking error covariance (TEC) of the targets. The scheduling problem is formulated as a restless multi-armed bandit problem, where each bandit process is associated with a target and its TEC states evolve with different transition rules for different actions, i.e.

Article Synopsis
  • The study focuses on the combinatorial pure exploration problem within stochastic multi-armed bandits, specifically where the action set size is polynomially related to the number of arms.
  • A new algorithm called the combinatorial gap-based exploration (CombGapE) is introduced, which achieves optimal sample complexity bounds, meaning it performs efficiently relative to theoretical limits (a generic sketch of gap-based elimination follows after this list).
  • Numerical results demonstrate that the CombGapE algorithm significantly outperforms other existing methods when tested on both synthetic data and real-world datasets.
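In contrast to the regret-minimizing bandit strategies above, pure-exploration methods like the one synopsized here spend all sampling effort on identifying the best arm. The sketch below is a generic gap-based successive-elimination routine, not the paper's CombGapE algorithm (which handles combinatorial action sets); the Bernoulli arms, confidence radius, and names are all illustrative assumptions.

    # Generic gap-based successive elimination for best-arm identification.
    # NOT the CombGapE algorithm; it only illustrates the core idea of
    # keeping arms whose empirical gap to the leader is unresolved.
    import math
    import random

    def successive_elimination(rates, delta=0.05, seed=1):
        """Return the index of the estimated best Bernoulli arm."""
        random.seed(seed)
        k = len(rates)
        active = list(range(k))
        sums = [0.0] * k
        t = 0
        while len(active) > 1:
            t += 1
            for a in active:  # pull every surviving arm once per round
                sums[a] += float(random.random() < rates[a])
            # confidence radius shrinks as samples accumulate
            radius = math.sqrt(math.log(4 * k * t * t / delta) / (2 * t))
            means = {a: sums[a] / t for a in active}
            best = max(means.values())
            # drop arms whose gap to the leader exceeds twice the radius
            active = [a for a in active if best - means[a] <= 2 * radius]
        return active[0]

    print(successive_elimination([0.3, 0.5, 0.7]))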

Traditional explanations for stereotypes assume that they result from deficits in humans (ingroup-favoring motives, cognitive biases) or their environments (majority advantages, real group differences). A recently proposed alternative explanation holds that stereotypes can emerge when exploration is costly: even optimal decision makers in an ideal environment can inadvertently form incorrect impressions from arbitrary encounters.

Interpreting pretext tasks for active learning: a reinforcement learning approach.

Sci Rep

October 2024

School of Electrical Engineering, Hanyang University ERICA, Ansan, 15588, South Korea.

As the amount of labeled data increases, the performance of deep neural networks tends to improve. However, annotating a large volume of data can be expensive. Active learning addresses this challenge by selectively annotating unlabeled data.
