Antibiotic overprescribing is a global challenge that contributes to rising antibiotic resistance and mortality. We test a novel approach to antibiotic stewardship. Capitalising on the "wisdom of crowds", the observation that a group's collective judgement often outperforms that of the average individual, we examine whether pooling the treatment durations recommended by different prescribers can improve antibiotic prescribing. Using international survey data from 787 expert antibiotic prescribers, we run computer simulations that compare the performance of three data aggregation rules across different clinical cases and group sizes. We also identify patterns of prescribing bias in recommendations about antibiotic treatment durations to quantify current levels of overprescribing. Our results suggest that pooling treatment recommendations (using the median) could improve guideline compliance in groups of three or more prescribers. Implications for antibiotic stewardship and for improving medical decision making more generally are discussed. Clinical applicability is likely to be greatest during hospital ward rounds and larger multidisciplinary team meetings, where complex patient cases are discussed and existing guidelines provide limited guidance.
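
The pooling idea is straightforward to prototype. Below is a minimal simulation sketch in Python: simulated prescribers recommend durations with a hypothetical overprescribing skew around a guideline value, and groups of increasing size are scored on guideline compliance under different aggregation rules. Only the median rule is confirmed by the abstract; the mean and minimum rules, the guideline duration, and the skew parameters are illustrative assumptions, not the paper's data or methods.

```python
import numpy as np

rng = np.random.default_rng(0)

GUIDELINE = 7            # hypothetical guideline duration in days (assumed)
N_PRESCRIBERS = 787      # pool size matching the survey sample
N_TRIALS = 10_000        # simulated groups per condition

# Hypothetical individual recommendations: right-skewed around the
# guideline to mimic a tendency to overprescribe (an assumption, not
# the paper's empirical distribution).
population = GUIDELINE + rng.poisson(1.5, N_PRESCRIBERS) - 1

def compliance(aggregate, group_sizes=(1, 3, 5, 9)):
    """Fraction of simulated groups whose pooled duration hits the guideline."""
    out = {}
    for k in group_sizes:
        hits = 0
        for _ in range(N_TRIALS):
            group = rng.choice(population, size=k, replace=False)
            hits += aggregate(group) == GUIDELINE
        out[k] = hits / N_TRIALS
    return out

# Three candidate aggregation rules; the abstract names only the median.
rules = {
    "mean":   lambda g: round(float(np.mean(g))),
    "median": lambda g: round(float(np.median(g))),
    "min":    lambda g: int(np.min(g)),  # illustrative 'shortest duration' rule
}

for name, rule in rules.items():
    print(name, compliance(rule))
```

Under this assumed skew, the median's compliance rate climbs with group size because it discards extreme recommendations, which is consistent with the abstract's finding that pooling helps in groups of three or more.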


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7608639
DOI: http://dx.doi.org/10.1038/s41598-020-75063-z

Publication Analysis

Top Keywords

wisdom crowds (8); improve guideline (8); guideline compliance (8); antibiotic (8); antibiotic prescribers (8); antibiotic stewardship (8); pooling treatment (8); treatment durations (8); harnessing wisdom (4); crowds improve (4)

Similar Publications

Sequential collaboration describes the incremental process of contributing to online collaborative projects such as Wikipedia and OpenStreetMap. After a first contributor creates an initial entry, subsequent contributors form a sequential chain, each deciding whether to adjust or keep the latest entry; the entry is updated only when a contributor chooses to change it. Sequential collaboration has recently been examined as a method for eliciting numerical group judgments.
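
As an illustration, sequential collaboration over a single numeric judgment can be sketched in a few lines. All parameters here (the adjustment probability, noise level, and averaging update) are illustrative assumptions, not the cited study's model.

```python
import random

def sequential_chain(true_value, n_contributors, noise=5.0, p_adjust=0.5, seed=0):
    """Simulate a sequential-collaboration chain for one numeric judgment.

    Each contributor either keeps the latest entry or replaces it with a
    compromise between the current entry and their own noisy estimate.
    """
    rng = random.Random(seed)
    entry = true_value + rng.gauss(0, noise)        # first contributor's entry
    for _ in range(n_contributors - 1):
        if rng.random() < p_adjust:                 # decides to adjust
            own = true_value + rng.gauss(0, noise)  # contributor's own estimate
            entry = (entry + own) / 2               # move the entry toward it
    return entry

# The chain's final entry tends to sit closer to the truth than a lone guess.
print(sequential_chain(true_value=100, n_contributors=20))
```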

Article Synopsis
  • Embracing local knowledge is crucial for biodiversity conservation, but there is a lack of effective frameworks to incorporate this knowledge properly.
  • Using a Wisdom of Crowds approach, the study tested whether diverse groups of individuals, with varying ages and fishing experience, provided better estimates of fishing quality compared to more uniform groups.
  • The research found that a diverse group comprising 31% of survey participants captured most of the unique responses, and that small diverse groups were as effective as larger ones in assessing ecological conditions, highlighting the importance of including varied knowledge holders in research.

For unfamiliar faces, deciding whether two photographs depict the same person or not can be difficult. One way to substantially improve accuracy is to defer to the 'wisdom of crowds' by aggregating responses across multiple individuals. However, there are several methods available for doing this.
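
Two common ways of aggregating such responses can be sketched as follows; the response format and values are hypothetical, and the cited paper may compare different rules.

```python
from statistics import mean

# Hypothetical responses from five raters for one face pair:
# binary 'same person?' votes and signed confidence ratings on a
# -3..+3 scale (negative = 'different', positive = 'same').
votes = [True, True, False, True, False]
ratings = [2, 1, -1, 3, -2]

# Rule 1: simple majority vote over the binary decisions.
majority = sum(votes) > len(votes) / 2

# Rule 2: average the signed confidence ratings, then threshold at zero.
confidence_avg = mean(ratings) > 0

print(majority, confidence_avg)  # both True for this example
```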


How wise is the crowd: Can we infer people are accurate and competent merely because they agree with each other?

Cognition

February 2025

Institut Jean Nicod, Département d'études cognitives, ENS, EHESS, PSL University, CNRS, France.

Are people who agree on something more likely to be right and competent? Evidence suggests that people tend to make this inference. However, standard wisdom of crowds approaches only provide limited normative grounds. Using simulations and analytical arguments, we argue that when individuals make independent and unbiased estimates, under a wide range of parameters, individuals whose answers converge with each other tend to have more accurate answers and to be more competent.
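
The core argument is easy to reproduce in a short simulation: draw pairs of independent, unbiased estimates with varying noise levels and check whether pairs that agree more are also, on average, more accurate. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUTH = 50.0
N_PAIRS = 100_000

# Two independent, unbiased estimators per trial; each has its own noise
# level, standing in for differing competence (illustrative parameters).
sd = rng.uniform(1.0, 20.0, size=(N_PAIRS, 2))
estimates = TRUTH + rng.normal(0.0, sd)

agreement = np.abs(estimates[:, 0] - estimates[:, 1])  # how much the pair converges
error = np.abs(estimates - TRUTH).mean(axis=1)         # the pair's average accuracy

# Pairs whose answers converge are, on average, more accurate.
close = agreement < np.median(agreement)
print("mean error, convergent pairs:", error[close].mean())
print("mean error, divergent pairs:", error[~close].mean())
```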


Human forecasting accuracy improves through the "wisdom of the crowd" effect, in which aggregated predictions tend to outperform individual ones. Past research suggests that individual large language models (LLMs) tend to underperform compared to human crowd aggregates. We simulate a wisdom of the crowd effect with LLMs.

