Adversarial risk analysis (ARA) provides a framework for dealing with risks that originate from the intentional actions of adversaries. We show how ARA may be used to allocate security resources for the protection of urban spaces. We take the spatial structure into account and consider both proactive and reactive measures: we aim both to reduce criminality and to recover from it as well as possible, should it occur. We approach the problem by deploying an ARA model over each spatial unit and coordinating the models through resource constraints, value aggregation, and proximity. We illustrate our approach with an example that uncovers several relevant policy issues.
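The coordination idea in the abstract (one model per spatial unit, tied together by a shared budget, aggregated value, and proximity effects) can be illustrated with a toy greedy allocation. This is a minimal sketch, not the paper's model: the attacker-success probability form, the spillover factor, and the greedy rule are all illustrative assumptions.

```python
# Toy sketch (illustrative assumptions, not the paper's ARA model):
# allocate a security budget across spatial units, where defense in a
# unit lowers the chance an attack there succeeds, and part of each
# unit's defense "spills over" to its neighbours (proximity).

def expected_loss(values, defense, neighbours, spill=0.3):
    """Expected loss aggregated over all spatial units."""
    total = 0.0
    for i, v in enumerate(values):
        # Effective defense = own defense + spillover from neighbours.
        eff = defense[i] + spill * sum(defense[j] for j in neighbours[i])
        p_success = 1.0 / (1.0 + eff)  # assumed attacker-success form
        total += v * p_success         # value aggregation across units
    return total

def greedy_allocate(values, neighbours, budget, step=1.0):
    """Spend the budget one step at a time where it helps most."""
    defense = [0.0] * len(values)
    while budget >= step:
        best_i, best_loss = None, None
        for i in range(len(values)):
            trial = defense[:]
            trial[i] += step
            loss = expected_loss(values, trial, neighbours)
            if best_loss is None or loss < best_loss:
                best_i, best_loss = i, loss
        defense[best_i] += step  # resource constraint: shared budget
        budget -= step
    return defense
```

With one high-value unit and two low-value neighbours (values `[10, 1, 1]`, a line graph, budget 3), the greedy rule concentrates the entire budget on the high-value unit, while its neighbours still benefit through spillover; richer value profiles or attacker models would spread the allocation.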


Source: http://dx.doi.org/10.1111/risa.12580

Similar Publications

Future military conflicts are likely to involve peer or near-peer adversaries in large-scale combat operations, leading to casualty rates not seen since World War II. Casualty volume, combined with anticipated disruptions in medical evacuation, will create resource-limited environments that challenge medical responders to make complex, repetitive triage decisions. Similarly, pandemics, mass casualty incidents, and natural disasters strain civilian health care providers, increasing their risk for exhaustion, burnout, and moral injury.


Whole-grain foods (WGFs) constitute a large part of the human daily diet, so identifying the risks associated with WGFs is important for health and safety. Existing research on WGFs, however, has focused on the effects of single or multiple hazardous substances on food safety, neglecting the interactions among hazardous substances and between hazardous substances and basic product information. This paper therefore proposes a causal inference method for WGF risk based on a generative adversarial network (GAN) and a Bayesian network (BN) to explore those interactions.


This study aims to develop and evaluate a fast and robust deep learning-based auto-segmentation approach for organs at risk in MRI-guided radiotherapy of pancreatic cancer, to overcome the time-intensive manual contouring required in online adaptive workflows. The research focuses on implementing novel data augmentation techniques to address the challenges posed by limited datasets. This study was conducted in two phases.


PE-CycleGAN network based CBCT-sCT generation for nasopharyngeal carcinoma adaptive radiotherapy.

Nan Fang Yi Ke Da Xue Xue Bao

January 2025

School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.

Objectives: To explore the synthesis of high-quality synthetic CT (sCT) from cone-beam CT (CBCT) using PE-CycleGAN for adaptive radiotherapy (ART) for nasopharyngeal carcinoma.

Methods: A perception-enhanced CycleGAN model, "PE-CycleGAN", was proposed, introducing a dual-contrast discriminator loss, a multi-perceptual generator loss, and an improved U-Net structure. CBCT and CT data from 80 nasopharyngeal carcinoma patients were used as the training set, with 7 cases as the test set.


An intelligent transportation system (ITS) supports commercial and personal mobility through smart city (SC) communication paradigms with hassle-free information sharing. ITS designs and architectures have improved in recent years thanks to information and communication technologies. However, the information shared over the communication medium in SCs is exposed to adversarial risk, resulting in privacy issues.

