Which visibility indicators best represent a population's preference for a level of visual air quality?

J Air Waste Manag Assoc

Air Resources Division, National Park Service, Lakewood, CO, USA.

Published: February 2019

Several studies have been carried out over the past 20 or so years to assess the level of visual air quality that is judged to be acceptable in urban settings. Groups of individuals were shown slides or computer-projected scenes under a variety of haze conditions and asked to judge whether each image represented acceptable visual air quality. The goal was to assess the level of haziness found to be acceptable for purposes of setting an urban visibility regulatory standard. More recently, similar studies were carried out in Beijing, China, and in the more pristine settings of Grand Canyon National Park and the Great Gulf Wilderness. The studies clearly showed that when preference ratings were compared to measures of atmospheric haze such as atmospheric extinction, visual range, or deciview (dv), no single indicator represented acceptable levels of visual air quality across the varied urban or more remote settings. For instance, for a Washington, D.C., setting, 50% of the observers rated the landscape feature as not having acceptable visual air quality at an extinction of 0.19 km⁻¹ (21 km visual range, 29 dv), while the 50% acceptability point for a Denver, Colorado, setting was 0.075 km⁻¹ (52 km visual range, 20 dv) and for the Grand Canyon it was 0.023 km⁻¹ (170 km visual range, 7 dv). Over the past three or four decades, many scene-specific visibility indices have been put forth as potential indicators of visibility levels as perceived by human observers. They include, but are not limited to, color and achromatic contrast of single landscape features, average and equivalent contrast of the entire image, edge detection algorithms such as the Sobel index, and just-noticeable difference or change indices. This paper explores various scene-specific visual air quality indices and examines their applicability for use in quantifying visibility preference levels and judgments of visual air quality.

Implications: Visibility acceptability studies clearly show that visibility becomes more unacceptable as haze increases. However, there are large variations in the preference levels for different scenes when universal haze indicators, such as atmospheric extinction, are used. This variability is significantly reduced when the sky-landscape contrast of the more distant landscape features in the observed scene is used. Analyses suggest that about 50% of individuals would find the visibility unacceptable if at any time the more distant landscape features nearly disappear, that is, when they are at a distance near the visual range. This common metric could form the basis for setting an urban visibility standard.
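
For readers who want to relate the haze measures quoted above, the sketch below (Python) applies the standard Koschmieder relation between extinction and visual range, the standard deciview definition, and the exponential attenuation of sky-landscape contrast with distance that underlies the contrast metric discussed in the Implications. It is a minimal illustration of these textbook relations, not code from the study, and it only approximately reproduces the rounded values quoted in the abstract.

```python
import math

RAYLEIGH_KM = 0.01  # reference extinction in the deciview formula, km^-1 (~10 Mm^-1)

def visual_range_km(b_ext_km: float) -> float:
    """Koschmieder relation: distance at which a large dark feature's
    apparent contrast against the horizon sky falls to the ~2% threshold."""
    return 3.912 / b_ext_km

def deciview(b_ext_km: float) -> float:
    """Deciview haze index: dv = 10 * ln(b_ext / 0.01 km^-1)."""
    return 10.0 * math.log(b_ext_km / RAYLEIGH_KM)

def apparent_contrast(inherent_contrast: float, b_ext_km: float, distance_km: float) -> float:
    """Apparent sky-landscape contrast of a feature at a given distance;
    the inherent contrast is attenuated exponentially by extinction."""
    return inherent_contrast * math.exp(-b_ext_km * distance_km)

if __name__ == "__main__":
    # 50% acceptability points quoted in the abstract (extinction in km^-1)
    for scene, b_ext in [("Washington, D.C.", 0.19),
                         ("Denver", 0.075),
                         ("Grand Canyon", 0.023)]:
        print(f"{scene}: {visual_range_km(b_ext):.0f} km visual range, "
              f"{deciview(b_ext):.1f} dv")
```

Because the apparent contrast of a feature located at the visual range is, by Koschmieder's definition, near the perceptibility threshold, a sky-landscape contrast metric for the most distant features ties the "features nearly disappear" criterion to a roughly common value across scenes, which is consistent with the reduced variability reported in the Implications.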

Source: http://dx.doi.org/10.1080/10962247.2018.1506370


Similar Publications

Although the Transformer architecture has established itself as the industry standard for natural language processing tasks, it still has few applications in computer vision. In vision, attention is used either in conjunction with convolutional networks or to replace individual convolutional network components while preserving the overall network design. Differences between the two domains, such as large variations in the scale of visual entities and the higher granularity of pixels in images compared with words in text, make it difficult to transfer the Transformer from language to vision.


An Iterative Design Method for Advancing Air Traffic Control and Management Training Through Immersive VFR 3D Map Visualization.

IISE Trans Occup Ergon Hum Factors

January 2025

The Bradley Department of Electrical and Computer Engineering, College of Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA.

OCCUPATIONAL APPLICATIONS: Innovative tools that align with modern learners' preferences are essential for training in safety-critical professions like Air Traffic Control/Management. This study evaluated a Virtual Reality Visual Flight Rules 3D Map Visualization Tool designed to meet the Federal Aviation Administration's (FAA) modernization goals. The tool immerses trainees in contextually accurate environments, enhancing engagement and self-paced learning.


Multi-Person Localization Based on a Thermopile Array Sensor with Machine Learning and a Generative Data Model.

Sensors (Basel)

January 2025

Laboratory of Adaptive Lighting Systems and Visual Processing, Technical University of Darmstadt, Hochschulstr. 4a, 64289 Darmstadt, Germany.

Thermopile sensor arrays provide a sufficient counterbalance between person detection and localization while preserving privacy through low resolution. The latter is especially important in the context of smart building automation applications. Current research has shown that there are two machine learning-based algorithms that are particularly prominent for general object detection: You Only Look Once (YOLOv5) and Detection Transformer (DETR).


Background: This study investigates the therapeutic efficacy of dynamic neuromuscular stabilization (DNS) technology paired with Kinesio Taping (KT) in patients with persistent nonspecific low back pain, as well as their effect on neuromuscular function and pain self-efficacy.

Methods: A randomized controlled clinical study was conducted to collect clinical data on DNS combined with KT for the treatment of chronic nonspecific low back pain from November 2023 to April 2024. The inclusion criteria were patients with chronic nonspecific low back pain, aged between 18 and 30 years, and without serious underlying medical conditions, such as cardiac disease, hypertension, and diabetes.


Background: Spatial working memory is crucial for processing visual and spatial information, serving as a foundation for complex cognitive tasks. However, the effects of prolonged sleep deprivation on its dynamics and underlying neural mechanisms remain unclear. This study aims to investigate the specific trends and neural mechanisms underlying spatial working memory alterations during 36 h of acute sleep deprivation.

