Infrared Sensation-Based Salient Targets Enhancement Methods in Low-Visibility Scenes.

Sensors (Basel)

School of Transportation Engineering, Tongji University, No. 4800 Caoan Road, Shanghai 201804, China.

Published: August 2022

Thermal imaging is an important technology for low-visibility environments, but because infrared images have blurred edges and low contrast, enhancement processing is essential. Existing enhancement algorithms based on pixel-level information largely ignore a salient feature of targets: temperature, which effectively separates targets by their color. Therefore, working from both the temperature and pixel features of infrared images, we first proposed a threshold denoising model based on wavelet transformation with bilateral filtering (WTBF). Second, we proposed a salient-component enhancement method based on a multi-scale retinex algorithm combined with frequency-tuned salient region extraction (MSRFT). Third, image contrast and the noise distribution were improved by using the salient orientation, color, and illuminance features of night or snow targets. Finally, the bounding-box accuracy on the enhanced images was tested with a pre-trained and improved object detector. The results show that the improved method reaches 90% accuracy on snow targets, and the average precision of the car and people categories improves across four low-visibility scenes, demonstrating the accuracy and adaptability of the proposed methods and their value for target detection, trajectory tracking, and danger warning in automobile driving.
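
To make the two enhancement stages above concrete, the following sketch gives a minimal, hypothetical reconstruction of the pipeline using only what the abstract states: wavelet-threshold denoising followed by bilateral filtering (the WTBF idea) and multi-scale retinex enhancement weighted by a frequency-tuned-style saliency map (the MSRFT idea). It relies on OpenCV and PyWavelets; the function names, parameter choices (wavelet type, retinex scales, filter sigmas), and the saliency-weighted blending step are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch, not the authors' code: WTBF denoising followed by
# an MSRFT-style enhancement, as loosely described in the abstract.
import cv2
import numpy as np
import pywt

def wtbf_denoise(gray, wavelet="db4", level=2):
    """Soft-threshold the wavelet detail coefficients of a uint8 infrared
    image, then apply an edge-preserving bilateral filter (assumed order)."""
    img = gray.astype(np.float32)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal sub-band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(coeffs, wavelet)[: img.shape[0], : img.shape[1]]
    rec = np.clip(rec, 0, 255).astype(np.uint8)
    return cv2.bilateralFilter(rec, 5, 25, 9)  # d, sigmaColor, sigmaSpace

def msrft_enhance(gray, sigmas=(15, 80, 250)):
    """Multi-scale retinex output blended toward the input where a
    frequency-tuned-style saliency map (grayscale analogue) is low."""
    img = gray.astype(np.float32) + 1.0
    msr = np.zeros_like(img)
    for s in sigmas:  # retinex: log(image) - log(smoothed illumination)
        msr += np.log(img) - np.log(cv2.GaussianBlur(img, (0, 0), s) + 1.0)
    msr = cv2.normalize(msr / len(sigmas), None, 0, 255, cv2.NORM_MINMAX)
    # Saliency as distance from the global mean of a lightly blurred image.
    blur = cv2.GaussianBlur(gray.astype(np.float32), (5, 5), 0)
    sal = cv2.normalize(np.abs(blur - blur.mean()), None, 0.0, 1.0, cv2.NORM_MINMAX)
    out = sal * msr + (1.0 - sal) * gray.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    ir = cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input path
    cv2.imwrite("enhanced.png", msrft_enhance(wtbf_denoise(ir)))

In the paper, the enhanced images are then fed to a pre-trained and improved object detector; that evaluation step is not reproduced in this sketch.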

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9370932
http://dx.doi.org/10.3390/s22155835

Publication Analysis

Top Keywords

low-visibility scenes (8); infrared images (8); snow targets (8); salient (5); targets (5); infrared sensation-based (4); sensation-based salient (4); salient targets (4); enhancement (4); targets enhancement (4)

Similar Publications

Deep CNNs have achieved impressive improvements in night-time self-supervised depth estimation from a monocular image. However, performance degrades considerably compared to day-time depth estimation due to significant domain gaps, low visibility, and varying illumination between day and night images. To address these challenges, we propose a novel night-time self-supervised monocular depth estimation framework with structure regularization.

Object detection is a central task in computer vision and performs well across many scenarios, but it typically requires favorable visibility conditions within the scene. It is therefore important to explore methods for object detection under low-visibility conditions.

DST-DETR: Image Dehazing RT-DETR for Safety Helmet Detection in Foggy Weather.

Sensors (Basel)

July 2024

School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China.

In foggy weather, outdoor safety helmet detection often suffers from low visibility and unclear objects, hindering optimal detector performance. Moreover, safety helmets typically appear as small objects at construction sites, prone to occlusion and difficult to distinguish from complex backgrounds, further exacerbating the detection challenge. Therefore, the real-time and precise detection of safety helmet usage among construction personnel, particularly in adverse weather conditions such as foggy weather, poses a significant challenge.

A distributed state observer is designed for state estimation and tracking of mobile robots amidst dynamic environments and occlusions within distributed LiDAR sensor networks. The proposed novel framework enhances three-dimensional bounding box detection and tracking utilizing a consensus-based information filter and a region of interest for state estimation of mobile robots. The framework enables the identification of the input to the dynamic process using remote sensing, enhancing the state prediction accuracy for low-visibility and occlusion scenarios in dynamic scenes.

MTIE-Net: Multi-technology fusion of low-light image enhancement network.

PLoS One

February 2024

School of Automation and Information Engineering, Sichuan University of Science & Engineering, Zigong, Sichuan Province, China.

Images captured in low-light scenes often suffer from low visibility, blurred details, and color distortion; enhancing them can markedly improve visual quality and provide favorable conditions for downstream vision tasks. In this study, we propose MTIE-Net, a multi-technology fusion low-light image enhancement network that modularizes the enhancement task. MTIE-Net consists of a residual dense decomposition network (RDD-Net) based on Retinex theory, an encoder-decoder denoising network (EDD-Net), and a parallel mixed attention-based self-calibrated illumination enhancement network (PCE-Net).
