Most state-of-the-art defogging models in the literature assume that the attenuation coefficient is identical across all spectral channels, which inevitably leads to spectral distortion and information bias. To address this issue, this paper proposes a defogging method that accounts for the differences between the extinction coefficients of the multispectral channels of light traveling through fog. The spatially distributed transmission map of each spectral channel is then reconstructed to restore the fog-degraded images. Experimental results on various realistic complex scenes show that, compared with state-of-the-art techniques, the proposed method is better at restoring lost detail, compensating for degraded spectral information, and revealing targets hidden in uniform ground fog. In addition, this work provides a method to characterize an intrinsic property of fog, expressed as multispectral relative extinction coefficients, which serve as a foundation for further reconstruction of multispectral information.
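The abstract does not spell out the reconstruction procedure, but the underlying idea can be sketched with the standard atmospheric scattering model, I(λ) = J(λ)·t(λ) + A(λ)·(1 − t(λ)), where each band gets its own transmission map t(λ) = exp(−β(λ)·d). The sketch below is an assumption-based illustration, not the authors' implementation; the names defog_multispectral, beta_rel, and depth are hypothetical, and the paper derives the relative extinction coefficients from the fog itself.

```python
# Minimal sketch (not the authors' implementation): per-band restoration under
# the atmospheric scattering model with wavelength-dependent extinction.
# `beta_rel` (relative extinction per band) and `depth` (relative optical
# depth) are assumed inputs for illustration.
import numpy as np

def defog_multispectral(I, A, beta_rel, depth, t_min=0.05):
    """Restore scene radiance J from a fog-degraded multispectral image I.

    I        : (H, W, C) observed image, channels = spectral bands
    A        : (C,) airlight estimate per band
    beta_rel : (C,) relative extinction coefficient per band
    depth    : (H, W) relative optical depth (e.g., from a reference band)
    """
    J = np.empty_like(I, dtype=np.float64)
    for c in range(I.shape[2]):
        # Per-band transmission map: t_c(x) = exp(-beta_c * d(x))
        t_c = np.exp(-beta_rel[c] * depth)
        t_c = np.clip(t_c, t_min, 1.0)          # avoid division blow-up
        # Invert I = J*t + A*(1 - t) for each band separately
        J[..., c] = (I[..., c] - A[c]) / t_c + A[c]
    return np.clip(J, 0.0, 1.0)
```

The key departure from single-coefficient dehazing is that beta_rel differs per band, so each spectral channel is restored with its own transmission map instead of a shared one.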

Source: http://dx.doi.org/10.1364/JOSAA.511058

Publication Analysis

Top Keywords
fog state-of-the-art (8); extinction coefficients (8); multispectral (4); multispectral image (4); image defogging (4); defogging based (4); based wavelength-dependent (4); wavelength-dependent extinction (4); extinction coefficient (4); coefficient model (4)

Similar Publications

Interfacial fluid manipulation with bioinspired strategies: special wettability and asymmetric structures.

Chem Soc Rev

January 2025

School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin 300350, P. R. China.

Inspiration from nature has long guided the development of advanced science and technology. To survive in complicated and harsh environments, plants and animals have evolved remarkable capabilities to control fluid transfer through sophisticated designs such as wettability contrast, oriented micro-/nano-structures, and geometry gradients. Based on these bioinspired structures, on-surface fluid manipulation exhibits spontaneous, continuous, smart, and integrated performance, which can promote applications in fields such as heat transfer, microfluidics, heterogeneous catalysis, and water harvesting. Although fluid manipulating interfaces (FMIs) have provided plenty of ideas for optimizing current systems, a comprehensive review of their history, classification, fabrication, and integration, focusing on interfacial chemistry and asymmetric structures, is still needed.


This paper proposes a solution to the challenging task of autonomously landing Unmanned Aerial Vehicles (UAVs). An onboard computer vision module integrates the vision system with the ground-control communication and video-server connections. The vision platform performs feature extraction with Speeded Up Robust Features (SURF), followed by fast Structured Forests edge detection and Kalman-filter smoothing for accurate runway sideline prediction.
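The snippet only names the pipeline's components, so the following is a generic OpenCV sketch of how such a chain could be wired together, not the authors' system. The pretrained structured-edge model file, the Hough-based sideline extraction, and the (rho, theta) state layout of the Kalman filter are assumptions; SURF and the ximgproc module require an opencv-contrib build.

```python
# Rough sketch: SURF features, Structured Forests edges, Kalman smoothing of a
# detected runway sideline. "model.yml" is an assumed pretrained edge model.
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)                  # contrib module
edge_detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml")   # assumed model file

# Kalman filter tracking one line as (rho, theta) plus their rates of change
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

def process_frame(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = surf.detectAndCompute(gray, None)

    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = edge_detector.detectEdges(rgb)                    # float edge map in [0, 1]

    # Pick the strongest line candidate from the edge map as the sideline measurement
    lines = cv2.HoughLines((edges > 0.3).astype(np.uint8) * 255, 1, np.pi / 180, 150)
    kf.predict()
    if lines is not None:
        rho, theta = lines[0][0]
        kf.correct(np.array([[rho], [theta]], np.float32))
    smoothed_rho, smoothed_theta = kf.statePost[:2].ravel()
    return keypoints, (smoothed_rho, smoothed_theta)
```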


Advances in cancer diagnosis and treatment have substantially improved patient outcomes and survival in recent years. However, up to 75% of cancer patients and survivors, including those with non-central nervous system (non-CNS) cancers, suffer from "brain fog": impairments in cognitive functions such as attention, memory, learning, and decision-making. While the impact of cancer-related cognitive impairment (CRCI) is well recognized, its causes, mechanisms, and the interplay of the various contributing factors have not been fully investigated or understood.


Robust segmentation performance under dense fog is crucial for autonomous driving, but collecting labeled real foggy scene datasets is burdensome in the real world. To address this, existing methods have adapted models trained on labeled clear-weather images to the unlabeled real foggy domain. However, these approaches require intermediate domain datasets (e.


IV-YOLO: A Lightweight Dual-Branch Object Detection Network.

Sensors (Basel)

September 2024

Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China.

With the rapid growth in demand for security surveillance, assisted driving, and remote sensing, object detection networks with robust environmental perception and high detection accuracy have become a research focus. However, single-modality image detection technologies have limited environmental adaptability: they are often affected by factors such as lighting conditions, fog, rain, and obstacles like vegetation, leading to information loss and reduced detection accuracy. To address these challenges, we propose IV-YOLO, an object detection network that integrates features from visible-light and infrared images.
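The snippet does not describe IV-YOLO's architecture, so the toy PyTorch module below only illustrates the general dual-branch idea it alludes to: separate lightweight backbones for visible and infrared inputs whose feature maps are fused before a shared detection head. All layer sizes and names here are invented for illustration and do not reflect the actual IV-YOLO design.

```python
# Toy dual-branch visible/infrared fusion sketch (not the actual IV-YOLO).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.SiLU(inplace=True),
    )

class DualBranchFusion(nn.Module):
    def __init__(self, num_outputs=64):
        super().__init__()
        self.rgb_branch = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.ir_branch = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.fuse = nn.Conv2d(64, num_outputs, kernel_size=1)   # channel-wise fusion

    def forward(self, rgb, ir):
        f_rgb = self.rgb_branch(rgb)            # (N, 32, H/4, W/4)
        f_ir = self.ir_branch(ir)               # (N, 32, H/4, W/4)
        fused = torch.cat([f_rgb, f_ir], dim=1) # concatenate modality features
        return self.fuse(fused)                 # features for a detection head

# Example: DualBranchFusion()(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
```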

