Most state-of-the-art defogging models in the literature assume that the attenuation coefficient is the same across all spectral channels, which inevitably leads to spectral distortion and information bias. To address this issue, this paper proposes a defogging method that accounts for the differences between the extinction coefficients of the multispectral channels of light traveling through fog. The spatially distributed transmission map of each spectral channel is then reconstructed to restore the fog-degraded images. Experimental results on various realistic complex scenes show that, compared with state-of-the-art techniques, the proposed method is better at restoring lost detail, compensating for degraded spectral information, and revealing targets hidden in uniform ground fog. In addition, this work provides a way to characterize an intrinsic property of fog, its multispectral relative extinction coefficients, which serve as a foundation for further reconstruction of multispectral information.
DOI: http://dx.doi.org/10.1364/JOSAA.511058
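The abstract does not reproduce the estimation procedure, but the core idea, per-channel transmission maps derived from relative extinction coefficients under the standard scattering model I = J*t + A*(1 - t), can be sketched as follows. This is a minimal illustration, not the paper's code; the names defog_multispectral, rel_beta, and t_ref are assumptions.

```python
import numpy as np

def defog_multispectral(img, airlight, rel_beta, t_ref, t_min=0.1):
    """Restore a fog-degraded multispectral image channel by channel.

    img      : (H, W, C) fog-degraded intensities in [0, 1]
    airlight : (C,) per-channel atmospheric light estimate
    rel_beta : (C,) extinction coefficient of each channel relative to the
               reference channel (illustrative values, not measured ones)
    t_ref    : (H, W) transmission map estimated for the reference channel
    """
    restored = np.empty_like(img)
    for c in range(img.shape[-1]):
        # Since t_c = exp(-beta_c * d) and t_ref = exp(-beta_ref * d),
        # t_c = t_ref ** (beta_c / beta_ref): one reference map yields a
        # distinct, spatially varying transmission map per channel.
        t_c = np.clip(t_ref ** rel_beta[c], t_min, 1.0)
        # Invert the scattering model I = J * t + A * (1 - t)
        restored[..., c] = (img[..., c] - airlight[c]) / t_c + airlight[c]
    return np.clip(restored, 0.0, 1.0)
```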
Chem Soc Rev
January 2025
School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin 300350, P. R. China.
Inspiration from nature has long guided the development of advanced science and technology. To survive in complicated and harsh environments, plants and animals have evolved remarkable capabilities to control fluid transfer through sophisticated designs such as wettability contrast, oriented micro-/nano-structures, and geometry gradients. Owing to these bioinspired structures, on-surface fluid manipulation can be spontaneous, continuous, smart, and integrated, promoting applications in heat transfer, microfluidics, heterogeneous catalysis, and water harvesting, among others. Although fluid manipulating interfaces (FMIs) have provided plenty of ideas for optimizing current systems, a comprehensive review of their history, classification, fabrication, and integration, with a focus on interfacial chemistry and asymmetric structure, is still needed.
Front Robot AI
December 2024
School of Electrical and Electronic Engineering, University of Sheffield, Sheffield, United Kingdom.
This paper proposes a solution to the challenging task of autonomously landing Unmanned Aerial Vehicles (UAVs). An onboard computer-vision module links the vision system to the ground-control communication and video-server connections. The vision platform extracts features using Speeded Up Robust Features (SURF), followed by fast Structured Forests edge detection and Kalman-filter smoothing for accurate prediction of the runway sidelines.
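The abstract names the building blocks but not how they connect; a minimal OpenCV sketch of such a pipeline, under stated assumptions, might look like the following. It assumes an opencv-contrib build (SURF and Structured Forests live there, and SURF additionally requires the non-free modules), a pretrained edge model file whose "model.yml.gz" path is a placeholder, and a Hough transform as a stand-in for the paper's unspecified sideline-extraction step.

```python
import cv2
import numpy as np

# SURF lives in opencv-contrib (cv2.xfeatures2d); Structured Forests edge
# detection needs a pretrained model file (this path is a placeholder).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
edge_detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

# Constant-velocity Kalman filter over the sideline parameters (rho, theta):
# state = [rho, theta, d_rho, d_theta], measurement = [rho, theta].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_sideline(frame_bgr):
    """One frame of the assumed pipeline: features, edges, smoothed line."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = surf.detectAndCompute(gray, None)  # SURF features
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = edge_detector.detectEdges(rgb)  # soft edge map in [0, 1]
    # Hough transform as a stand-in for sideline extraction from the edge map
    lines = cv2.HoughLines((edges > 0.3).astype(np.uint8) * 255,
                           1, np.pi / 180, 150)
    kf.predict()
    if lines is not None:
        rho, theta = lines[0][0]
        kf.correct(np.array([[rho], [theta]], np.float32))
    return kf.statePost[:2].ravel()  # smoothed (rho, theta) of the sideline
```

Filtering the line parameters (rho, theta) rather than pixel positions keeps the state small and makes the constant-velocity assumption reasonable between consecutive frames.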
Cancer Imaging
November 2024
Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1750 Haygood Drive NE, Atlanta, Georgia, 30322, USA.
Advances in cancer diagnosis and treatment have substantially improved patient outcomes and survival in recent years. However, up to 75% of cancer patients and survivors, including those with non-central nervous system (non-CNS) cancers, suffer from "brain fog": impairments in cognitive functions such as attention, memory, learning, and decision-making. While the impact of cancer-related cognitive impairment (CRCI) is recognized, its causes, its mechanisms, and the interplay of the factors involved have not been fully investigated or understood.
IEEE Trans Image Process
October 2024
Robust segmentation performance under dense fog is crucial for autonomous driving, but collecting labeled real foggy-scene datasets is burdensome. To this end, existing methods adapt models trained on labeled clear-weather images to the unlabeled real foggy domain. However, these approaches require intermediate domain datasets (e.g., …).
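The snippet does not say how such intermediate datasets are built; one common recipe (an assumption here, not necessarily this paper's) is to render synthetic fog onto labeled clear-weather images with the same scattering model used for defogging, so the segmentation labels carry over unchanged:

```python
import numpy as np

def synthesize_fog(clear_img, depth, beta=1.2, airlight=0.9):
    """Render synthetic fog onto a labeled clear-weather image.

    clear_img : (H, W, 3) float array in [0, 1]
    depth     : (H, W) scene depth map (e.g., from stereo or a depth sensor);
                the segmentation labels stay valid, since fog only changes
                appearance, not class identity.
    """
    t = np.exp(-beta * depth)[..., None]         # transmission from depth
    return clear_img * t + airlight * (1.0 - t)  # I = J * t + A * (1 - t)
```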
Sensors (Basel)
September 2024
Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China.
With the rapid growth in demand for security surveillance, assisted driving, and remote sensing, object detection networks with robust environmental perception and high detection accuracy have become a research focus. However, single-modality image detection is limited in environmental adaptability: lighting conditions, fog, rain, and obstacles such as vegetation cause information loss and reduce detection accuracy. To address these challenges, we propose IV-YOLO, an object detection network that fuses features from visible-light and infrared images.
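The abstract gives no architectural details for IV-YOLO, so the following PyTorch sketch shows only the generic visible-plus-infrared fusion idea: one convolutional stem per modality, concatenated and mixed by a 1x1 convolution before a detection head. The class and layer names (TwoStreamFusion, vis_stem, ir_stem, fuse) are hypothetical, not the published design.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Illustrative dual-branch stub: separate convolutional stems for a
    visible (3-channel) and an infrared (1-channel) input, fused by
    concatenation and a 1x1 convolution."""

    def __init__(self, out_channels=64):
        super().__init__()
        self.vis_stem = nn.Sequential(
            nn.Conv2d(3, out_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels), nn.SiLU())
        self.ir_stem = nn.Sequential(
            nn.Conv2d(1, out_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels), nn.SiLU())
        # 1x1 conv mixes the concatenated modality features
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, 1)

    def forward(self, visible, infrared):
        fused = torch.cat([self.vis_stem(visible),
                           self.ir_stem(infrared)], dim=1)
        return self.fuse(fused)  # fused feature map for a detection head

# Example: a 640x640 RGB frame paired with an aligned infrared frame
model = TwoStreamFusion()
feats = model(torch.rand(1, 3, 640, 640), torch.rand(1, 1, 640, 640))
```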