Imaging through fog is valuable for many areas, such as autonomous driving and space exploration. However, strong backscattering and diffuse reflection in dense fog disrupt the temporal-spatial correlations of photons returning from the target, so the reconstruction quality of most existing methods degrades significantly under dense-fog conditions. In this study, we describe the optical scattering imaging process and propose a physics-driven Swin Transformer method that combines Time-of-Flight (ToF) and deep-learning principles to mitigate scattering effects and reconstruct targets hidden by heterogeneous dense fog. The results suggest that, despite the exponential decrease in the number of ballistic photons as the optical thickness of the fog increases, the physics-driven Swin Transformer method images targets obscured by dense fog with satisfactory quality. Importantly, even in experiments with optical thickness up to 3.0, higher than in previous studies, commonly used quantitative metrics such as PSNR and SSIM indicate that our method is state of the art for imaging through dense fog.
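The "exponential decrease in the number of ballistic photons" follows the Beer-Lambert law: the fraction of unscattered photons surviving a medium of optical thickness OT is exp(-OT), so at OT = 3.0 only about 5% of returning photons are ballistic. The sketch below illustrates that relationship and the kind of time-of-flight range gating the abstract alludes to; it is a minimal illustration, not the paper's method, and the target distance, gate width, and photon-histogram values are assumptions chosen for the example.

```python
import numpy as np

# Beer-Lambert attenuation: the fraction of ballistic (unscattered) photons
# surviving a medium of optical thickness OT is exp(-OT).
optical_thickness = np.array([0.5, 1.0, 2.0, 3.0])
ballistic_fraction = np.exp(-optical_thickness)
for ot, frac in zip(optical_thickness, ballistic_fraction):
    print(f"OT = {ot:.1f}: ballistic fraction ~ {frac:.3f}")
# At OT = 3.0 only ~5% of photons reach the sensor unscattered,
# which is why time gating plus a learned prior is needed.

# Illustrative time-of-flight gating: keep only photons whose arrival time
# matches the round-trip delay to the target (all values below are assumptions).
c = 3e8                      # speed of light, m/s
target_range = 25.0          # assumed target distance, m
round_trip = 2 * target_range / c
gate_width = 2e-9            # assumed 2 ns gate

def time_gate(arrival_times, photon_counts):
    """Sum the photon counts that fall inside the range gate."""
    mask = np.abs(arrival_times - round_trip) <= gate_width / 2
    return photon_counts[mask].sum()

# Toy histogram: early backscatter from fog plus a weak return from the target.
times = np.linspace(0, 400e-9, 4000)
backscatter = 200 * np.exp(-times / 30e-9)                 # fog clutter near t = 0
signal = 5 * np.exp(-((times - round_trip) / 1e-9) ** 2)   # target return
print("gated counts:", time_gate(times, backscatter + signal))
```

Gating suppresses the early backscatter peak but cannot recover the structure lost to scattering, which is the part the learned reconstruction network addresses.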

Source: http://dx.doi.org/10.1364/OE.519662

Publication Analysis

Top Keywords

dense fog (28), physics-driven swin (12), swin transformer (12), fog (9), imaging dense (8), transformer method (8), optical thickness (8), dense (7), imaging (5), time-gated imaging (4)

Similar Publications

Autonomous vehicles, often known as self-driving cars, have emerged as a disruptive technology promising safer, more efficient, and more convenient transportation. Existing works achieve reasonable results but lack effective solutions: accumulation on roads can obscure lane markings and traffic signs, making it difficult for a self-driving car to navigate safely. Heavy rain, snow, fog, or dust storms can severely limit the car's sensors' ability to detect obstacles, pedestrians, and other vehicles, posing potential safety risks.


Dense-TNT: Efficient Vehicle Type Classification Neural Network Using Satellite Imagery.

Sensors (Basel)

November 2024

School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore.

Accurate vehicle type classification plays a significant role in intelligent transportation systems. It is critical for understanding road conditions and typically helps traffic-light control systems respond appropriately to alleviate congestion. New technologies and comprehensive data sources, such as aerial photos and remote sensing data, provide richer and higher-dimensional information.

Article Synopsis

• The research analyzed how different levels of fog concentration affected transmission quality, including path loss, signal quality, and error rates, revealing that higher fog concentration hindered VLC performance more than DUVLC.
• The findings demonstrated that a combined VLC and DUVLC system could effectively transmit information in foggy conditions, suggesting its potential use in complex outdoor scenarios where fog might impact communication.
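The fog-induced path loss referred to above is commonly approximated, for free-space optical and visible-light links, with the Kim visibility model, in which the specific attenuation in dB/km depends on meteorological visibility and wavelength. The sketch below is a generic illustration of that model under assumed visibility values and an assumed LED wavelength; it is not the channel model used in the cited study.

```python
import math  # noqa: F401  (kept for clarity; only basic arithmetic is needed)

def kim_attenuation_db_per_km(visibility_km: float, wavelength_nm: float = 550.0) -> float:
    """Specific attenuation (dB/km) of an optical link from the Kim visibility model."""
    v = visibility_km
    if v > 50:
        q = 1.6
    elif v > 6:
        q = 1.3
    elif v > 1:
        q = 0.16 * v + 0.34
    elif v > 0.5:
        q = v - 0.5
    else:
        q = 0.0
    return (3.91 / v) * (wavelength_nm / 550.0) ** (-q)

# Dense fog (low visibility) attenuates a visible-light link far more than clear air.
for vis in (10.0, 1.0, 0.2, 0.05):                               # assumed visibilities, km
    loss = kim_attenuation_db_per_km(vis, wavelength_nm=450.0)   # assumed blue-LED wavelength
    print(f"visibility {vis:5.2f} km -> {loss:8.1f} dB/km")
```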

Computational defogging using machine learning presents significant potential; however, its progress is hindered by the scarcity of large-scale datasets comprising real-world paired images with sufficiently dense fog. To address this limitation, we developed a binocular imaging system and introduced Stereofog, an open-source dataset comprising 10,067 paired clear and foggy images, with a majority captured under dense fog conditions. Utilizing this dataset, we trained a pix2pix image-to-image (I2I) translation model and achieved a complex wavelet structural similarity index (CW-SSIM) exceeding 0.
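Both the main article (PSNR/SSIM) and the Stereofog work (CW-SSIM) score reconstructions against a paired clear reference. A minimal sketch of that evaluation step is below; it uses scikit-image's standard SSIM as a simpler stand-in for the complex-wavelet variant reported above, and the random images merely stand in for a real clear/defogged pair.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_pair(reference: np.ndarray, restored: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for a clear reference image and a defogged output."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255, channel_axis=-1)
    return psnr, ssim

# Toy example: random uint8 RGB images standing in for a real clear/defogged pair.
rng = np.random.default_rng(0)
clear = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
noise = rng.integers(-10, 11, clear.shape)
defogged = np.clip(clear.astype(np.int16) + noise, 0, 255).astype(np.uint8)

psnr, ssim = score_pair(clear, defogged)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```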


Robust segmentation performance under dense fog is crucial for autonomous driving, but collecting labeled real-world foggy-scene datasets is burdensome. To this end, existing methods have adapted models trained on labeled clear-weather images to the unlabeled real foggy domain. However, these approaches require intermediate domain datasets (e.

