Low-light image enhancement (LIME) aims to convert images with unsatisfactory lighting into well-exposed ones. Unlike existing methods, which manipulate illumination in uncontrollable ways, we propose a flexible framework that takes user-specified guide images as references to improve practicability. To achieve this goal, this article models an image as the combination of two components, scene content and exposure attribute, from an information-decoupling perspective. Specifically, we first adopt a content encoder and an attribute encoder to disentangle the two components. Then, we combine the scene content of the low-light image with the exposure attribute of the guide image and reconstruct the enhanced image through a generator. Extensive experiments on public datasets demonstrate the superiority of our approach over state-of-the-art alternatives. In particular, the proposed method allows users to enhance images according to their preferences by providing specific guide images. Our source code and the pretrained model are available at https://github.com/Linfeng-Tang/DRLIE.
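The disentangle-and-recombine pipeline described in the abstract can be illustrated with a toy sketch. The functions and the dict-based "image" representation below are hypothetical stand-ins for illustration only; the actual DRLIE method uses learned convolutional encoders and a generator network.

```python
# Toy illustration of the disentangle-and-recombine idea from the abstract.
# An "image" here is just a dict with a scene-content field and an exposure
# field; the real method learns these representations with neural networks.

def content_encoder(image):
    """Extract scene content (stand-in for the learned content encoder)."""
    return image["content"]

def attribute_encoder(image):
    """Extract the exposure attribute (stand-in for the attribute encoder)."""
    return image["exposure"]

def generator(content, exposure):
    """Recombine scene content with a (possibly different) exposure attribute."""
    return {"content": content, "exposure": exposure}

low_light = {"content": "street scene", "exposure": "dark"}
guide = {"content": "portrait", "exposure": "well-lit"}

# Enhanced result keeps the low-light image's content but adopts the
# guide image's exposure attribute.
enhanced = generator(content_encoder(low_light), attribute_encoder(guide))
print(enhanced)
```

Supplying a different guide image changes only the exposure attribute fed to the generator, which is what makes the enhancement user-controllable.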


Source: http://dx.doi.org/10.1109/TNNLS.2022.3190880


Similar Publications

Introduction: Pests are important factors affecting the growth of cotton, and accurately detecting cotton pests under complex natural conditions, such as low-light environments, remains a challenge. This paper proposes a cotton pest detection method for low-light environments, DCP-YOLOv7x, based on YOLOv7x, to address the degraded image quality, difficult feature extraction, and low detection precision of cotton pests in such conditions.

Methods: The DCP-YOLOv7x method first enhances low-quality cotton pest images using FFDNet (Fast and Flexible Denoising Convolutional Neural Network) and the EnlightenGAN low-light image enhancement network.


Nocturnal and crepuscular fast-eyed insects often exploit multiple optical channels and temporal summation for fast, low-light imaging. Here, we report a high-speed and highly sensitive microlens array camera (HS-MAC), inspired by the multiple optical channels and temporal summation of insect vision. HS-MAC features cross-talk-free offset microlens arrays on a single rolling-shutter CMOS image sensor and performs high-speed, high-sensitivity imaging by using channel fragmentation, temporal summation, and compressive frame reconstruction.


Spinning coding masks, recognized for their fast modulation rate and cost-effectiveness, are now often used in real-time single-pixel imaging (SPI). However, in the photon-counting regime, they encounter difficulties in synchronization between the coding mask patterns and the photon detector, unlike digital micromirror devices. To address this issue, we propose a scheme that assumes a constant disk rotation speed throughout each cycle and models photon detection as a non-homogeneous Poisson process (NHPP).


Stable and Lead-Free Perovskite Hemispherical Photodetector for Vivid Fourier Imaging.

Adv Sci (Weinh)

December 2024

State Key Laboratory of Supramolecular Structure and Materials, College of Chemistry, Jilin University, Changchun, 130012, P.R. China.

The filterless single-pixel imaging technology is anticipated to hold tremendous competitiveness in diverse imaging applications. Nevertheless, achieving single-pixel color imaging without a filter remains a formidable challenge. Here a lead-free perovskite hemispherical photodetector is reported for filterless single-pixel color imaging.


EC-WAMI: Event Camera-Based Pose Optimization in Remote Sensing and Wide-Area Motion Imagery.

Sensors (Basel)

November 2024

Artificial Intelligence and Robotics Lab (AIRLab), Department of Computer Science, Saint Louis University, Saint Louis, MO 63103, USA.

In this paper, we present EC-WAMI, the first successful application of neuromorphic event cameras (ECs) to Wide-Area Motion Imagery (WAMI) and Remote Sensing (RS), showcasing their potential for advancing Structure-from-Motion (SfM) and 3D reconstruction across diverse imaging scenarios. ECs, which detect asynchronous pixel-level brightness changes, offer key advantages over traditional frame-based sensors, such as high temporal resolution, low power consumption, and resilience to dynamic lighting. These capabilities allow ECs to overcome challenges such as glare, uneven lighting, and low-light conditions that are common in aerial imaging and remote sensing, while also extending UAV flight endurance.

