Defect detection requires highly sensitive and robust inspection methods. This study shows that non-overlapping illumination patterns can improve the noise robustness of deep learning ghost imaging (DLGI) without modifying the convolutional neural network (CNN). Ghost imaging (GI) can be accelerated by combining it with deep learning, but the robustness of DLGI decreases in exchange for the higher speed. Using non-overlapping patterns reduces the effect of noise on the input data fed to the CNN. This study evaluates the robustness of DLGI with non-overlapping patterns generated based on binary notation. The results show that non-overlapping patterns improve the position accuracy by up to 51%, enabling defect positions to be detected more accurately in noisy environments.

Source: http://dx.doi.org/10.1364/AO.470770
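To make the role of pattern design concrete, the following minimal NumPy sketch simulates computational ghost-imaging bucket measurements with overlapping random patterns and with non-overlapping (disjoint-support) patterns, and applies a classical correlation reconstruction in place of the paper's CNN. The scene, pattern count, and noise level are illustrative assumptions, and the paper's binary-notation pattern construction is not reproduced here.

# Minimal sketch (assumptions only): ghost-imaging measurements with
# overlapping vs. non-overlapping binary patterns, reconstructed with the
# classical correlation estimate instead of the paper's CNN.
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16               # flattened scene pixels, number of patterns
NOISE_STD = 0.05            # assumed additive Gaussian detector noise

scene = np.zeros(N)
scene[37] = 1.0             # hypothetical bright "defect" pixel

# (a) Overlapping random binary patterns: each pixel lit with probability 0.5.
P_overlap = rng.integers(0, 2, size=(M, N)).astype(float)

# (b) Non-overlapping patterns: pixels are partitioned into M disjoint blocks,
# so each pixel is illuminated by exactly one pattern and noise on one bucket
# value cannot leak into the others.
P_disjoint = np.zeros((M, N))
for k, block in enumerate(np.array_split(np.arange(N), M)):
    P_disjoint[k, block] = 1.0

def bucket_values(patterns, x):
    """Single-pixel (bucket) detector readings with additive noise."""
    return patterns @ x + rng.normal(0.0, NOISE_STD, size=patterns.shape[0])

def correlation_gi(patterns, y):
    """Classical correlation-based GI reconstruction (CNN stand-in)."""
    return (y - y.mean()) @ patterns / len(y)

for name, P in [("overlapping", P_overlap), ("non-overlapping", P_disjoint)]:
    estimate = correlation_gi(P, bucket_values(P, scene))
    print(f"{name:15s} -> estimated defect pixel: {int(np.argmax(estimate))}")

With the disjoint-support patterns, each noisy bucket value only affects the pixels of its own block, which is the intuition behind the improved robustness reported in the abstract.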

Publication Analysis

Top Keywords

deep learning (12), ghost imaging (12), non-overlapping patterns (12), learning ghost (8), patterns improve (8), non-overlapping (5), noise-robust deep (4), imaging non-overlapping (4), non-overlapping pattern (4), pattern defect (4)

Similar Publications

Obstructive sleep apnea (OSA) is widespread, under-recognized, and under-treated, impacting the health and quality of life of millions. The current gold standard for sleep apnea testing is the in-lab sleep study, which is costly, cumbersome, and not readily available, and represents a well-known roadblock to managing this huge societal burden. Assessment of the neuromuscular function of the upper airway using electromyography (EMG) has shown potential to characterize and diagnose sleep apnea, and the development of transmembranous electromyography (tmEMG), a painless surface probe, has made this approach practical and feasible.


Adaptive deep feature representation learning for cross-subject EEG decoding.

BMC Bioinformatics

December 2024

College of Computer and Information Engineering/College of Artificial Intelligence, Nanjing Tech University, Nanjing, 210093, China.

Background: The collection of substantial amounts of electroencephalogram (EEG) data is typically time-consuming and labor-intensive, which adversely impacts the development of decoding models with strong generalizability, particularly when the available data is limited. Utilizing sufficient EEG data from other subjects to aid in modeling the target subject presents a potential solution, commonly referred to as domain adaptation. Most current domain adaptation techniques for EEG decoding primarily focus on learning shared feature representations through domain alignment strategies.
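As a concrete illustration of the general idea described above, the sketch below applies one widely used domain-alignment strategy, correlation alignment (CORAL), which matches the second-order statistics of source and target features. It is not the adaptive method proposed in the cited paper, and the "EEG features" are random stand-ins.

# Hedged sketch: CORAL domain alignment on synthetic stand-in features.
import numpy as np

rng = np.random.default_rng(0)

def _sym_power(C, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** p) @ V.T

def coral_align(Xs, Xt, eps=1e-5):
    """Whiten centered source features, re-color with the target covariance."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)
    return (Xs - Xs.mean(0)) @ _sym_power(Cs, -0.5) @ _sym_power(Ct, 0.5) + Xt.mean(0)

# Illustrative stand-in "EEG features": the target subject's features are a
# linearly distorted, shifted copy of a source subject's feature distribution.
Xs = rng.normal(size=(200, 8))
A = rng.normal(size=(8, 8)) * 0.3 + np.eye(8)
Xt = rng.normal(size=(200, 8)) @ A + 1.5

Xs_aligned = coral_align(Xs, Xt)
gap_before = np.linalg.norm(np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False))
gap_after = np.linalg.norm(np.cov(Xs_aligned, rowvar=False) - np.cov(Xt, rowvar=False))
print(f"covariance gap before alignment: {gap_before:.3f}")
print(f"covariance gap after alignment:  {gap_after:.3f}")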


Optical approaches to monitoring neural activity are transforming neuroscience, owing to a fast-evolving palette of genetically encoded molecular reporters. However, the field still requires robust, label-free technologies to monitor the multifaceted biomolecular changes that accompany brain development, aging, or disease. Here, we have developed vibrational fiber photometry as a minimally invasive method for label-free monitoring of the biomolecular content of arbitrarily deep regions of the mouse brain in vivo through spontaneous Raman spectroscopy.


Self-supervised denoising of grating-based phase-contrast computed tomography.

Sci Rep

December 2024

Research Group Biomedical Imaging Physics, Department of Physics, TUM School of Natural Sciences, Technical University of Munich, 85748, Garching, Germany.

In the last decade, grating-based phase-contrast computed tomography (gbPC-CT) has received growing interest. It provides additional information about the refractive index decrement in the sample, a signal that offers increased soft-tissue contrast.
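For readers unfamiliar with the self-supervised denoising named in the title, the sketch below shows a generic blind-spot (Noise2Void-style) training loop on a synthetic image, supervised only by the noisy data itself. The network, masking fraction, and data are illustrative assumptions and do not reproduce the cited paper's method or its gbPC-CT data.

# Hedged sketch: blind-spot self-supervised denoising on a synthetic image.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic "noisy slice": a smooth ramp plus Gaussian noise (stand-in data).
H = W = 64
clean = torch.linspace(0, 1, W).repeat(H, 1)
noisy = clean + 0.1 * torch.randn(H, W)

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x = noisy.clone()
    # Blind-spot masking: hide a random 3% of pixels behind shifted neighbours.
    mask = torch.rand(H, W) < 0.03
    shifted = torch.roll(noisy, shifts=(1, 1), dims=(0, 1))
    x[mask] = shifted[mask]

    pred = net(x[None, None])[0, 0]
    loss = F.mse_loss(pred[mask], noisy[mask])  # supervise only masked pixels
    opt.zero_grad()
    loss.backward()
    opt.step()

denoised = net(noisy[None, None])[0, 0].detach()
print("mean residual vs. clean ramp:", float((denoised - clean).abs().mean()))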


A computational deep learning investigation of animacy perception in the human brain.

Commun Biol

December 2024

Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium.

The functional organization of the human object vision pathway distinguishes between animate and inanimate objects. To understand animacy perception, we explore the case of zoomorphic objects that resemble animals. While perceiving these objects as animal-like seems obvious to humans, this "animal bias" reveals a striking discrepancy between the human brain and deep neural networks (DNNs).
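One standard way to quantify such brain-DNN (dis)agreement is representational similarity analysis (RSA), sketched below with random stand-in data; this is a generic illustration and does not reproduce the cited paper's analysis or stimuli.

# Hedged sketch: representational similarity analysis with stand-in data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 20   # e.g. animals, zoomorphic objects, inanimate objects
brain_patterns = rng.normal(size=(n_stimuli, 100))   # fake voxel responses
dnn_features = rng.normal(size=(n_stimuli, 512))     # fake layer activations

# Representational dissimilarity matrices (condensed form): one pairwise
# dissimilarity per stimulus pair.
rdm_brain = pdist(brain_patterns, metric="correlation")
rdm_dnn = pdist(dnn_features, metric="correlation")

# Second-order comparison: rank-correlate the two RDMs.
rho, p = spearmanr(rdm_brain, rdm_dnn)
print(f"brain-DNN RSA (Spearman rho): {rho:.3f} (p = {p:.3f})")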

