We demonstrate compressive-sensing (CS) spectroscopy in a planar-waveguide Fourier-transform spectrometer (FTS) device. The spectrometer is implemented as an array of Mach-Zehnder interferometers (MZIs) integrated on a photonic chip. The signals from a subset of MZIs form an undersampled discrete Fourier interferogram, which we invert using l1-norm minimization to retrieve a sparse input spectrum. To implement this technique, we use a subwavelength-engineered spatial heterodyne FTS on a chip comprising 32 independent MZIs. We demonstrate the retrieval of three sparse input signals by collecting data from restricted subsets of 8 and 14 MZIs and applying common CS reconstruction techniques to these data. We show that this retrieval maintains the full resolution and bandwidth of the original device, despite a sampling factor as low as one-fourth of that of a conventional (non-compressive) design.
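To make the reconstruction step concrete, the sketch below recovers a sparse spectrum from an undersampled cosine (interferogram) sensing matrix by solving the l1-regularized least-squares problem with iterative soft thresholding (ISTA). The spectral grid, MZI path delays, sparsity level, and solver settings are illustrative assumptions, not the paper's actual design or code.

```python
# Minimal sketch: l1-based recovery of a sparse spectrum from an
# undersampled MZI interferogram (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)

N = 32                                   # spectral channels of the full design
M = 8                                    # MZIs actually read out (undersampled subset)
sigma = np.linspace(0.0, 1.0, N)         # normalized wavenumber grid
opd = np.sort(rng.choice(np.arange(N), size=M, replace=False))  # assumed MZI path delays

# Each MZI output samples a cosine transform of the spectrum.
A = np.cos(np.pi * np.outer(opd, sigma))            # M x N partial cosine sensing matrix

# Sparse test spectrum: three narrow lines.
x_true = np.zeros(N)
x_true[[5, 14, 27]] = [1.0, 0.6, 0.8]
y = A @ x_true                                      # undersampled interferogram samples

# ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(5000):
    x = x - step * (A.T @ (A @ x - y))              # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold (l1 prox)

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```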


Source
http://dx.doi.org/10.1364/OL.42.001440


Similar Publications

The advent of millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, coupled with reconfigurable intelligent surfaces (RISs), presents a significant opportunity for advancing wireless communication technologies. This integration improves data rates and broadens coverage, but channel estimation (CE) remains challenging because of the limited signal-processing capability of the RIS. To address this, we propose an adaptive channel estimation framework comprising two algorithms: log-sum normalized least mean squares (Log-Sum NLMS) and hybrid normalized least mean squares-normalized least mean fourth (Hybrid NLMS-NLMF).
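For context, the sketch below shows a plain NLMS adaptive filter estimating an unknown channel from pilot data; it is the baseline that variants such as the paper's Log-Sum NLMS and Hybrid NLMS-NLMF build on, and the channel length, pilot sequence, and step size are illustrative assumptions.

```python
# Minimal sketch: baseline NLMS channel estimation (not the paper's variants).
import numpy as np

rng = np.random.default_rng(1)
L = 8                                    # assumed number of channel taps
h_true = rng.normal(size=L)              # unknown channel to estimate
x = rng.normal(size=2000)                # pilot sequence
d = np.convolve(x, h_true, mode="full")[:x.size] + 0.01 * rng.normal(size=x.size)

mu, eps = 0.5, 1e-6                      # step size and regularizer
h = np.zeros(L)
for n in range(L, x.size):
    u = x[n - L + 1:n + 1][::-1]         # most recent L inputs, newest first
    e = d[n] - h @ u                     # a-priori estimation error
    h += mu * e * u / (u @ u + eps)      # NLMS update

print("tap estimation error:", np.linalg.norm(h - h_true))
```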


General matrix multiplication (GEMM) in machine learning involves massive computation and data movement, which restricts its deployment on resource-constrained devices. Although data reuse can reduce data movement during GEMM processing, current approaches fail to fully exploit its potential. This work introduces a sparse GEMM accelerator with a weight-and-output stationary (WOS) dataflow and a distributed buffer architecture.
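As a software analogy of the weight-and-output stationary (WOS) idea, the loop nest below keeps a weight tile and its output tile resident while activations stream past, and skips zero weights; it is an illustrative sketch only, not the accelerator's dataflow or buffer design, and the tile size is an assumption.

```python
# Minimal sketch: tiled sparse GEMM with stationary weight and output tiles.
import numpy as np

def sparse_gemm_wos(A, W, tile=4):
    """Compute C = A @ W, skipping zero weight entries; W and C tiles stay
    resident while rows of A stream through (illustrative WOS-style loop)."""
    M, K = A.shape
    K2, N = W.shape
    assert K == K2
    C = np.zeros((M, N))
    for n0 in range(0, N, tile):                    # stationary output-column tile
        for k0 in range(0, K, tile):                # stationary weight tile
            W_tile = W[k0:k0 + tile, n0:n0 + tile]
            nz = np.argwhere(W_tile != 0)           # only nonzero weights contribute
            for m in range(M):                      # stream activations past the tile
                for k, n in nz:
                    C[m, n0 + n] += A[m, k0 + k] * W_tile[k, n]
    return C

A = np.random.default_rng(2).normal(size=(6, 8))
W = np.random.default_rng(3).normal(size=(8, 5))
W[np.abs(W) < 0.8] = 0.0                            # induce weight sparsity
print(np.allclose(sparse_gemm_wos(A, W), A @ W))    # matches dense GEMM
```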


Improving ocean reanalyses of observationally sparse regions with transfer learning.

Sci Rep

January 2025

Institute of Oceanography, Center for Earth System Sustainability, Universität Hamburg, Hamburg, Germany.

Oceanic subsurface observations are sparse and lead to large uncertainties in any model-based estimate. We investigate the applicability of transfer-learning-based neural networks for reconstructing North Atlantic temperatures in periods with sparse observations. Our network is trained on a period with abundant observations to learn realistic physical behavior.
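A minimal sketch of this transfer-learning recipe is given below, using an assumed small fully connected network and synthetic stand-in data rather than the paper's architecture or ocean observations: pretrain on the observation-rich period, then freeze the feature layers and fine-tune only the output head on the sparse period.

```python
# Minimal sketch: pretrain on abundant data, then fine-tune a frozen-backbone
# network on sparse data (architecture and data are placeholders).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))               # predicts temperature at a grid point

def fit(model, x, y, params, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

# Stage 1: train everything on the observation-rich period (synthetic stand-in data).
x_rich, y_rich = torch.randn(1024, 16), torch.randn(1024, 1)
fit(net, x_rich, y_rich, net.parameters())

# Stage 2: freeze the feature layers, fine-tune only the head on sparse data.
for p in list(net.parameters())[:-2]:               # all weights/biases except the head
    p.requires_grad_(False)
head_params = [p for p in net.parameters() if p.requires_grad]
x_sparse, y_sparse = torch.randn(64, 16), torch.randn(64, 1)
fit(net, x_sparse, y_sparse, head_params)
```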


The U.S. Clean Water Act is believed to have driven widespread decreases in pollutants from point sources and developed areas, but has not substantially affected nutrient pollution from agriculture.


Optimal sparsity in autoencoder memory models of the hippocampus.

bioRxiv

January 2025

Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY.

Storing complex correlated memories is significantly more efficient when memories are recoded to obtain compressed representations. Previous work has shown that compression can be implemented in a simple neural circuit, which can be described as a sparse autoencoder. The activity of the encoding units in these models recapitulates the activity of hippocampal neurons recorded in multiple experiments.
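A minimal sparse-autoencoder sketch consistent with this description is shown below; the layer sizes, synthetic input patterns, and L1 activity penalty are illustrative assumptions, not the paper's circuit model.

```python
# Minimal sketch: autoencoder whose hidden code is pushed toward sparsity
# with an L1 activity penalty (all sizes and data are placeholders).
import torch
import torch.nn as nn

n_in, n_hidden, sparsity_weight = 100, 400, 1e-3    # assumed expansion and penalty
enc = nn.Linear(n_in, n_hidden)
dec = nn.Linear(n_hidden, n_in)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

patterns = torch.randn(256, n_in)                   # stand-in "memories" to store
for _ in range(500):
    code = torch.relu(enc(patterns))                # sparse hidden representation
    recon = dec(code)
    loss = nn.functional.mse_loss(recon, patterns) + sparsity_weight * code.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean fraction of active hidden units:", (code > 0).float().mean().item())
```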

