The noise produced by the inspiral of millions of white dwarf binaries in the Milky Way may pose a threat to one of the main goals of the space-based LISA mission: the detection of massive black hole binary mergers. We present a novel study of merger-waveform reconstruction in the presence of Galactic confusion noise using dictionary learning. We discuss the limitations of untangling signals from binaries with total masses from 10^{2} M_{⊙} to 10^{4} M_{⊙}. Our method proves extremely successful for binaries with total mass greater than ∼3×10^{3} M_{⊙} up to redshift 3 in conservative scenarios, and up to redshift 7.5 in optimistic scenarios. In addition, we find consistently good waveform reconstruction of merger events whenever the signal-to-noise ratio is approximately 5 or greater.
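As a rough illustration of the general technique named in the abstract (not the authors' actual pipeline; all signal shapes, sizes, and parameters below are invented for the sketch), dictionary learning reconstructs a noisy signal by first learning a set of atoms from training signals and then sparse-coding the noisy observation against those atoms:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Toy chirp-like signals of length 64, stand-ins for merger waveforms.
t = np.linspace(0.0, 1.0, 64)
train = np.array([np.sin(2 * np.pi * (5 + 10 * f) * t * t) for f in rng.random(200)])

# Learn a dictionary of atoms from the clean training set; reconstruction
# uses orthogonal matching pursuit with a small sparsity budget.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.1,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
dico.fit(train)

# Sparse-code a noisy signal against the learned atoms and rebuild it.
clean = np.sin(2 * np.pi * 12 * t * t)
noisy = clean + 0.3 * rng.standard_normal(64)
code = dico.transform(noisy[None, :])
recon = code @ dico.components_

mse_noisy = np.mean((noisy - clean) ** 2)
mse_recon = np.mean((recon[0] - clean) ** 2)
```

Because the sparse code can only combine a few learned atoms, broadband noise that does not resemble any atom is largely suppressed in the reconstruction.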


Source: http://dx.doi.org/10.1103/PhysRevLett.130.091401 (DOI Listing)

Publication Analysis

Top Keywords: dictionary learning (8); presence galactic (8); reconstruction merger (8); binaries total (8); total mass (8); learning novel (4); novel approach (4); approach detecting (4); detecting binary (4); binary black (4)

Similar Publications

In single-cell sequencing analysis, several computational methods have been developed to map the cellular state space, but little has been done to map or create embeddings of the gene space. Here we formulate the gene embedding problem, design tasks with simulated single-cell data to evaluate representations, and establish ten relevant baselines. We then present a graph signal processing approach, called gene signal pattern analysis (GSPA), that learns rich gene representations from single-cell data using a dictionary of diffusion wavelets on the cell-cell graph.
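A minimal, hypothetical sketch of the underlying idea, diffusion wavelets on a cell-cell graph (this is not the GSPA implementation; the graph size, neighbour count, and scales are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "cells": 50 points in 2D; build a symmetric 5-nearest-neighbour graph.
X = rng.standard_normal((50, 2))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
A = np.zeros((50, 50))
for i in range(50):
    A[i, np.argsort(d2[i])[1:6]] = 1.0
A = np.maximum(A, A.T)

# Lazy random-walk diffusion operator P = (I + D^{-1} A) / 2.
P = 0.5 * (np.eye(50) + A / A.sum(1, keepdims=True))

# Diffusion wavelets at dyadic scales: psi_j = P^(2^(j-1)) - P^(2^j),
# capturing graph structure at progressively coarser resolutions.
wavelets = []
Pk = P.copy()
for j in range(3):
    P2k = Pk @ Pk
    wavelets.append(Pk - P2k)
    Pk = P2k

# A "gene" is a signal on the cells; its embedding is the concatenated
# wavelet responses across scales.
gene = rng.standard_normal(50)
embedding = np.concatenate([W @ gene for W in wavelets])
```

Each wavelet band responds to variation of the gene signal at one diffusion scale, so the concatenated responses summarize how the gene's expression pattern is organized on the cell-cell graph.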


Knowledge mining of brain connectivity in massive literature based on transfer learning.

Bioinformatics, November 2024

Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China.

Motivation: Neuroscientists have long endeavored to map brain connectivity, yet the intricate nature of brain networks often leads them to concentrate on specific regions, hindering efforts to unveil a comprehensive connectivity map. Recent advancements in imaging and text mining techniques have enabled the accumulation of a vast body of literature containing valuable insights into brain connectivity, facilitating the extraction of whole-brain connectivity relations from this corpus. However, the diverse representations of brain region names and connectivity relations pose a challenge for conventional machine learning methods and dictionary-based approaches in identifying all instances accurately.


Objective: To develop and evaluate innovative methods for compressing and reconstructing complex audio signals from medical auscultation, while maintaining diagnostic integrity and reducing dimensionality for machine classification.

Methods: Using the ICBHI Respiratory Challenge 2017 Database, we assessed various compression frameworks, including discrete Fourier transform with peak detection, time-frequency transforms, dictionary learning and singular value decomposition. Reconstruction quality was evaluated using mean squared error (MSE).
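As a hedged sketch of one of the compression frameworks named above (a generic truncated-SVD scheme with MSE evaluation, not the study's actual code; the frame sizes and rank are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for framed audio: 100 frames of 256 samples each.
frames = rng.standard_normal((100, 256))

# Truncated SVD: store only the top-k singular components.
k = 20
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
left, right = U[:, :k] * s[:k], Vt[:k]       # the compressed representation
reconstructed = left @ right                  # low-rank reconstruction

# Reconstruction quality via mean squared error, as in the study.
mse = np.mean((frames - reconstructed) ** 2)
compression_ratio = frames.size / (left.size + right.size)
```

The trade-off is the usual one: a larger rank k lowers the MSE but shrinks the compression ratio.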


In practice, collecting auxiliary labeled data with same feature space from multiple domains is difficult. Thus, we focus on the heterogeneous transfer learning to address the problem of insufficient sample sizes in neuroimaging. Viewing subjects, time, and features as dimensions, brain activation and dynamic functional connectivity data can be treated as high-order heterogeneous data with heterogeneity arising from distinct feature space.


VERA-ARAB: unveiling the Arabic tweets credibility by constructing balanced news dataset for veracity analysis.

PeerJ Comput Sci, October 2024

Chair of Cyber Security, Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.

The proliferation of fake news on social media platforms necessitates the development of reliable datasets for effective fake news detection and veracity analysis. In this article, we introduce a veracity dataset of Arabic tweets called "VERA-ARAB", a pioneering large-scale dataset designed to enhance fake news detection in Arabic tweets. VERA-ARAB is a balanced, multi-domain, and multi-dialectal dataset, containing both fake and true news, meticulously verified by fact-checking experts from Misbar.

