The noise produced by the inspiral of millions of white dwarf binaries in the Milky Way may pose a threat to one of the main goals of the space-based LISA mission: the detection of massive black hole binary mergers. We present a novel study of merger-waveform reconstruction in the presence of this Galactic confusion noise using dictionary learning, and we discuss the limitations of untangling signals from binaries with total masses between 10^{2} M_{⊙} and 10^{4} M_{⊙}. The method proves extremely successful for binaries with total mass greater than ∼3×10^{3} M_{⊙}, up to redshift 3 in conservative scenarios and up to redshift 7.5 in optimistic scenarios. In addition, consistently good waveform reconstruction of merger events is found whenever the signal-to-noise ratio is approximately 5 or greater.
DOI: http://dx.doi.org/10.1103/PhysRevLett.130.091401
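As a rough illustration of the dictionary-learning idea behind this kind of waveform reconstruction, the sketch below learns a patch dictionary from clean toy chirp signals and sparse-codes a noisy chirp to recover it. The chirp stand-ins for merger templates, scikit-learn's MiniBatchDictionaryLearning, and the patch length and sparsity settings are all assumptions made for illustration; this is not the authors' pipeline or a model of the LISA confusion noise.

```python
# Hedged sketch: patch-based dictionary learning for denoising a chirp-like
# signal buried in Gaussian noise (illustrative only; not the paper's method).
import numpy as np
from scipy.signal import chirp
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
fs, patch_len, step = 1024, 64, 8
t = np.arange(0, 4, 1 / fs)

def patches(x):
    # Overlapping 1-D patches as rows of a matrix.
    idx = np.arange(0, len(x) - patch_len + 1, step)
    return np.stack([x[i:i + patch_len] for i in idx]), idx

# "Clean" training waveforms: toy chirps standing in for merger templates.
train = np.concatenate([chirp(t, f0=f0, f1=8 * f0, t1=4) for f0 in (5, 10, 20)])
X_train, _ = patches(train)
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0).fit(X_train)

# Noisy "observation": one chirp plus stationary Gaussian noise.
clean = chirp(t, f0=10, f1=80, t1=4)
noisy = clean + 0.5 * rng.standard_normal(len(t))

# Sparse-code the noisy patches and overlap-add the reconstructions.
X_noisy, idx = patches(noisy)
recon_patches = dico.transform(X_noisy) @ dico.components_
recon, weight = np.zeros(len(t)), np.zeros(len(t))
for p, i in zip(recon_patches, idx):
    recon[i:i + patch_len] += p
    weight[i:i + patch_len] += 1
recon /= np.maximum(weight, 1)

print("noisy MSE:        ", np.mean((noisy - clean) ** 2))
print("reconstructed MSE:", np.mean((recon - clean) ** 2))
```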
Nat Comput Sci
December 2024
Computational Biology and Bioinformatics Program, Yale University, New Haven, CT, USA.
In single-cell sequencing analysis, several computational methods have been developed to map the cellular state space, but little has been done to map or create embeddings of the gene space. Here we formulate the gene embedding problem, design tasks with simulated single-cell data to evaluate representations, and establish ten relevant baselines. We then present a graph signal processing approach, called gene signal pattern analysis (GSPA), that learns rich gene representations from single-cell data using a dictionary of diffusion wavelets on the cell-cell graph.
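To make the diffusion-wavelet construction more concrete, here is a rough sketch of the general recipe: build a kNN cell-cell graph, form dyadic diffusion wavelets from powers of the diffusion operator, and apply them to each gene's expression signal to obtain a gene embedding. The toy count matrix, the scale choices, and the naive concatenation of wavelet responses are assumptions for illustration; this is not the GSPA implementation.

```python
# Hedged sketch of graph diffusion wavelets applied to gene signals on a
# cell-cell graph (illustrative only; not the GSPA code).
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(500, 60)).astype(float)    # toy cells x genes matrix

# 1. kNN cell-cell graph and a row-stochastic diffusion operator P.
A = kneighbors_graph(X, n_neighbors=15, mode="connectivity").toarray()
A = np.maximum(A, A.T)                                 # symmetrize
P = A / A.sum(axis=1, keepdims=True)

# 2. Dyadic diffusion wavelets: Psi_1 = I - P, then P^(s/2) - P^s for s = 2,4,8,16.
powers, Pk = {}, np.eye(len(X))
for s in range(17):                                    # powers[s] = P^s
    powers[s] = Pk
    Pk = Pk @ P
wavelets = [np.eye(len(X)) - powers[1]]
wavelets += [powers[s // 2] - powers[s] for s in (2, 4, 8, 16)]

# 3. Apply each wavelet to every gene signal (a column of X) and concatenate
#    the responses into one embedding vector per gene.
responses = [W @ X for W in wavelets]                  # each is cells x genes
gene_embeddings = np.concatenate([R.T for R in responses], axis=1)
print(gene_embeddings.shape)                           # (60 genes, 500 * 5 dims)
```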
Bioinformatics
November 2024
Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China.
Motivation: Neuroscientists have long endeavored to map brain connectivity, yet the intricate nature of brain networks often leads them to concentrate on specific regions, hindering efforts to unveil a comprehensive connectivity map. Recent advancements in imaging and text mining techniques have enabled the accumulation of a vast body of literature containing valuable insights into brain connectivity, facilitating the extraction of whole-brain connectivity relations from this corpus. However, the diverse representations of brain region names and connectivity relations pose a challenge for conventional machine learning methods and dictionary-based approaches in identifying all instances accurately.
Digit Health
December 2024
Department of Computing and Informatics, Bournemouth University, Bournemouth, UK.
Objective: To develop and evaluate innovative methods for compressing and reconstructing complex audio signals from medical auscultation, while maintaining diagnostic integrity and reducing dimensionality for machine classification.
Methods: Using the ICBHI Respiratory Challenge 2017 Database, we assessed various compression frameworks, including discrete Fourier transform with peak detection, time-frequency transforms, dictionary learning and singular value decomposition. Reconstruction quality was evaluated using mean squared error (MSE).
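As a hedged example of the first framework listed above, the sketch below keeps only the largest-magnitude DFT coefficients of a synthetic signal and scores the reconstruction with MSE. The tonal-plus-noise stand-in for a lung sound, the numbers of retained peaks, and the compression-ratio bookkeeping are illustrative assumptions, not the evaluation protocol applied to the ICBHI recordings.

```python
# Hedged sketch: DFT compression by magnitude peak selection, scored with MSE
# (illustrative only; the signal and parameters are made up).
import numpy as np

rng = np.random.default_rng(0)
fs = 4000
t = np.arange(0, 2.0, 1 / fs)
# Toy stand-in for an auscultation recording: two tones plus broadband noise.
x = (np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
     + 0.1 * rng.standard_normal(len(t)))

def compress_dft(x, n_keep):
    """Keep the n_keep largest-magnitude rFFT coefficients, zero the rest."""
    X = np.fft.rfft(x)
    keep = np.argsort(np.abs(X))[-n_keep:]
    X_sparse = np.zeros_like(X)
    X_sparse[keep] = X[keep]
    return X_sparse

for n_keep in (16, 64, 256):
    x_rec = np.fft.irfft(compress_dft(x, n_keep), n=len(x))
    mse = np.mean((x - x_rec) ** 2)
    ratio = len(x) / (2 * n_keep)          # two real numbers stored per peak
    print(f"{n_keep:4d} peaks  ~{ratio:6.1f}x compression  MSE {mse:.5f}")
```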
Neural Netw
December 2024
Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA. Electronic address:
In practice, collecting auxiliary labeled data with the same feature space from multiple domains is difficult. We therefore focus on heterogeneous transfer learning to address the problem of insufficient sample sizes in neuroimaging. Viewing subjects, time, and features as dimensions, brain activation and dynamic functional connectivity data can be treated as high-order heterogeneous data, with the heterogeneity arising from their distinct feature spaces.
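A minimal sketch of this high-order view, assuming toy random ROI time series: sliding-window correlations are stacked into a subjects × windows × edges tensor. The window length, step, and ROI count are arbitrary illustrative choices, not the authors' preprocessing.

```python
# Hedged sketch: dynamic functional connectivity as a 3-way tensor
# (subjects x windows x edges), built from toy random time series.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_time, n_rois = 8, 200, 10
ts = rng.standard_normal((n_subjects, n_time, n_rois))   # ROI time series
win, step = 50, 25
iu = np.triu_indices(n_rois, k=1)                         # upper-triangle edges

def dynamic_fc(x):
    """Per-window correlation matrices flattened to edge vectors."""
    starts = range(0, x.shape[0] - win + 1, step)
    return np.stack([np.corrcoef(x[s:s + win].T)[iu] for s in starts])

tensor = np.stack([dynamic_fc(ts[s]) for s in range(n_subjects)])
print(tensor.shape)    # (subjects, windows, edges) = (8, 7, 45)
```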
PeerJ Comput Sci
October 2024
Chair of Cyber Security, Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.
The proliferation of fake news on social media platforms necessitates reliable datasets for effective fake news detection and veracity analysis. In this article, we introduce "VERA-ARAB", a pioneering large-scale veracity dataset of Arabic tweets designed to enhance fake news detection. VERA-ARAB is a balanced, multi-domain, and multi-dialectal dataset containing both fake and true news, meticulously verified by fact-checking experts from Misbar.