We recorded time series of location data from stationary, single-frequency (L1) GPS positioning systems at a variety of geographic locations. The empirical autocorrelation function of these data shows significant temporal correlations. The Gaussian white-noise model, widely used in sensor-fusion algorithms, does not account for the observed autocorrelations and has an artificially large variance. Noise-model analysis using Akaike's Information Criterion favours alternative models, such as an Ornstein-Uhlenbeck or an autoregressive process. We suggest that incorporating a suitable enhanced noise model into applications (e.g., Kalman filters) that rely on GPS position estimates will improve performance. This provides an alternative to explicitly modelling possible sources of correlation (e.g., multipath, shadowing, or other second-order physical phenomena).
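
The model comparison described above can be illustrated with a minimal sketch: fit a Gaussian white-noise model and an AR(1) model (a discrete-time analogue of an Ornstein-Uhlenbeck process) to a one-dimensional position-error series and compare the two by AIC. The synthetic data, the conditional-likelihood AR(1) fit, and the function names below are illustrative assumptions, not the paper's actual analysis pipeline.

```python
# Sketch: AIC comparison of a Gaussian white-noise model vs. an AR(1) model
# for a 1-D series of GPS position errors. Synthetic data stand in for the
# real recordings (assumption).

import numpy as np

def aic_white_noise(x):
    """AIC for i.i.d. Gaussian noise; k = 2 parameters (mean, variance)."""
    n = x.size
    sigma2 = np.var(x)                       # MLE of the variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * 2 - 2 * loglik

def aic_ar1(x):
    """AIC for an AR(1) process via conditional maximum likelihood;
    k = 3 parameters (mean, phi, innovation variance)."""
    n = x.size
    xc = x - x.mean()
    phi = np.dot(xc[1:], xc[:-1]) / np.dot(xc[:-1], xc[:-1])
    resid = xc[1:] - phi * xc[:-1]
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * (n - 1) * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * 3 - 2 * loglik

# Synthetic correlated "position error" series (AR(1) with phi = 0.95),
# mimicking the temporal correlation reported for stationary L1 receivers.
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, e.size):
    x[t] = 0.95 * x[t - 1] + e[t]

print("AIC white noise:", aic_white_noise(x))
print("AIC AR(1):      ", aic_ar1(x))        # lower AIC -> preferred model
```

On synthetic data with strong temporal correlation such as this, the AR(1) fit yields a substantially lower AIC than the white-noise fit, mirroring the qualitative conclusion of the abstract.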


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660693 (PMC)
http://dx.doi.org/10.3390/s20216050 (DOI)

Publication Analysis

Top Keywords

noise model (12), enhanced noise (8), comparison enhanced (4), model performance (4), performance based (4), based analysis (4), analysis civilian (4), civilian gps (4), gps data (4), data recorded (4)

Similar Publications

Including sensor information in medical interventions aims to support surgeons in deciding on subsequent action steps by characterizing tissue intraoperatively. In bladder cancer, an important issue is tumor recurrence caused by failure to remove the entire tumor. Impedance measurements can help classify bladder tissue and give surgeons an indication of how much tissue to remove.


Machine Learning-Based Estimation of Hoarseness Severity Using Acoustic Signals Recorded During High-Speed Videoendoscopy.

J Voice

January 2025

Division of Phoniatrics and Pediatric Audiology at the Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany.

Objectives: This study investigates the use of sustained phonations recorded during high-speed videoendoscopy (HSV) for machine learning-based assessment of hoarseness severity (H). The performance of this approach is compared with that achieved using conventional recordings obtained during voice therapy, in order to evaluate key differences and limitations of HSV-derived acoustic recordings.

Methods: A database of 617 voice recordings, each with a duration of 250 ms, was gathered during HSV examination (HS).


Background: Respiratory motion during radiotherapy (RT) may reduce the therapeutic effect and increase the dose received by organs at risk. This can be addressed by real-time tracking, where respiratory motion prediction is currently required to compensate for system latency in RT systems. Notably, deep learning has been considered for predicting future images in image-guided adaptive RT systems.


Background: Fragile X syndrome (FXS) is a leading known genetic cause of intellectual disability and of autism spectrum disorder (ASD)-associated behaviors. A consistent and debilitating phenotype of FXS is auditory hypersensitivity, which may lead to delayed language and high anxiety. Consistent with findings in human FXS studies, the mouse model of FXS, the Fmr1 knockout (KO) mouse, shows auditory hypersensitivity and temporal processing deficits.


Detecting brain tumours (BT) early improves treatment options and increases patient survival rates. Magnetic resonance imaging (MRI) offers more comprehensive information, such as better contrast and clarity, than alternative scanning modalities. Manually segmenting BTs from the many MRI images gathered in clinical practice for cancer analysis is challenging and time-consuming.

