We propose a feature-level and score-level fusion approach that combines acoustic and estimated articulatory information for both text-independent and text-dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with conventional acoustic features. For text-independent speaker verification, we find that concatenating articulatory features obtained from measured speech production data with conventional Mel-frequency cepstral coefficients (MFCCs) improves performance dramatically. However, since directly measuring articulatory data is not feasible in many real-world applications, we also experiment with estimated articulatory features obtained through acoustic-to-articulatory inversion. We explore both feature-level and score-level fusion methods and find that overall system performance is significantly enhanced even with estimated articulatory features. Such a performance boost could be due to the inter-speaker variation information embedded in the estimated articulatory features. Since the dynamics of articulation contain important information, we also include inverted articulatory trajectories in text-dependent speaker verification. We demonstrate that the articulatory constraints introduced by inverted articulatory features help to reject wrong-password trials and improve performance after score-level fusion. We evaluate the proposed methods on the X-ray Microbeam database and the RSR2015 database, respectively, for the two tasks. Experimental results show more than 15% relative equal error rate reduction for both speaker verification tasks.
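As a rough illustration of the two fusion strategies mentioned in the abstract, the sketch below concatenates time-aligned MFCC and estimated articulatory feature vectors frame by frame (feature-level fusion) and combines per-subsystem verification scores with a weighted sum (score-level fusion). The feature dimensions, the fusion weight alpha, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of feature-level and score-level fusion (illustrative only).
# Assumption: per-frame MFCC and estimated articulatory streams are already
# time-aligned; alpha is a fusion weight tuned on a development set.
import numpy as np

def feature_level_fusion(mfcc: np.ndarray, artic: np.ndarray) -> np.ndarray:
    """Concatenate acoustic and articulatory features frame by frame.

    mfcc:  (n_frames, n_mfcc)   e.g. MFCCs plus deltas
    artic: (n_frames, n_artic)  e.g. estimated articulatory trajectories
    returns: (n_frames, n_mfcc + n_artic)
    """
    assert mfcc.shape[0] == artic.shape[0], "streams must be time-aligned"
    return np.hstack([mfcc, artic])

def score_level_fusion(score_acoustic: float, score_artic: float,
                       alpha: float = 0.7) -> float:
    """Weighted sum of the acoustic and articulatory subsystem scores."""
    return alpha * score_acoustic + (1.0 - alpha) * score_artic

if __name__ == "__main__":
    # Random stand-in data: 200 frames of 39-dim MFCCs and 14-dim articulatory features.
    mfcc = np.random.randn(200, 39)
    artic = np.random.randn(200, 14)
    fused = feature_level_fusion(mfcc, artic)
    print(fused.shape)                     # (200, 53)
    print(score_level_fusion(1.2, 0.4))    # fused verification score
```

In practice the fused frame-level features would feed a speaker-verification back end, and the score-level weight would be calibrated on held-out trials.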


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5423730
DOI: http://dx.doi.org/10.1016/j.csl.2015.05.003

Publication Analysis

Top Keywords

speaker verification: 24
articulatory features: 20
estimated articulatory: 16
inverted articulatory: 12
articulatory: 11
text independent: 8
text dependent: 8
dependent speaker: 8
score level: 8
level fusion: 8
