Exchange rates are affected by disparate types of new information as well as by the couplings between these modalities. Previous work predicted exchange rates mainly from market indicators alone and therefore achieved unsatisfactory results. To address this issue, this study develops a novel multimodal fusion-based long short-term memory (MF-LSTM) model to forecast the USD/CNY exchange rate. Our model consists of two parallel LSTM modules that extract abstract features from each modality of information and a shared representation layer that fuses these features. For the text modality, bidirectional encoder representations from transformers (BERT) is applied to conduct sentiment analysis on social media microblogs. Compared with previous studies, we incorporate not only market indicators but also investor sentiment, treating the two types of data differently to match their distinct characteristics. In addition, we apply multimodal fusion and design a deep coupled model, rather than a shallow and simple one, to capture the couplings between the two modalities. Experimental results obtained over a 15-month period show that the proposed approach outperforms nine baseline algorithms. Our study demonstrates that incorporating multimodal fusion into financial time series forecasting is both practicable and effective.
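The two-branch architecture described in the abstract lends itself to a compact illustration. Below is a minimal sketch of the MF-LSTM idea in PyTorch, assuming one LSTM branch per modality whose final hidden states are concatenated in a shared representation layer; the class name, layer sizes, and feature dimensions are illustrative assumptions rather than the authors' published configuration, and the BERT-derived sentiment scores are assumed to be precomputed per day.

```python
import torch
import torch.nn as nn

class MFLSTM(nn.Module):
    """Two parallel LSTM branches fused in a shared representation layer (sketch)."""

    def __init__(self, market_dim=8, sentiment_dim=4, hidden_dim=32):
        super().__init__()
        # One LSTM module per modality (market indicators, microblog sentiment).
        self.market_lstm = nn.LSTM(market_dim, hidden_dim, batch_first=True)
        self.sentiment_lstm = nn.LSTM(sentiment_dim, hidden_dim, batch_first=True)
        # Shared representation layer: fuse the two abstract feature vectors
        # and regress the next-step exchange rate.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, market_seq, sentiment_seq):
        # The final hidden state of each branch serves as its abstract feature.
        _, (h_market, _) = self.market_lstm(market_seq)
        _, (h_sentiment, _) = self.sentiment_lstm(sentiment_seq)
        fused = torch.cat([h_market[-1], h_sentiment[-1]], dim=-1)
        return self.fusion(fused)

# Hypothetical usage: batch of 16 samples with a 30-day lookback window.
market = torch.randn(16, 30, 8)      # daily market indicators
sentiment = torch.randn(16, 30, 4)   # daily sentiment features, e.g. aggregated BERT scores
pred = MFLSTM()(market, sentiment)   # shape: (16, 1)
```

Concatenation is only one possible fusion choice; the paper's shared representation layer may couple the modalities differently, so this should be read as a structural outline under the stated assumptions rather than a reimplementation.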

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8949836
DOI: http://dx.doi.org/10.1007/s10489-022-03342-5

Publication Analysis

Top Keywords

multimodal fusion (12); exchange rate (8); exchange rates (8); couplings modalities (8); market indicators (8); model (5); improving exchange (4); rate forecasting (4); forecasting deep (4); multimodal (4)

Similar Publications

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Eur J Nucl Med Mol Imaging

January 2025

Department of Nuclear Medicine, West China Hospital, Sichuan University, No.37, Guoxue Alley, Chengdu City, Sichuan Province, 610041, China.

Background: Pathological grade is a critical determinant of clinical outcomes and treatment decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade.

Methods: This study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts.

Background: Acute Stanford Type A aortic dissection (AAD-type A) and acute myocardial infarction (AMI) present with similar symptoms but require distinct treatments. Efficient differentiation is critical due to limited access to radiological equipment in many primary healthcare settings. This study develops a multimodal deep learning model integrating electrocardiogram (ECG) signals and laboratory indicators to enhance diagnostic accuracy for AAD-type A and AMI.

Background: Integrating comprehensive information on hepatocellular carcinoma (HCC) is essential to improve its early detection. We aimed to develop a model with multi-modal features (MMF) using artificial intelligence (AI) approaches to enhance the performance of HCC detection.

Materials And Methods: A total of 1,092 participants were enrolled from 16 centers.

Virtual biopsy for non-invasive identification of follicular lymphoma histologic transformation using radiomics-based imaging biomarker from PET/CT.

BMC Med

January 2025

Department of Nuclear Medicine, West China Hospital, Sichuan University, No.37, Guoxue Alley, Chengdu City, Sichuan, 610041, China.

Background: This study aimed to construct a radiomics-based imaging biomarker for the non-invasive identification of transformed follicular lymphoma (t-FL) using PET/CT images.

Methods: A total of 784 follicular lymphoma (FL), diffuse large B-cell lymphoma, and t-FL patients from 5 independent medical centers were included. The unsupervised EMFusion method was applied to fuse PET and CT images.

Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration when the signal-to-noise ratio (SNR) is relatively high. Real-world noisy scenarios typically exhibit widely varying noise levels.
