Learning task-agnostic and interpretable subsequence-based representation of time series and its applications in fMRI analysis.

Neural Netw

Department of Computational Brain Imaging, Advanced Telecommunication Research Institute International, Kyoto, Japan; Department of Biomedical Data Science, School of Medicine, Fujita Health University, Aichi, Japan; International Center for Brain Science, Fujita Health University, Aichi, Japan.

Published: June 2023

AI Article Synopsis

  • Sequential learning models such as deep recurrent neural networks excel at building task-specific representations of time series, but those representations generalize poorly across tasks and can be too abstract to interpret easily.
  • We introduce a unified local predictive model that uses multi-task learning to produce task-agnostic, interpretable representations applicable across temporal prediction, smoothing, and classification tasks (a minimal sketch of this setup follows this list).
  • Proof-of-concept results show that these task-agnostic representations outperform task-specific and conventional subsequence-based baselines on temporal tasks and can uncover the periodicity of the modelled time series, with promising applications to fMRI analysis for a better understanding of brain activity.
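
A minimal sketch of the multi-task setup described above, under assumptions the synopsis does not spell out: a shared encoder maps fixed-length subsequences (local windows) of a time series to a single representation, and separate heads reuse that representation for next-value prediction, smoothing (reconstruction), and classification. The window length, layer sizes, and head definitions below are illustrative placeholders rather than the authors' architecture.

```python
# Hypothetical sketch: shared subsequence encoder + three task heads (PyTorch).
import torch
import torch.nn as nn

class SubsequenceEncoder(nn.Module):
    """Shared encoder: subsequence of length `window` -> `dim`-d representation."""
    def __init__(self, window: int = 16, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(),
            nn.Linear(64, dim),
        )

    def forward(self, x):           # x: (batch, window)
        return self.net(x)          # (batch, dim)

class UnifiedLocalModel(nn.Module):
    """One task-agnostic representation reused by task-specific heads."""
    def __init__(self, window: int = 16, dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.encoder = SubsequenceEncoder(window, dim)
        self.predict_head = nn.Linear(dim, 1)        # next-value prediction
        self.smooth_head = nn.Linear(dim, window)    # denoised reconstruction
        self.class_head = nn.Linear(dim, n_classes)  # window/sequence label

    def forward(self, x):
        z = self.encoder(x)
        return self.predict_head(z), self.smooth_head(z), self.class_head(z)

if __name__ == "__main__":
    # Toy training step: the three task losses share gradients through the
    # encoder, which is what pushes the representation toward task-agnostic reuse.
    model = UnifiedLocalModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 16)                    # 8 subsequences of length 16
    y_next = torch.randn(8, 1)                # next time point (prediction target)
    y_clean = torch.randn(8, 16)              # smoothed window (smoothing target)
    y_label = torch.randint(0, 2, (8,))       # class labels (classification target)
    pred, smooth, logits = model(x)
    loss = (nn.functional.mse_loss(pred, y_next)
            + nn.functional.mse_loss(smooth, y_clean)
            + nn.functional.cross_entropy(logits, y_label))
    opt.zero_grad()
    loss.backward()
    opt.step()
```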

Article Abstract

The recent success of sequential learning models, such as deep recurrent neural networks, is largely due to their superior representation-learning capability: they learn informative representations of a targeted time series. This learning is generally goal-directed, so the resulting representations are task-specific, yielding excellent performance on a single downstream task but hindering between-task generalisation. Meanwhile, as sequential learning models grow more intricate, the learned representations become increasingly abstract and difficult for humans to comprehend. Hence, we propose a unified local predictive model based on the multi-task learning paradigm to learn a task-agnostic and interpretable subsequence-based time series representation, allowing versatile use of the learned representation in temporal prediction, smoothing, and classification tasks. The targeted interpretable representation could convey the spectral information of the modelled time series at a level accessible to human comprehension. Through a proof-of-concept evaluation study, we demonstrate the empirical superiority of the learned task-agnostic and interpretable representation over task-specific and conventional subsequence-based representations, such as symbolic and recurrent learning-based representations, in temporal prediction, smoothing, and classification tasks. The learned task-agnostic representations can also reveal the ground-truth periodicity of the modelled time series. We further propose two applications of the unified local predictive model in functional magnetic resonance imaging (fMRI) analysis: revealing the spectral characterisation of cortical areas at rest, and reconstructing smoother temporal dynamics of cortical activations in both resting-state and task-evoked fMRI data, which gives rise to robust decoding.
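
The abstract states that the learned interpretable representation conveys spectral information and can recover the ground-truth periodicity of the modelled series. The sketch below is only a hypothetical illustration of that general idea, not the paper's method: it averages the FFT magnitude spectra of sliding-window subsequences and reads off the dominant period. The sampling rate, window length, and synthetic 2.5 Hz signal are assumptions made for the example.

```python
# Hypothetical illustration: recovering the dominant period of a time series
# from the averaged spectrum of its sliding-window subsequences.
import numpy as np

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
series = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.random.randn(t.size)  # 2.5 Hz + noise

window = 200                                 # subsequence length (2 s, assumed)
freqs = np.fft.rfftfreq(window, d=1 / fs)
spectra = []
for start in range(0, series.size - window + 1, window):
    seg = series[start:start + window]
    spectra.append(np.abs(np.fft.rfft(seg - seg.mean())))  # drop the DC component

mean_spectrum = np.mean(spectra, axis=0)
dominant = freqs[np.argmax(mean_spectrum)]
print(f"Dominant frequency ~ {dominant:.2f} Hz "
      f"(period ~ {1 / dominant:.2f} s)")    # expected ~2.5 Hz for the toy signal
```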

Source
http://dx.doi.org/10.1016/j.neunet.2023.03.038

Publication Analysis

Top Keywords

time series: 20
task-agnostic interpretable: 12
interpretable subsequence-based: 8
representation: 8
subsequence-based representation: 8
fmri analysis: 8
sequential learning: 8
learning models: 8
unified local: 8
local predictive: 8
