The brain is a complex system with multiple scales and hierarchies, making it challenging to identify abnormalities in individuals with mental disorders. The dynamic segregation and integration of activity across brain regions enable flexible switching between local and global information-processing modes. Modeling these cross-scale dynamics within and between brain regions can uncover hidden correlates of brain structure and function in mental disorders. We therefore propose the multimodal cross-scale context clusters (MCCocs) model. First, the complementary information in the multimodal image voxels of the brain is integrated and mapped to the original target space to establish a novel voxel-level brain representation. Within each region of interest (ROI), the Voxel Reducer uses a convolution operator to extract local associations among neighboring features and achieves quantitative dimensionality reduction. Across multiple ROIs, the ROI Context Cluster Block performs unsupervised clustering of whole-brain features, capturing nonlinear relationships between ROIs through bidirectional feature aggregation to simulate the effective integration of information across regions. By alternately executing the Voxel Reducer and ROI Context Cluster Block modules multiple times, the model simulates dynamic scale switching within and between ROIs. Experimental results show that MCCocs can identify potentially discriminative biomarkers and achieves state-of-the-art performance on multiple mental disorder classification tasks. The code is available at https://github.com/yangshuqigit/MCCocs.
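The two alternating modules can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the function names, the strided local averaging standing in for the convolution operator, and the cosine-similarity soft assignment standing in for the unsupervised clustering are all hypothetical choices made here for clarity.

```python
import numpy as np

def voxel_reducer(x, kernel=3, stride=2):
    """Hypothetical sketch of the Voxel Reducer: a 1-D, convolution-like
    sliding window that aggregates neighboring voxel features within an
    ROI, reducing the number of features (quantitative dimensionality
    reduction). x has shape (n_voxels, feature_dim)."""
    n = x.shape[0]
    out = [x[s:s + kernel].mean(axis=0)
           for s in range(0, n - kernel + 1, stride)]
    return np.stack(out)

def roi_context_cluster(rois, n_clusters=4, seed=0):
    """Hypothetical sketch of one ROI Context Cluster step: softly assign
    ROI features to cluster centers by cosine similarity, aggregate each
    cluster into a context vector, then dispatch the context back to
    every ROI (bidirectional feature aggregation). rois has shape
    (n_rois, feature_dim)."""
    rng = np.random.default_rng(seed)
    centers = rois[rng.choice(len(rois), n_clusters, replace=False)]

    def normalize(a):
        return a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)

    sim = normalize(rois) @ normalize(centers).T          # (R, K) similarity
    assign = np.exp(sim) / np.exp(sim).sum(1, keepdims=True)  # soft assignment
    context = assign.T @ rois                             # aggregate: (K, D)
    dispatched = assign @ context                         # dispatch back: (R, D)
    return rois + dispatched                              # residual update
```

Alternating these two steps mirrors the paper's within-ROI / between-ROI scale switching: the reducer shrinks the local (voxel) scale, while the cluster step exchanges information at the global (whole-brain) scale.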
DOI: http://dx.doi.org/10.1016/j.neunet.2025.107209