Publications by authors named "Nai Ding"

Grouping sensory events into chunks is an efficient strategy to integrate information across long sequences such as speech, music, and complex movements. Although chunks can be constructed based on diverse cues (e.g.

Working memory (WM) is constructive in nature. Instead of passively retaining information, WM reorganizes complex sequences into hierarchically embedded chunks to overcome capacity limits and facilitate flexible behaviour. Here, to investigate the neural mechanisms underlying hierarchical reorganization in WM, we performed two electroencephalography experiments and one magnetoencephalography experiment in which humans retain in WM a temporal sequence of items (syllables) that are organized into chunks (multisyllabic words).

In speech perception, low-frequency cortical activity tracks hierarchical linguistic units (e.g., syllables, phrases, and sentences) on top of acoustic features (e.

As the basis of musical emotions, dynamic tension experience is felt by listeners as music unfolds over time. The effects of musical harmonic and melodic structures on tension have been widely investigated; however, the potential roles of metrical structures in tension perception remain largely unexplored. This experiment examined how different metrical structures affect tension experience and explored the underlying neural activities.

For patients with disorders of consciousness (DoC), accurate assessment of residual consciousness levels and cognitive abilities is critical for developing appropriate rehabilitation interventions. In this study, we investigated the potential of electrooculography (EOG) in assessing language processing abilities and consciousness levels. Patients' EOG data and related electrophysiological data were analysed before and after explicit language learning.

Humans can quickly adapt to recognize acoustically degraded speech, and here we hypothesize that this quick adaptation is enabled by internal linguistic feedback: listeners use partially recognized sentences to adapt the mapping between acoustic features and phonetic labels. We test this hypothesis by quantifying how quickly humans adapt to degraded speech and by analyzing whether the adaptation process can be simulated by adapting an automatic speech recognition (ASR) system based on its own recognition results. We consider three types of acoustic degradation, i.

Speech recognition crucially relies on slow temporal modulations (<16 Hz) in speech. Recent studies, however, have demonstrated that long-delay echoes, which are common during online conferencing, can eliminate crucial temporal modulations in speech without affecting speech intelligibility. Here, we investigated the underlying neural mechanisms.

Discovering knowledge and effectively predicting target events are two main goals of medical text mining. However, few models can achieve them simultaneously. In this study, we investigated the possibility of discovering knowledge and predicting diagnosis at once via raw medical text.

The computational principles underlying attention allocation in complex goal-directed tasks remain elusive. Goal-directed reading, that is, reading a passage to answer a question in mind, is a common real-world task that strongly engages attention. Here, we investigate what computational models can explain attention distribution in this complex task.

When listening to speech, the low-frequency cortical response below 10 Hz can track the speech envelope. Previous studies have demonstrated that the phase lag between the speech envelope and the cortical response can reflect the mechanism by which the envelope-tracking response is generated. Here, we analyze whether the mechanism generating the envelope-tracking response is modulated by the level of consciousness, by studying how the stimulus-response phase lag is modulated by disorders of consciousness (DoC).

Speech comprehension is a complex process involving multiple stages, such as decoding of phonetic units, recognizing words, and understanding sentences and passages. In this study, we identify cortical networks beyond basic phonetic processing using a novel passage learning paradigm. Participants learn to comprehend a story composed of syllables of their native language, but containing unfamiliar vocabulary and syntax.

When listening to connected speech, the human brain can extract multiple levels of linguistic units, such as syllables, words, and sentences. It has been hypothesized that the time scale of cortical activity encoding each linguistic unit is commensurate with the time scale of that linguistic unit in speech. Evidence for the hypothesis originally comes from studies using the frequency-tagging paradigm that presents each linguistic unit at a constant rate, and more recently extends to studies on natural speech.
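The frequency-tagging logic sketched above can be illustrated with a short simulation. This is a hedged sketch, not the studies' analysis: the 4 Hz syllable rate, 1 Hz sentence rate, 100 Hz sampling rate, and noise level are all assumed for illustration. When units are presented at constant rates, neural activity tracking each unit shows up as a spectral peak at the corresponding rate.

```python
import numpy as np

# Hypothetical simulation of the frequency-tagging paradigm: linguistic
# units presented at constant rates (assumed: syllables at 4 Hz,
# sentences at 1 Hz) produce spectral peaks at those rates in a
# simulated cortical response.
fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated recording
rng = np.random.default_rng(0)
response = (np.sin(2 * np.pi * 4 * t)          # syllable-rate component
            + 0.5 * np.sin(2 * np.pi * 1 * t)  # sentence-rate component
            + rng.normal(scale=0.5, size=t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(response)) ** 2 / t.size
peak_1hz = power[np.argmin(np.abs(freqs - 1.0))]
peak_4hz = power[np.argmin(np.abs(freqs - 4.0))]
noise_floor = np.median(power)
print(peak_1hz > 10 * noise_floor, peak_4hz > 10 * noise_floor)
```

Because the recording length (60 s) is an integer multiple of both tagged periods, the 1 Hz and 4 Hz components fall exactly on frequency bins, which is what makes the paradigm's spectral peaks so sharp.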

Temporal modulations provide critical cues for speech recognition. When the temporal modulations are distorted by, e.g.

The syllable is a perceptually salient unit in speech. Since both the syllable and its acoustic correlate, i.e.

Heartbeat-evoked responses (HERs) can interact with external stimuli and play a crucial role in shaping perception, self-related processes, and emotional processes. On the one hand, the external stimulus could modulate HERs. On the other hand, the HERs could affect cognitive processing of the external stimulus.

Working memory load can modulate speech perception. However, since speech perception and working memory are both complex functions, it remains elusive how each component of the working memory system interacts with each speech processing stage. To investigate this issue, we concurrently measure how the working memory load modulates neural activity tracking three levels of linguistic units, i.

Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address the issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded.

It is debated whether cortical responses matching the time scales of phrases and sentences mediate the mental construction of the syntactic chunks or are simply caused by the semantic properties of words. Here, we investigate to what extent delta-band neural responses to speech can be explained by semantic relatedness between words. To dissociate the contribution of semantic relatedness from sentential structures, participants listened to sentence sequences and paired-word sequences in which semantically related words repeated at 1 Hz.

To efficiently process complex visual scenes, the visual system often summarizes statistical information across individual items and represents them as an ensemble. However, due to the lack of techniques to disentangle the representation of the ensemble from that of the individual items constituting the ensemble, whether there exists a specialized neural mechanism for ensemble processing and how ensemble perception is computed in the brain remain unknown. To address these issues, we used a frequency-tagging EEG approach to track brain responses to periodically updated ensemble sizes.

When listening to speech, cortical activity can track mentally constructed linguistic units such as words, phrases, and sentences. Recent studies have also shown that the neural responses to mentally constructed linguistic units can predict the outcome of patients with disorders of consciousness (DoC). In healthy individuals, cortical tracking of linguistic units can be driven by both long-term linguistic knowledge and online learning of the transitional probability between syllables.

Natural scenes contain multi-modal information, which is integrated to form a coherent perception. Previous studies have demonstrated that cross-modal information can modulate neural encoding of low-level sensory features. These studies, however, mostly focus on the processing of single sensory events or rhythmic sensory sequences.

Architects should consider the aesthetic experience of potential users when designing buildings. Previous studies have shown that subjective aesthetic judgment of architecture is influenced by structural features, and that Western observers prefer structures with curvilinear contours, high ceilings, and open space. Building styles, however, vary across cultures, and it remains unclear whether these preferences for contours, ceiling height, and openness exist across cultures.

The amplitude of low-frequency fluctuation (ALFF) describes the regional intensity of the spontaneous blood-oxygen-level-dependent signal in resting-state functional magnetic resonance imaging (fMRI). How fMRI-ALFF relates to the amplitude of electrophysiological signals remains unclear. Here, we aimed to investigate the neural correlates of fMRI-ALFF by comparing the spatial differences in amplitude between the eyes-closed (EC) and eyes-open (EO) states in fMRI and magnetoencephalography (MEG).
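As a rough illustration of the ALFF measure described above, the following sketch computes the mean spectral amplitude of a simulated BOLD time series within the conventional 0.01-0.08 Hz band. The band limits, the TR of 2 s, and the signal parameters are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def alff(signal, fs, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuation: mean spectral amplitude
    within an assumed 0.01-0.08 Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(signal)) / signal.size
    band = (freqs >= low) & (freqs <= high)
    return amps[band].mean()

fs = 0.5                          # one volume every 2 s (TR = 2 s, assumed)
t = np.arange(0, 600, 1 / fs)     # 10 min of simulated resting-state data
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.1, size=t.size)
slow = np.sin(2 * np.pi * 0.05 * t) + noise  # strong 0.05 Hz fluctuation
print(alff(slow, fs) > alff(noise, fs))      # slow fluctuation raises ALFF
```

A voxel with stronger spontaneous low-frequency fluctuation yields a larger band-limited amplitude, which is the quantity the EC-versus-EO comparison operates on.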
