Temporal perspectives allow us to place ourselves and temporal events on a timeline, making it easier to conceptualize time. This study investigates how we take different temporal perspectives in our temporal gestures. We asked participants (n = 36) to retell temporal scenarios written in the Moving-Ego, Moving-Time, and Time-Reference-Point perspectives in spontaneous and encouraged gesture conditions. Participants took temporal perspectives in largely similar ways regardless of the gesture condition. Perspective comparisons showed that the temporal gestures of our participants resonated better with the Ego- (i.e., Moving-Ego and Moving-Time) versus Time-Reference-Point distinction than with the classical Moving-Ego versus Moving-Time contrast. Specifically, participants mostly produced more Moving-Ego and Time-Reference-Point gestures for the corresponding scenarios and speech; however, the Moving-Time perspective was not adopted more than the others in any condition. Similarly, the Moving-Time gestures did not favor an axis over the others, whereas Moving-Ego gestures were mostly sagittal and Time-Reference-Point gestures were mostly lateral. These findings suggest that we incorporate temporal perspectives into our temporal gestures to a considerable extent; however, the classical Moving-Ego and Moving-Time classification may not hold for temporal gestures.
DOI: http://dx.doi.org/10.1111/cogs.13425
Sensors (Basel)
January 2025
College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
Gesture recognition technology based on millimeter-wave radar can recognize and classify user gestures in non-contact scenarios. To address the complexity of data processing with multi-feature inputs in neural networks and the poor recognition performance with single-feature inputs, this paper proposes a gesture recognition algorithm based on a ResNet-Long Short-Term Memory network with an Attention Mechanism (RLA). In the signal-processing stage of RLA, a range-Doppler map is obtained by extracting the range and velocity features from the original mmWave radar signal.
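The range-Doppler extraction described above follows the standard FMCW radar pipeline: a range FFT along the fast-time samples of each chirp, then a Doppler FFT along slow time across chirps. The sketch below illustrates that generic computation with NumPy; the array shape, windowing choice, and function name are assumptions for illustration and are not taken from the paper's RLA implementation.

```python
import numpy as np

def range_doppler_map(iq_frame: np.ndarray) -> np.ndarray:
    """Compute a range-Doppler map from one frame of FMCW radar IQ data.

    iq_frame: complex array of shape (n_chirps, n_samples),
    i.e. slow time (chirps) x fast time (ADC samples per chirp).
    """
    # Range FFT along fast time, with a Hann window to suppress sidelobes.
    win = np.hanning(iq_frame.shape[1])
    range_fft = np.fft.fft(iq_frame * win, axis=1)
    # Doppler FFT along slow time; fftshift centers zero velocity.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    # Log-magnitude map, as typically fed to a recognition network.
    return 20 * np.log10(np.abs(doppler_fft) + 1e-12)

# Example with synthetic data: 64 chirps of 128 samples each.
rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
rdm = range_doppler_map(frame)
print(rdm.shape)  # one range-Doppler map per frame
```

Rows of the resulting map index velocity (Doppler) bins and columns index range bins; a moving hand appears as energy at its range and radial-velocity coordinates.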
Brain Sci
December 2024
Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, Chandler House 2 Wakefield Street, London WC1N 1PF, UK.
Speech is a highly skilled motor activity that shares a core problem with other motor skills: how to reduce the massive degrees of freedom (DOF) to the extent that the central nervous control and learning of complex motor movements become possible. It is hypothesized in this paper that a key solution to the DOF problem is to eliminate most of the temporal degrees of freedom by synchronizing concurrent movements, and that this is performed in speech through the syllable, a mechanism that synchronizes consonantal, vocalic, and laryngeal gestures. Under this hypothesis, syllable articulation is enabled by three basic mechanisms: target approximation, edge-synchronization, and tactile anchoring.
Neuropsychologia
January 2025
Neuroscience Area, SISSA, Trieste, Italy; Dipartimento di Medicina dei Sistemi, Università di Roma-Tor Vergata, Roma, Italy.
Although gesture observation tasks are believed to invariably activate the action-observation network (AON), we investigated whether engaging different cognitive mechanisms when processing identical stimuli under different explicit instructions modulates AON activations. Accordingly, 24 healthy right-handed individuals observed gestures and processed both the actor's moved hand (hand laterality judgment task, HT) and the meaning of the actor's gesture (meaning task, MT). The main brain-level result was that the HT (vs. MT) differentially activated the left and right precuneus, the left inferior parietal lobe, the left and right superior parietal lobes, the middle frontal gyri bilaterally, and the left precentral gyrus.
Top Cogn Sci
January 2025
Department of Anthropology, Indiana University.
Studies of the evolution of language rely heavily on comparisons to nonhuman primates, particularly the gestural communication of nonhuman apes. Differences between human and ape gestures are largely ones of degree rather than kind. For example, while human gestures are more flexible, ape gestures are not inflexible.
Cogn Neurodyn
December 2025
Shanghai University, Shanghai, China.
Neurodynamic observations indicate that the cerebral cortex evolved by self-organizing into functional networks. These networks, or distributed clusters of regions, display varying degrees of attention based on input. Traditionally, the study of network self-organization has relied predominantly on static data, overlooking the temporal information in dynamic neuromorphic data. This paper proposes a Temporal Self-Organizing (TSO) method for neuromorphic data processing using a spiking neural network.