Self-integration, critical to identity, is the process of connecting experiences to the self and often occurs as individuals narrate events. Elaboration (Fivush & Nelson, 2006; King & Raspin, 2004; Smyth & Pennebaker, 2008) and listener responsiveness (Pasupathi & Rich, 2005) correlate with better self-integration, but these variables are seldom disentangled. In this set of studies, we examine how individuals construct connections between the self and experience for negative events. In Study 1, 90 friendship pairs discussed a negative event. Stability self-integration, change self-integration, elaboration, and listener responsiveness were assessed independently of the narrative. Elaboration and listener responsiveness contributed independently and positively to change self-integration but were unrelated to stability self-integration. Study 2 manipulated listener responsiveness and added preconversation measures of self-integration. Study 1 results were replicated, except that elaboration failed to achieve significance, and a significant interaction between initial change self-integration and listener responsiveness was found. Implications are discussed.
DOI: http://dx.doi.org/10.1111/j.1467-6494.2011.00685.x
Do machines and humans process language in similar ways? Recent research has hinted at the affirmative, showing that human neural activity can be effectively predicted using the internal representations of language models (LMs). Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories.
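Predicting brain responses from LM representations is typically done with a linear "encoding model." Below is a minimal sketch of that analysis; the ridge penalty, data shapes, and feature extraction are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of an LM-to-brain encoding model (assumed setup, not the
# authors' exact pipeline): ridge-regress LM hidden states onto MEG sensor
# responses and score prediction accuracy on held-out data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per word/time point in the narrative.
n_samples, n_lm_dims, n_meg_sensors = 2000, 768, 269
lm_features = rng.standard_normal((n_samples, n_lm_dims))  # LM hidden states
meg = rng.standard_normal((n_samples, n_meg_sensors))      # MEG responses

X_train, X_test, y_train, y_test = train_test_split(
    lm_features, meg, test_size=0.2, shuffle=False)  # preserve temporal order

model = Ridge(alpha=100.0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-sensor Pearson correlation between predicted and measured responses;
# divergences between LMs and brains show up where this score stays low.
r = np.array([np.corrcoef(y_pred[:, i], y_test[:, i])[0, 1]
              for i in range(n_meg_sensors)])
print(f"mean held-out correlation across sensors: {r.mean():.3f}")
```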
J Cogn Neurosci
January 2025
National Central University, Taoyuan City, Taiwan.
Pitch variation of the fundamental frequency (F0) is critical to speech understanding, especially in noisy environments. Degrading the F0 contour reduces behaviorally measured speech intelligibility, posing greater challenges for tonal languages like Mandarin Chinese, where the F0 pattern determines semantic meaning. However, how the brain tracks Mandarin speech with degraded F0 information in noisy environments remains unclear.
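Neural tracking in this literature is usually quantified with a temporal response function (TRF): a lagged linear map from a stimulus feature, here the F0 contour, to the neural signal. The sketch below assumes synthetic signals, a 0-400 ms lag range, and a fixed ridge penalty; it is not the study's analysis code.

```python
# Minimal sketch of neural tracking of an F0 contour via a temporal response
# function (TRF). Shapes, lag range, and the ridge penalty are assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs = 100                         # sampling rate (Hz) of both signals
f0 = rng.standard_normal(6000)   # F0 contour of the speech (z-scored)
eeg = rng.standard_normal(6000)  # one neural channel, same rate

lags = np.arange(0, 40)          # 0-400 ms stimulus-to-response lags

def lagged_design(stim, lags):
    """Stack time-shifted copies of the stimulus into a design matrix."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:len(stim) - lag]
    return X

X = lagged_design(f0, lags)
half = len(f0) // 2
ridge = 1e2 * np.eye(X.shape[1])

# Fit TRF weights on the first half, predict the second half.
w = np.linalg.solve(X[:half].T @ X[:half] + ridge, X[:half].T @ eeg[:half])
pred = X[half:] @ w

# Tracking score: correlation between predicted and recorded responses.
# Degrading the F0 contour (e.g., flattening it) should lower this score.
r = np.corrcoef(pred, eeg[half:])[0, 1]
print(f"F0 tracking correlation: {r:.3f}")
```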
Int Marit Health
January 2025
Institute for Occupational and Maritime Medicine (ZfAM), University Medical Center Hamburg-Eppendorf (UKE), Seewartenstraße 10, 20459 Hamburg, Hamburg, Germany.
Background: Seafarers are exposed to a variety of job-specific physical and psychosocial stressors. Health promotion on board is of great importance for the salutogenesis of this occupational group. Due to the difficult accessibility of seafarers, electronically supported health management can be highly valuable.
Social vocalizations contain cues that reflect the motivational state of a vocalizing animal. Once perceived, these cues may in turn affect the internal state and behavioral responses of listening animals. Using the CBA/CaJ mouse model of acoustic communication, this study examined acoustic cues that signal intensity in male-female interactions, then compared behavioral responses to intense mating vocal sequences with those from another intense behavioral context, restraint.
Imaging Neurosci (Camb)
April 2024
Department of Electrical Engineering, Columbia University, New York, NY, United States.
Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, these devices are unable to tune into a target speaker without first knowing to which speaker a user aims to attend. Brain-controlled hearing aids have been proposed using auditory attention decoding (AAD) methods, but current methods use the same model to compare the speech stimulus and neural response regardless of the dynamic overlap between talkers, which is known to influence neural encoding.
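The core of AAD is deciding, from neural data alone, which of two talkers a listener attends. A common backward-model approach reconstructs the attended speech envelope from the neural response and correlates it with each talker's envelope; the sketch below assumes that setup with synthetic data, not the paper's model.

```python
# Minimal sketch of backward-model auditory attention decoding (AAD):
# reconstruct the attended speech envelope from neural data, then pick the
# talker whose envelope correlates best. Data and decoder are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_t, n_ch = 3000, 64
env_a = np.abs(rng.standard_normal(n_t))  # talker A speech envelope
env_b = np.abs(rng.standard_normal(n_t))  # talker B speech envelope

# Synthetic neural data that (noisily) encodes the attended talker A.
mixing = rng.standard_normal(n_ch)
neural = np.outer(env_a, mixing) + 0.5 * rng.standard_normal((n_t, n_ch))

# Train a linear reconstruction decoder on the first half (attention known),
# then decode attention on the held-out second half.
half = n_t // 2
X_tr, X_te = neural[:half], neural[half:]
ridge = 1e-2 * np.eye(n_ch)
w = np.linalg.solve(X_tr.T @ X_tr + ridge, X_tr.T @ env_a[:half])
recon = X_te @ w

r_a = np.corrcoef(recon, env_a[half:])[0, 1]
r_b = np.corrcoef(recon, env_b[half:])[0, 1]
print("decoded attended talker:", "A" if r_a > r_b else "B")
```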