Appraisal theories of emotion, and particularly the Component Process Model, claim that the different components of the emotion process (action tendencies, physiological reactions, expressions, and feeling experiences) are essentially driven by the results of cognitive appraisals, and that the feeling component constitutes a central integration and representation of these processes. Given the complexity of the proposed architecture, comprehensive experimental tests of these predictions are difficult to perform and are lacking to date. Encouraged by the "lexical sedimentation" hypothesis, we propose here an indirect examination of the compatibility of these theoretical assumptions with the semantic structure of a set of major emotion words, as measured in a cross-language and cross-cultural study. Specifically, we performed a secondary analysis of a large-scale data set containing ratings of affective features covering all components of the emotion process for 24 emotion words in 27 countries, constituting profiles of emotion-specific appraisals, action tendencies, physiological reactions, expressions, and feeling experiences. The results of a series of hierarchical regression analyses examining the predictions of the theoretical model are highly consistent with the claim that appraisal patterns determine the structure of the response components, which in turn predict central dimensions of the feeling component.
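To make the analytic logic concrete, the sketch below illustrates one way such a two-step hierarchical regression could be set up: response-component profiles regressed on appraisal ratings, and feeling dimensions regressed first on appraisals alone and then on appraisals plus response components. This is a minimal illustration under stated assumptions; the file name, column names, and variable groupings are hypothetical and are not the authors' actual data set or variables.

```python
# Illustrative sketch only: the data file, column names, and variable
# groupings are assumptions, not the study's actual materials.
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per emotion word x country, with mean ratings.
df = pd.read_csv("emotion_word_ratings.csv")  # assumed file name

appraisals = ["novelty", "goal_conduciveness", "coping_potential", "norm_compatibility"]
components = ["action_tendency", "physiological_reaction", "expression"]
feelings = ["valence", "arousal"]

# Step 1: regress each response component on the appraisal profile.
for comp in components:
    fit = sm.OLS(df[comp], sm.add_constant(df[appraisals])).fit()
    print(f"{comp}: R^2 = {fit.rsquared:.2f}")

# Step 2: hierarchical entry (appraisals first, then response components)
# to see how much additional variance in each feeling dimension the
# response components explain beyond the appraisals.
for feel in feelings:
    base = sm.OLS(df[feel], sm.add_constant(df[appraisals])).fit()
    full = sm.OLS(df[feel], sm.add_constant(df[appraisals + components])).fit()
    print(f"{feel}: Delta R^2 from components = {full.rsquared - base.rsquared:.2f}")
```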

Source: http://dx.doi.org/10.1080/02699931.2018.1481369

Similar Publications

In recent years, substantial strides have been made in the field of visual image reconstruction, particularly in its capacity to generate high-quality visual representations from human brain activity while considering semantic information. This advancement not only enables the recreation of visual content but also provides valuable insights into the intricate processes occurring within high-order functional brain regions, contributing to a deeper understanding of brain function. However, using fused semantics to reconstruct visual images from brain activity amounts to semantics-guided image generation and may ignore the underlying neural computational mechanisms, so it does not represent true reconstruction from brain activity.

Loss of Insight in Syndromes Associated with Frontotemporal Lobar Degeneration: Clinical and Imaging Features.

Am J Geriatr Psychiatry

December 2024

Department of Clinical and Experimental Sciences (DA, BB), University of Brescia, Brescia, Italy; Molecular Markers Laboratory (BB), IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy.

Objectives: The present study aims to assess the prevalence, associated clinical symptoms, longitudinal changes, and imaging correlates of Loss of Insight (LOI), which is still unexplored in syndromes associated with Frontotemporal Lobar Degeneration (FTLD).

Design: Retrospective longitudinal cohort study, from Oct 2009 to Feb 2023.

Setting: Tertiary Frontotemporal Dementia research clinic.

A discrete convolutional network for entity relation extraction.

Neural Netw

January 2025

State Key Laboratory of Public Big Data, Guizhou University, 550025, China; Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Guizhou University, 550025, China; College of Computer Science and Technology, Guizhou University, 550025, China.

Relation extraction examines every entity pair in a sentence independently to identify predefined relationships between named entities. Because these entity pairs share the same contextual features of the sentence, they give rise to a complicated semantic structure. To distinguish the semantic expressions of different relation instances, manually designed rules or elaborate deep architectures are usually applied to learn task-relevant representations.

Table Extraction with Table Data Using VGG-19 Deep Learning Model.

Sensors (Basel)

January 2025

Faculty of Science and Environmental Studies, Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada.

In recent years, significant progress has been achieved in understanding and processing tabular data. However, existing approaches often rely on task-specific features and model architectures, posing challenges in accurately extracting table structures amidst diverse layouts, styles, and noise contamination. This study introduces a comprehensive deep learning methodology that is tailored for the precise identification and extraction of rows and columns from document images that contain tables.

Data-Efficient Bone Segmentation Using Feature Pyramid-Based SegFormer.

Sensors (Basel)

December 2024

Master's Program in Information and Computer Science, Doshisha University, Kyoto 610-0394, Japan.

The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks.
