Neuroscience and Visual Art: Moving Through Empathy to the Ineffable.

Psychiatr Danub

Clare College Cambridge, Department of Psychiatry, University of Cambridge, Cambridge, UK,

Published: November 2018

In this article we discuss recent work on neurobiology and the visual arts, and its impact on human pleasure, wellbeing and mental health. We briefly present our model of the human person and apply it to visual art, and we consider how empathy has been proposed as an important factor in how visual art affects the human person, with its links to neuroscience and anthropology, and thus how visual art can put human beings in touch with their deepest feelings and even with the ineffable.


Similar Publications

Medical Visual Question Answering aims to assist doctors in decision-making when answering clinical questions regarding radiology images. Nevertheless, current models learn cross-modal representations by housing the vision and text encoders in two separate spaces, which inevitably leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model through Contrastive Representation Learning with Adversarial Masking.


Background: At present, although some studies have offered certain insights into the genetic factors related to unruptured intracranial aneurysms (uIAs), the potential genetic targets associated with uIAs remain largely unknown. Thus, this research adopted Mendelian randomization (MR) analysis to study two genome-wide association studies on uIAs, aiming to determine the reliable genetic susceptibility and potential therapeutic targets for uIAs.

Methods: This study summarizes the data of expression quantitative trait loci (eQTL) as exposure data.


Looking at the world often involves not just seeing things, but feeling things. Modern feedforward machine vision systems that learn to perceive the world in the absence of active physiology, deliberative thought, or any form of feedback that resembles human affective experience offer tools to demystify the relationship between seeing and feeling, and to assess how much of visually evoked affective experiences may be a straightforward function of representation learning over natural image statistics. In this work, we deploy a diverse sample of 180 state-of-the-art deep neural network models trained only on canonical computer vision tasks to predict human ratings of arousal, valence, and beauty for images from multiple categories (objects, faces, landscapes, art) across two datasets.


Human perception of art in the age of artificial intelligence.

Front Psychol

January 2025

The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, NSW, Australia.

Recent advancements in Artificial Intelligence (AI) have rendered image-synthesis models capable of producing complex artworks that appear nearly indistinguishable from human-made works. Here we present a quantitative assessment of human perception of and preference for art generated by OpenAI's DALL·E 2, a leading AI tool for art creation. Participants were presented with pairs of artworks, one human-made and one AI-generated, in either a preference-choice task or an origin-discrimination task.


Given that multimodal recognition systems must integrate color-emotion space information from multiple feature sources, effectively fusing this information presents a significant challenge. This article proposes a three-dimensional (3D) color-emotion space visual feature extraction model for multimodal data integration, based on an improved Gaussian mixture model, to address these issues. Unlike traditional methods, which often struggle with redundant information and high model complexity, our approach optimizes feature fusion by employing entropy and visual feature sequences.
