Recognition of environmental changes is essential in everyday life. In this study, recognition of animate objects by elderly people was examined with two methods (introduction, restoration) and two types (addition, deletion) of change. For restoration, deletions and additions were produced by eliminating features from pictures and by reintroducing the deleted features, respectively. For introduction, additions and deletions were produced by adding features to and deleting features from the original pictures, respectively. Thirty-seven subjects (M age = 74 yr.) viewed each card for 10 sec. (learning phase) and were then asked (test phase) whether they had viewed the card in the learning phase and to rate their confidence in their answer. Percentages of correct rejections and confidence ratings were higher for introductions than for restorations and for deletions than for additions. The findings are similar to those in young adults and children, indicating that the asymmetric effects in recognition of animate objects are developmentally robust.
DOI: http://dx.doi.org/10.2466/PMS.110.1.69-76
PeerJ Comput Sci
November 2024
Shaanxi Artists Association, Shaanxi, China.
The creation of 3D animation increasingly prioritizes the enhancement of character effects, narrative depth, and audience engagement to address the growing demands for visual stimulation, cultural enrichment, and interactive experiences. The advancement of virtual reality (VR) animation is anticipated to require sustained collaboration among researchers, animation experts, and hardware developers over an extended period to achieve full maturity. This article explores the use of Virtual Reality Modeling Language (VRML) in generating 3D stereoscopic forms and environments, applying texture mapping, optimizing lighting effects, and establishing interactive user responses, thereby enriching the 3D animation experience.
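As a rough illustration of the VRML features mentioned above, the minimal scene below is a sketch only, not material from the article: the texture file name and node names are placeholders. It combines a simple 3D form, texture mapping, a light source, and a TouchSensor-driven interaction that spins the object when the user clicks it; any VRML97-compliant viewer should render and animate such a scene.

```vrml
#VRML V2.0 utf8
# Illustrative sketch (not from the article): a textured, lit cube that
# spins once when clicked. "character_skin.jpg" is a placeholder texture.
DirectionalLight { direction 0 -1 -1 intensity 0.8 }      # lighting effect
DEF SPINNER Transform {
  children [
    DEF TOUCH TouchSensor { }                              # interactive user response
    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.8 0.8 0.8 }
        texture ImageTexture { url "character_skin.jpg" }  # texture mapping
      }
      geometry Box { size 2 2 2 }                          # 3D form
    }
  ]
}
DEF CLOCK TimeSensor { cycleInterval 4 }
DEF SPIN OrientationInterpolator {
  key [ 0, 0.5, 1 ]
  keyValue [ 0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318 ]
}
ROUTE TOUCH.touchTime TO CLOCK.set_startTime
ROUTE CLOCK.fraction_changed TO SPIN.set_fraction
ROUTE SPIN.value_changed TO SPINNER.set_rotation
```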
Front Psychol
October 2024
Department of Life Sciences, University of Trieste, Trieste, Italy.
Despite the interest in animacy perception, few studies have considered sensory modalities other than vision. However, even everyday experience suggests that the auditory sense can also contribute to the recognition of animate beings, for example through the identification of voice-like sounds or through the perception of sounds that are the by-products of locomotion. Here we review the studies that have investigated the responses of humans and other animals to different acoustic features that may indicate the presence of a living entity, with particular attention to the neurophysiological mechanisms underlying such perception.
iScience
November 2024
Faculty of Psychology, UniDistance Suisse, Brig, Switzerland.
Most researchers agree that some stages of object recognition can proceed implicitly. Implicit recognition occurs when an object is automatically and unintentionally encoded and represented in the brain even though it is irrelevant to the current task. However, no consensus has been reached on the level of semantic abstraction to which such implicit processing can proceed.
Sci Rep
November 2024
Laboratory of Behavioral and Cognitive Neuroscience, Stanford University, Stanford, CA, USA.
In this study, we examined the relatively unexplored realm of face perception, investigating the activities within human brain face-selective regions during the observation of faces at both subordinate and superordinate levels. We recorded intracranial EEG signals from the ventral temporal cortex in neurosurgical patients implanted with subdural electrodes during viewing of face subcategories (human, mammal, bird, and marine faces) as well as various non-face control stimuli. The results revealed a noteworthy correlation in response patterns across all face-selective areas in the ventral temporal cortex, not only within the same face category but also extending to different face categories.
Speech-driven facial animation technology is generally categorized into two main types: 3D and 2D talking face. Both have garnered considerable research attention in recent years. However, to our knowledge, research into the 3D talking face has not progressed as deeply as that into the 2D talking face, particularly in terms of lip-sync and perceptual mouth movements.