We conducted a visibility graph analysis (a Space Syntax method) of a virtual environment to examine how the configurational salience of global and local landmarks (i.e., their relative positions in the environment), as compared to their visual salience, affects the probability of their depiction on sketch maps. Participants in two experimental conditions produced sketch maps from memory after exploring either with a layout map or without one; participants in a third condition produced sketch maps in parallel with exploration. The third condition yielded more detailed sketch maps, but landmarks with higher configurational salience were depicted more frequently across all conditions. Whereas the inclusion of global landmarks on sketch maps was best predicted by their size, both visual salience and isovist size (i.e., the area from which a landmark was visible) predicted the frequency of depiction for local landmarks. Our findings imply that people determine the relevance of landmarks not only by their visual salience but even more by their configurational salience.
DOI: http://dx.doi.org/10.1007/s10339-015-0726-5
Commun Psychol
December 2024
Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Wales, UK.
Cognitive maps are thought to arise, at least in part, from our intrinsic curiosity to explore unknown places. However, it remains untested how curiosity shapes aspects of spatial exploration in humans. Combining a virtual reality task with indices of exploration complexity, we found that pre-exploration curiosity states predicted how much individuals spatially explored environments, whereas markers of visual exploration determined post-exploration feelings of interest.
Salud Colect
October 2024
Clinical Psychologist. Doctoral student in the interdisciplinary gender studies program, Universidad Rey Juan Carlos, Madrid, Spain.
This essay explores the affective maps, or emotional archives, of racialized communities in Spain, focusing on the Caribbean Afro-diaspora in Madrid. It questions how migratory grief is prescribed by the government without accounting for the colonial wound, racial trauma, and the geopolitics of emotions, while delving into everyday structural racism. Drawing on decolonial theory and Black feminism, as well as narrative healing practices created by migrant collectives, qualitative research was conducted during 2023-2024, comprising 25 in-depth interviews and two group workshops with 15 anti-racist activists.
Sci Rep
October 2024
Art School, Northwest University, Xi'an, 710127, China.
Ancient murals embody profound historical, cultural, scientific, and artistic values, yet many are afflicted with challenges such as pigment shedding or missing parts. While deep learning-based completion techniques have yielded remarkable results in restoring natural images, their application to damaged murals has been unsatisfactory due to data shifts and limited modeling efficacy. This paper proposes a novel progressive reasoning network designed specifically for mural image completion, inspired by the mural painting process.
IEEE Trans Pattern Anal Mach Intell
December 2024
Video-to-Video synthesis (Vid2Vid) achieves remarkable performance in generating photo-realistic video from a sequence of semantic maps, such as segmentation, sketch, and pose. However, this pipeline is heavily limited by high computational cost and long inference latency, attributable mainly to two factors: 1) network architecture parameters, and 2) the sequential data stream. Recently, the parameter counts of image-based generative models have been significantly reduced via more efficient network architectures.
IEEE Trans Vis Comput Graph
July 2024
Existing facial editing methods have achieved remarkable results, yet they often fall short in supporting multimodal conditional local facial editing. One significant piece of evidence is that their output image quality degrades dramatically after several iterations of incremental editing, as they do not support local editing. In this paper, we present a novel multimodal generative and fusion framework for globally consistent local facial editing (FACEMUG) that can handle a wide range of input modalities and enable fine-grained, semantic manipulation while leaving unedited parts unchanged.