Advances in virtual and augmented reality have increased the demand for immersive and engaging 3D experiences. To create such experiences, it is crucial to understand visual attention in 3D environments, which is typically modeled by means of saliency maps. While attention in 2D images and traditional media has been widely studied, there is still much to explore in 3D settings.
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, which guide attention towards relevant areas based on the task or goal of the viewer. While this is well known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodologies, metrics, and participants, which makes drawing conclusions and comparisons between tasks particularly difficult.
IEEE Trans Vis Comput Graph
November 2023
Understanding human visual behavior within virtual reality environments is crucial to fully leverage their potential. While previous research has provided rich visual data from human observers, existing gaze datasets often suffer from the absence of multimodal stimuli. Moreover, no dataset has yet gathered eye gaze trajectories (i.
Human performance is poor at detecting certain changes in a scene, a phenomenon known as change blindness. Although the exact reasons for this effect are not yet completely understood, there is a consensus that it is due to our constrained attention and memory capacity: We create our own mental, structured representation of what surrounds us, but such a representation is limited and imprecise. Previous efforts investigating this effect have focused on 2D images; however, there are significant differences regarding attention and memory between 2D images and the viewing conditions of daily life.
Time perception is fluid and affected by manipulations to visual inputs. Previous literature shows that changes to low-level visual properties alter time judgments at the millisecond level. At longer intervals, in the span of seconds and minutes, high-level cognitive effects (e.
IEEE Trans Vis Comput Graph
May 2022
Understanding and modeling the dynamics of human gaze behavior in 360° environments is crucial for creating, improving, and developing emerging virtual reality applications. However, recruiting human observers and acquiring enough data to analyze their behavior when exploring virtual environments requires complex hardware and software setups, and can be time-consuming. Generating virtual observers could help overcome this limitation, and thus stands as an open problem in this medium.
Background: To quantify the development of gaze stability throughout life during short and long fixational tasks using eye-tracking technology.
Methods: Two hundred and fifty-nine participants aged between 5 months and 77 years were recruited over the course of the study. All participants underwent a complete ophthalmological assessment.
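Fixational gaze stability of the kind measured here is commonly summarized with the bivariate contour ellipse area (BCEA): the area of the ellipse containing a given proportion of gaze samples, with smaller values indicating steadier fixation. A minimal sketch (the function name and the 68% default are illustrative assumptions, not taken from this study):

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate Contour Ellipse Area: area of the ellipse
    expected to contain proportion p of the (x, y) gaze
    samples, assuming a bivariate normal distribution."""
    k = -np.log(1.0 - p)                 # chi-square scaling for coverage p
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]        # horizontal/vertical correlation
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
```

Because BCEA scales with the product of the two gaze dispersions, halving the spread of fixation samples in both axes divides the area by four.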
Painters are masters in replicating the visual appearance of materials. While the perception of material appearance is not yet fully understood, painters seem to have acquired an implicit understanding of the key visual cues that we need to accurately perceive material properties. In this study, we directly compare the perception of material properties in paintings and in renderings by collecting professional realistic paintings of rendered materials.
IEEE Comput Graph Appl
September 2021
Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still in the process of developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established techniques for editing, and one of the most recurrent resources for this is the cinematic cut, which allows content creators to seamlessly transition between scenes. One fundamental assumption of these techniques is that the content creator can control the camera; however, this assumption breaks in VR: users are free to explore 360° around them.
Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects that surround us and recognizing the materials they are made of. Nevertheless, understanding the underlying perceptual processes that take place to accurately discern the visual properties of an object is a long-standing problem.
We report an auditory effect of visual performance degradation in a virtual reality (VR) setting, where the viewing conditions are significantly different from previous studies. With the presentation of temporally congruent but spatially incongruent sound, we can degrade visual performance significantly at detection and recognition levels. We further show that this effect is robust to different types and locations of both auditory and visual stimuli.
Introduction: Around 70% to 80% of the 19 million cases of visual disability in children worldwide are due to diseases that are preventable or curable if detected early enough. Vision screening in childhood is an evidence-based and cost-effective way to detect visual disorders. However, current screening programmes face several limitations: the training required to perform them efficiently, a lack of accurate screening tools, and poor cooperation from young children.
Aim: We aim to assess oculomotor behaviour in children adopted from Eastern Europe, who are at high risk of prenatal alcohol exposure.
Methods: This cross-sectional study included 29 adoptees and 29 age-matched controls. All of them underwent a complete ophthalmological examination.
We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head.
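The naive image-based rendering approach described here can be sketched in two steps; the function names and equirectangular conventions below are assumptions for illustration, not the paper's implementation. Each pixel of the equirectangular depth map is back-projected to a 3D vertex around the viewer, and the resulting mesh is translated opposite to the head motion:

```python
import numpy as np

def equirect_to_points(depth):
    """Back-project an equirectangular depth map to 3D points
    around the viewer (vertices of the naive mesh).
    depth: (H, W) array of metric depth per pixel."""
    h, w = depth.shape
    # Pixel centers mapped to longitude [-pi, pi) and latitude [-pi/2, pi/2)
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # unit view directions
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]                # (H, W, 3) vertices

def reproject(points, head_offset):
    """Shift the mesh opposite to the head translation, so the
    scene appears fixed in the world as the viewer moves."""
    return points - np.asarray(head_offset)
```

Rendering this translated mesh is what produces parallax; the artifacts of this naive approach (stretching and disocclusions at depth discontinuities) are what a full method must address.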
IEEE Comput Graph Appl
March 2018
Computational imaging techniques allow capturing richer, more complete representations of a scene through the introduction of novel computational algorithms, overcoming the limitations imposed by hardware and optics. The areas of application range from medical imaging to security, to areas in engineering, to name a few. Computational displays combine optics, hardware and computation to faithfully reproduce the world as seen by our eyes, something that current displays still cannot do.
Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions.
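A typical first step in analyzing captured gaze data of this kind is aggregating gaze directions into an empirical fixation-density (saliency) map over the panorama. A minimal sketch, where the map resolution, smoothing parameters, and function name are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(lon, lat, h=90, w=180, sigma=2.0):
    """Aggregate gaze directions (longitude/latitude in radians)
    into a normalized equirectangular fixation-density map."""
    cols = ((lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    rows = ((np.pi / 2 - lat) / np.pi * h).astype(int).clip(0, h - 1)
    heat = np.zeros((h, w))
    np.add.at(heat, (rows, cols), 1.0)         # count samples per cell
    # Wrap horizontally (longitude is periodic), clamp vertically
    heat = gaussian_filter(heat, sigma=sigma, mode=("nearest", "wrap"))
    return heat / heat.max() if heat.max() > 0 else heat
```

Note the horizontal wrap: unlike desktop images, an equirectangular panorama is periodic in longitude, one of the ways VR viewing conditions differ from desktop saliency analysis.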