We developed an autostereoscopic display for viewing 3D computer graphics (CG) images from a distance without special glasses or tracking devices. The images are created by referential viewing-area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG rendering is used to generate the IP/IV elemental images: the scene is rendered from each viewpoint within a referential viewing area, and the elemental images are then reconstructed from the rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen placed at the same referential viewing distance from the lens array as was used during rendering, and photographic film records the elemental images through each lens. The method yields 3D images with a long visualization depth that can be viewed from relatively long distances without apparent influence from deviated or distorted lenses in the array. We succeeded in producing actual autostereoscopic images with an image depth of several meters in front of and behind the display that still appear three-dimensional when viewed from a distance.
DOI: http://dx.doi.org/10.1109/TVCG.2010.267
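The core of the elemental-image generation is the pixel redistribution step, which reorders pixels from the rendered viewpoint images into the per-lens elemental images. As a rough illustration only, the sketch below redistributes a grid of viewpoint renderings into an elemental-image plane under a simplified pinhole-lens assumption; the array shapes, function name, and flip convention are our own illustrative assumptions, not the authors' exact algorithm, and the compensation step described in the abstract is not shown.

```python
import numpy as np

def views_to_elemental_images(views):
    """Redistribute pixels from rendered viewpoint images into
    integral-photography elemental images (pinhole-lens sketch).

    views: array of shape (U, V, H, W) -- a U x V grid of rendered
           viewpoints, each an H x W image with one pixel per lens.
    Returns an (H*U, W*V) elemental-image plane in which the U x V
    block under lens (i, j) holds, at sub-pixel (u, v), the pixel
    that lens (i, j) contributes to viewpoint (u, v).
    """
    U, V, H, W = views.shape
    plane = np.zeros((H * U, W * V), dtype=views.dtype)
    for u in range(U):
        for v in range(V):
            # Pixel (i, j) of viewpoint (u, v) lands at sub-pixel
            # (u, v) of the elemental image under lens (i, j).
            # Sub-pixel indices are flipped because each lens
            # inverts the ray direction (an assumed convention).
            plane[U - 1 - u::U, V - 1 - v::V] = views[u, v]
    return plane

# Example: 4 x 4 viewpoints rendered for a 64 x 64 lens array
# produce a 256 x 256 elemental-image plane.
views = np.random.rand(4, 4, 64, 64)
plane = views_to_elemental_images(views)
print(plane.shape)  # (256, 256)
```

The strided assignment interleaves all viewpoint images in one pass, so each lens's elemental image is assembled without explicitly looping over the lens array.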