The conventional approach to appearance prediction for 3D-printed parts is to print a thin slab of material and measure its reflectance or transmittance with a spectrophotometer: reflectance for opaque printing materials, transmittance for transparent ones.
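For a near-transparent slab, the measured transmittance is often converted to an extinction coefficient. A minimal sketch of that conversion, assuming the simple Beer-Lambert relation T = exp(-sigma * d) and neglecting scattering and surface reflections (the function name and parameters are illustrative, not from the original work):

```python
import math

def extinction_coefficient(transmittance: float, thickness_mm: float) -> float:
    """Estimate an extinction coefficient (1/mm) from a thin-slab
    transmittance measurement via Beer-Lambert: T = exp(-sigma * d).
    Ignores scattering and Fresnel losses at the surfaces, so it is
    only a first-order approximation for near-transparent materials."""
    return -math.log(transmittance) / thickness_mm

# Example: a 2 mm slab transmitting 60% of the incident light
sigma = extinction_coefficient(0.6, 2.0)
```

For strongly scattering (translucent) materials this single-number model breaks down, which is one motivation for the fuller appearance models discussed below.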
The optical properties available for an object are most often fragmented and insufficient for photorealistic rendering of the object. We propose a procedure for digitizing a translucent object with sufficient information for predictive rendering of its appearance. Based on object material descriptions, we compute optical properties and validate or adjust this object appearance model by comparing simulations with spectrophotometric measurements of the bidirectional scattering-surface reflectance distribution function (BSSRDF).
Laryngoscope Investig Otolaryngol, February 2024
Objectives: In this study, we propose a diagnostic model for automatic detection of otitis media based on the combined input of otoscopy images and wideband tympanometry measurements.
Methods: We present a neural network-based model for the joint prediction of otitis media and diagnostic difficulty. We use the subclassifications acute otitis media and otitis media with effusion.
When 3D scanning objects, the objective is usually to obtain a continuous surface. However, most surface scanning methods, such as structured light scanning, yield a point cloud. Obtaining a continuous surface from a point cloud requires a subsequent surface reconstruction step, which inherits any error introduced during computation of the point cloud.
We propose a method for direct comparison of rendered images with a corresponding photograph in order to analyze the optical properties of physical objects and test the appropriateness of appearance models. To this end, we provide a practical method for aligning a known object and a point-like light source with the configuration observed in a photograph. Our method is based on projective transformation of object edges and silhouette matching in the image plane.
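The core operation in such edge-based alignment is mapping 2D edge points through a 3x3 projective transformation (homography) before comparing silhouettes in the image plane. A minimal sketch of that mapping, with illustrative names not taken from the original work:

```python
import numpy as np

def project_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an (N, 2) array of 2D edge points.

    Points are lifted to homogeneous coordinates, transformed, and
    divided by the last coordinate to return to Cartesian form."""
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3) homogeneous
    mapped = homog @ H.T                                  # transformed points
    return mapped[:, :2] / mapped[:, 2:3]                 # perspective divide

# An identity homography leaves edge points unchanged
edges = np.array([[0.0, 0.0], [1.0, 2.0]])
out = project_points(np.eye(3), edges)
```

Silhouette matching would then score the overlap between the transformed edge set and the edges detected in the photograph, optimizing over the pose parameters that determine H.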