Augmented reality head-up displays (HUDs) require virtual-object distance matching to the real scene across an adequate field of view (FoV). At the same time, pupil-replication-based waveguide systems provide a wide FoV while keeping HUDs compact. To provide 3D imaging and enable virtual-object distance matching in such waveguide systems, we propose a time-sequential autostereoscopic imaging architecture using synchronized multi-view picture generation and eyebox formation units.
We explore the feasibility of implementing stereoscopy-based 3D images with an eye-tracking-based light-field display and actual head-up display optics for automotive applications. We translate the driver's eye position into the virtual eyebox plane via a lightweight equation that replaces the actual optics with an effective lens model, and we implement a light-field rendering algorithm using the model-processed eye-tracking data. Our experimental results with a prototype closely match our ray-tracing simulations in terms of the designed viewing conditions and the low-crosstalk margin width.
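The eye-position translation described above can be sketched as follows, assuming the effective lens model reduces to a single thin lens. The function name, focal length, and distances here are illustrative assumptions, not values from the work; the actual model may include distortion or off-axis terms.

```python
# Hedged sketch: map a tracked eye position to its conjugate point on the
# virtual eyebox plane using a thin-lens "effective lens" model.
# All numeric values below are illustrative assumptions.

def eye_to_eyebox(eye_xy, eye_z, focal_length):
    """Map a lateral eye position (x, y) at axial distance eye_z (meters)
    from the effective lens to the virtual eyebox plane, via the
    thin-lens equation 1/s_i = 1/f - 1/s_o."""
    s_o = eye_z                                   # object distance (eye to lens)
    s_i = 1.0 / (1.0 / focal_length - 1.0 / s_o)  # image (eyebox) distance
    m = -s_i / s_o                                # lateral magnification
    x, y = eye_xy
    return (m * x, m * y), s_i

# Example with hypothetical parameters: eye 10 mm off-axis, 0.8 m from
# the lens, effective focal length 0.25 m.
(ex, ey), eyebox_dist = eye_to_eyebox((0.01, 0.0), 0.8, 0.25)
```

A closed-form mapping like this is cheap enough to run per frame on the eye-tracking data stream, which is presumably why an effective lens model is preferred over full ray tracing at runtime.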