Publications by authors named "Stirling Scholes"

Article Synopsis
  • This paper introduces a Bayesian method that allows single photon avalanche diode (SPAD) arrays to act as pseudo event cameras, capturing changes in light and depth within a scene.
  • The method utilizes a changepoint detection strategy to convert direct time-of-flight (dToF) data from SPAD arrays into event streams that report changes in intensity and depth (a minimal illustrative sketch follows the synopsis).
  • It demonstrates that this integration can enhance active neuromorphic 3D imaging by reducing output redundancy and effectively capturing variations in scene depth through experiments with both synthetic and real dToF data.
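
As a concrete, hedged illustration of the changepoint idea (not the authors' algorithm), the toy Python sketch below turns a single pixel's per-frame photon counts into sparse "intensity change" events using a simple Poisson log-likelihood-ratio test; depth-change events could be produced analogously from per-pixel depth estimates. The function names, window length, and threshold are illustrative assumptions.

```python
import numpy as np

def _pois_ll(x):
    # Poisson log-likelihood at the MLE rate; log(x!) terms cancel in the ratio below.
    lam = max(x.mean(), 1e-12)
    return np.sum(x * np.log(lam) - lam)

def change_events(counts, threshold=10.0, min_seg=5):
    """Return frame indices at which a rate change is declared for one pixel."""
    events, start = [], 0
    t = start + 2 * min_seg
    while t <= len(counts):
        seg = counts[start:t]
        # Best two-rate split of the segment vs. a single rate (log-likelihood ratio).
        best = max(_pois_ll(seg[:k]) + _pois_ll(seg[k:]) - _pois_ll(seg)
                   for k in range(min_seg, len(seg) - min_seg + 1))
        if best > threshold:
            events.append(t - 1)        # emit an "event" at the current frame
            start = t - 1               # restart the test after the change
            t = start + 2 * min_seg
        else:
            t += 1
    return events

# Toy usage: constant photon rate, then a jump -> an event shortly after frame 100.
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(3.0, 100), rng.poisson(12.0, 100)]).astype(float)
print(change_events(counts))
```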

Spatially structured optical modes exhibit a group velocity lower than c, resulting in a measurable temporal delay with respect to plane waves. Here, we develop a technique to image this temporal delay and measure it across a set of optical modes. An inevitable consequence of spatially varying delay is temporal broadening of the mode.
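
A standard way to see the effect (a hedged, textbook-style illustration rather than the paper's derivation): for a mode whose plane-wave components travel at angle θ to the propagation axis, such as a Bessel-type mode with cone angle θ, the axial group velocity and the resulting delay over a propagation distance L are

```latex
v_g = c\cos\theta, \qquad
\Delta t = \frac{L}{v_g} - \frac{L}{c} \approx \frac{L\,\theta^{2}}{2c} \quad (\theta \ll 1).
```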

Non-Line of Sight (NLOS) imaging has gained attention for its ability to detect and reconstruct objects beyond the direct line of sight, using scattered light, with applications in surveillance and autonomous navigation. This paper presents a versatile framework for modeling the temporal distribution of photon detections in direct Time of Flight (dToF) Lidar NLOS systems. Our approach accurately accounts for key factors such as material reflectivity, object distance, and occlusion by utilizing a proof-of-principle simulation realized with the Unreal Engine.
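
For orientation (a textbook three-bounce relation, not the authors' exact radiometric model), a photon detected in a dToF NLOS measurement arrives at

```latex
t = \frac{r_1 + r_2 + r_3 + r_4}{c}, \qquad
N_\mathrm{det} \propto \frac{\rho_\mathrm{wall}\,\rho_\mathrm{obj}}{r_2^{2}\, r_3^{2}},
```

where r1 is the laser-to-relay-wall path, r2 the wall-to-hidden-object path, r3 the return to the wall, and r4 the wall-to-detector path; ρ denotes reflectivity, the falloff keeps only the dominant diffuse-bounce terms, and occlusion removes contributions whose paths are blocked.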

Single-Photon Avalanche Diode (SPAD) direct Time-of-Flight (dToF) sensors provide depth imaging over long distances, enabling the detection of objects even in the absence of contrast in colour or texture. However, distant objects are represented by just a few pixels and are subject to noise from solar interference, limiting the applicability of existing computer vision techniques for high-level scene interpretation. We present a new SPAD-based vision system for human activity recognition, based on convolutional and recurrent neural networks, which is trained entirely on synthetic data.
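
The sketch below is a hedged illustration of the convolutional-plus-recurrent idea (layer sizes, frame resolution, and class count are assumptions, not the authors' architecture): a small CNN embeds each low-resolution depth frame, and a GRU aggregates the embeddings over time into a per-sequence activity prediction.

```python
import torch
import torch.nn as nn

class DepthActivityNet(nn.Module):
    """Per-frame CNN features followed by a GRU over time (illustrative sizes)."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.rnn = nn.GRU(input_size=32 * 4 * 4, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, time, 1, H, W) depth frames
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))      # (b*t, 32, 4, 4)
        feats = feats.flatten(1).view(b, t, -1)
        _, h = self.rnn(feats)                 # final hidden state: (1, b, 64)
        return self.head(h[-1])                # per-sequence class logits

logits = DepthActivityNet()(torch.randn(2, 10, 1, 32, 64))  # e.g. 10 frames of 32x64 depth
```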

3D time-of-flight (ToF) image sensors are used widely in applications such as self-driving cars, augmented reality (AR), and robotics. When implemented with single-photon avalanche diodes (SPADs), compact, array-format sensors can be made that offer accurate depth maps over long distances, without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with low signal-to-background ratio (SBR) levels under high ambient illumination, may lead to difficulties in scene interpretation.
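
For reference, the signal-to-background ratio quoted here is usually defined per pixel over a histogram exposure (a standard definition, not specific to this paper):

```latex
\mathrm{SBR} = \frac{N_\mathrm{signal}}{N_\mathrm{background}},
```

so that under strong ambient illumination the background count grows while the laser return does not, driving the SBR down and making sparse, low-resolution depth maps harder to interpret.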

Single-Photon Avalanche Diode (SPAD) arrays are a rapidly emerging technology. These multi-pixel sensors have single-photon sensitivity and picosecond temporal resolution, so they can rapidly generate depth images with millimeter precision. Such sensors are a key enabling technology for future autonomous systems as they provide guidance and situational awareness.
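
The millimeter claim follows from standard dToF arithmetic (a generic illustration, not figures from this paper): a round-trip timing resolution Δt maps to a depth resolution

```latex
\Delta d = \frac{c\,\Delta t}{2} \approx 0.15\ \mathrm{mm\ per\ ps},
```

so, for example, 10 ps of timing resolution corresponds to roughly 1.5 mm in depth.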

Single-photon-sensitive depth sensors are being increasingly used in next-generation electronics for human pose and gesture recognition. However, cost-effective sensors typically have a low spatial resolution, restricting their use to basic motion identification and simple object detection. Here, we perform a temporal-to-spatial mapping that drastically increases the resolution of a simple time-of-flight sensor.
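
Schematically (a hedged forward model, not the paper's reconstruction method), a single-point dToF histogram collapses the scene onto shells of equal range around the sensor position x_s,

```latex
h(t) \;\propto\; \sum_{\mathbf{x}\in\mathrm{scene}} \alpha(\mathbf{x})\,
\delta\!\left(t - \frac{2\,\lvert\mathbf{x}-\mathbf{x}_s\rvert}{c}\right),
```

with α(x) an effective per-point return strength; recovering spatial structure from h(t) is the temporal-to-spatial inverse problem addressed here.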

Using custom laser cavities to produce a desired structured light field as the output has seen tremendous advances lately, but there is no universal approach to designing such cavities for arbitrarily defined field structures within the cavity, e.g., at both the output and gain ends.

Structured light concerns the control of light in its spatial degrees of freedom (amplitude, phase, and polarization), and has proven instrumental in many applications. The creation of structured light usually involves the conversion of a Gaussian mode to a desired structure in a single step, while the detection is often the reverse process, both of which are fundamentally lossy or imperfect. Here we show how to ideally reshape structured light in a lossless manner in a simple two-step process using conformal mapping.
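
A well-known instance of such a two-element (map plus phase-correction) transformation is the log-polar conformal mapping, quoted here purely as an illustration of the general scheme rather than as the authors' specific map:

```latex
(u,v) = \Bigl(-a\ln\tfrac{\sqrt{x^{2}+y^{2}}}{b},\; a\arctan\tfrac{y}{x}\Bigr),
\qquad
\varphi_1(x,y) = \frac{2\pi a}{\lambda f}\Bigl[\,y\arctan\tfrac{y}{x}
 - x\ln\tfrac{\sqrt{x^{2}+y^{2}}}{b} + x\Bigr],
```

where the first phase element φ1 performs the mapping in the focal plane of a lens of focal length f, and a second phase element in that plane removes the residual phase so the reshaped field propagates as intended.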

Laser brightness is crucial in many optical processes, and is optimized by high power, high beam quality (low M²) beams. Here we show how to improve the laser beam quality factor (reducing the M²) of arbitrary structured light fields in a lossless manner using continuous phase-only elements, thus allowing for an increase in brightness by a simple linear optical transformation. We demonstrate the principle with four high-M² initial beams, converting each to a Gaussian (M² ≈ 1) with a dramatic increase in brightness of >10×.
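
The brightness gain follows from the standard scaling of beam brightness (a generic relation, not the paper's specific figures):

```latex
B \propto \frac{P}{\lambda^{2}\, M_x^{2}\, M_y^{2}},
```

so a lossless transformation that leaves the power P unchanged while reducing M² toward 1 in both axes increases the brightness in proportion to the reduction of the product M_x² M_y².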

Encoding information in high-dimensional degrees of freedom of photons has led to new avenues in various quantum protocols such as communication and information processing. Yet fully benefiting from the increase in dimension requires a deterministic detection system.
