The majority of computer vision applications assume that the camera adheres to the pinhole camera model. However, most optical systems introduce undesirable effects. By far the most evident of these is radial distortion, which is particularly noticeable in fish-eye camera systems, where the effect is relatively extreme. Several authors have developed models of fish-eye lenses that can be used to describe the fish-eye displacement. Our aim is to evaluate the accuracy of several of these models. Thus, we present a method by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods. Several of the models from the literature are then examined against this empirically derived curve.
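The fish-eye displacement the abstract refers to can be illustrated by comparing classic fish-eye projection functions against the pinhole model. The sketch below uses four projection models commonly cited in the fish-eye literature (equidistant, equisolid-angle, stereographic, orthographic); these are assumed examples for illustration, not necessarily the specific models evaluated in the paper.

```python
import math

def pinhole(theta, f=1.0):
    """Pinhole (rectilinear) radial image distance for incident angle theta (radians)."""
    return f * math.tan(theta)

# Classic fish-eye projection models (assumed illustrative set):
def equidistant(theta, f=1.0):
    return f * theta

def equisolid(theta, f=1.0):
    return 2.0 * f * math.sin(theta / 2.0)

def stereographic(theta, f=1.0):
    return 2.0 * f * math.tan(theta / 2.0)

def orthographic(theta, f=1.0):
    return f * math.sin(theta)

def displacement(model, theta, f=1.0):
    """How far a given model pulls a point toward the image centre
    relative to the pinhole projection (positive = inward displacement)."""
    return pinhole(theta, f) - model(theta, f)
```

All four fish-eye models compress peripheral angles more than the pinhole projection does, which is what makes fields of view beyond 90 degrees representable on a finite sensor.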
DOI: http://dx.doi.org/10.1364/AO.49.003338
Commun Biol
December 2024
Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464, Konstanz, Germany.
Eye tracking has emerged as a key method for understanding how animals process visual information, identifying crucial elements of perception and attention. Traditional eye tracking in fish often alters animal behavior because the techniques are invasive, while non-invasive methods are limited either to 2D tracking or to restricting the animals after training. Our study introduces a non-invasive technique for tracking and reconstructing the retinal view of free-swimming fish in a large 3D arena without behavioral training.
J Imaging
March 2024
Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montreal, QC H3G1M8, Canada.
The application of large field-of-view (FoV) cameras equipped with fish-eye lenses brings notable advantages to various real-world computer vision applications, including autonomous driving. While deep learning has proven successful in conventional computer vision applications using regular perspective images, its potential in fish-eye camera contexts remains largely unexplored due to the limited datasets available for fully supervised learning. Semi-supervised learning offers a potential solution to this challenge.
Sensors (Basel)
January 2024
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing 100083, China.
Three-dimensional (3D) localization plays an important role in visual sensor networks. However, the frame rate and flexibility of existing vision-based localization systems are limited by their reliance on synchronized multiple cameras. To address this, this paper develops an indoor 3D localization system based on unsynchronized multiple cameras.
Sensors (Basel)
November 2023
Kanagawa Environmental Research Center, Hiratsuka 254-0014, Japan.
Radar is an important sensing technology for the three-dimensional positioning of aircraft. This method requires detecting the response from the object to the signal transmitted from the antenna, but its accuracy becomes unstable at low altitudes near the antenna due to effects such as obstruction and reflection from surrounding buildings. Accordingly, there is a need for a high-accuracy ground-based positioning method.
Cameras with rolling shutters (RSs) dominate consumer markets but introduce distortions when capturing motion. Many methods have been proposed to mitigate RS distortions for applications such as vision-aided odometry and three-dimensional (3D) reconstruction, but they usually require the line delay d between successive image rows to be known.
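The role of the line delay d is that each image row of a rolling-shutter frame is captured slightly later than the one above it. A minimal sketch under the common model where row i of a frame starting at time t0 is exposed at t0 + i·d (the function name and the convention that the frame timestamp refers to the first row are assumptions):

```python
def row_timestamps(t_frame, num_rows, line_delay):
    """Per-row capture times for one rolling-shutter frame.

    t_frame    -- timestamp of the first row (assumed convention)
    num_rows   -- number of image rows in the frame
    line_delay -- line delay d between successive rows, in seconds
    """
    return [t_frame + i * line_delay for i in range(num_rows)]
```

Methods that compensate RS distortion use these per-row times to assign each row its own camera pose, which is why an accurate value of d matters.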