Despite significant strides toward vehicle autonomy, robust perception under low-light conditions remains a persistent challenge. In this study, we investigate the potential of multispectral imaging, leveraging deep learning models to enhance object detection performance for nighttime driving. Features encoded from the red, green, and blue (RGB) visual spectrum and from thermal infrared images are combined into a multispectral object detection model. This approach has proven more effective than using visual channels alone, as thermal images provide complementary information for discriminating objects in low-illumination conditions. However, few studies have examined how to fuse these two modalities effectively for optimal object detection performance. In this work, we present a framework based on the Faster R-CNN architecture with a feature pyramid network, and we design fusion approaches using concatenation and addition operators at varying stages of the network to analyze their impact on detection performance. Our experimental results on the KAIST and FLIR datasets show that our framework outperforms both unimodal baselines and existing multispectral object detectors.
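As a minimal illustration of the two fusion operators the abstract mentions (not the paper's actual architecture), the sketch below applies element-wise addition and channel concatenation to toy RGB and thermal feature maps with NumPy; the array shapes and the optional 1×1 projection matrix are assumptions for the example:

```python
import numpy as np

def fuse_add(rgb_feat, thermal_feat):
    """Element-wise addition fusion: channel counts must match."""
    assert rgb_feat.shape == thermal_feat.shape
    return rgb_feat + thermal_feat

def fuse_concat(rgb_feat, thermal_feat, proj=None):
    """Channel-concatenation fusion, optionally followed by an
    assumed 1x1 projection back to the original channel count."""
    fused = np.concatenate([rgb_feat, thermal_feat], axis=0)  # (2C, H, W)
    if proj is not None:  # proj: (C, 2C) weight matrix, a hypothetical reduction
        c2, h, w = fused.shape
        fused = (proj @ fused.reshape(c2, -1)).reshape(-1, h, w)
    return fused

# Toy feature maps: C=4 channels, 8x8 spatial resolution.
rgb = np.ones((4, 8, 8))
thr = np.full((4, 8, 8), 2.0)
added = fuse_add(rgb, thr)      # shape (4, 8, 8); every value is 3.0
concat = fuse_concat(rgb, thr)  # shape (8, 8, 8): channels stacked
```

In a real detector these operators would be applied to convolutional feature maps at one or more stages of the backbone, with concatenation typically followed by a learned projection so downstream layers see a fixed channel width.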
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10816846 | PMC
http://dx.doi.org/10.3390/jimaging10010012 | DOI Listing
Unlabelled: Ultrasound imaging plays an important role in the early detection and management of breast cancer. This study aimed to evaluate the imaging performance of a range of clinically used breast ultrasound systems using a set of novel spherical-lesion contrast-detail (C-D) and anechoic-target (A-T) phantoms.
Methods: C-D and A-T phantoms were imaged using a range of clinical breast ultrasound systems and imaging modes.
Comput Biol Med
January 2025
School of Computer Science, Chungbuk National University, Cheongju 28644, Republic of Korea. Electronic address:
The fusion index is a critical metric for quantitatively assessing the transformation of in vitro muscle cells into myotubes in the biological and medical fields. Calculating this index manually involves the labor-intensive counting of numerous muscle cell nuclei in images and determining whether each nucleus lies inside or outside the myotubes, leading to significant inter-observer variation. To address these challenges, this study proposes a three-stage process that integrates the strengths of pattern recognition and deep learning to calculate the fusion index automatically.
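The final computation the abstract describes reduces to a simple ratio: the fraction of detected nuclei that fall inside a myotube. A hedged sketch, assuming the upstream stages yield nucleus centroids and a myotube membership test (both names here are hypothetical):

```python
def fusion_index(nuclei_centroids, inside_myotube):
    """Fusion index: fraction of nuclei located inside myotubes.

    nuclei_centroids: list of (x, y) nucleus positions, assumed to come
        from an earlier detection stage.
    inside_myotube: callable (x, y) -> bool, e.g. a lookup into a
        myotube segmentation mask (assumed interface).
    """
    if not nuclei_centroids:
        return 0.0
    inside = sum(1 for (x, y) in nuclei_centroids if inside_myotube(x, y))
    return inside / len(nuclei_centroids)

# Toy example: pretend a myotube occupies the region x >= 5.
mask = lambda x, y: x >= 5
nuclei = [(2, 3), (6, 1), (7, 8), (9, 9)]
print(fusion_index(nuclei, mask))  # 3 of 4 nuclei inside -> 0.75
```

The hard part the paper automates is producing these inputs reliably (detecting every nucleus and segmenting myotubes); the index itself is this one division.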
Sci Rep
January 2025
School of Food and Pharmacy, Zhejiang Ocean University, Zhoushan, 316022, People's Republic of China.
Accurate and rapid segmentation of key parts of frozen tuna, along with precise pose estimation, is crucial for automated processing. However, challenges such as size differences and indistinct features of tuna parts, as well as the complexity of determining fish poses in multi-fish scenarios, hinder this process. To address these issues, this paper introduces TunaVision, a vision model based on YOLOv8 designed for automated tuna processing.
Viruses
January 2025
Section for Veterinary Clinical Microbiology, Department of Veterinary and Animal Sciences, University of Copenhagen, DK-1870 Frederiksberg, Denmark.
Introduction of African swine fever virus (ASFV) into pig herds can occur via virus-contaminated feed or other objects. Knowledge about ASFV survival in different matrices and under different conditions is required to understand indirect virus transmission. ASFV can remain infectious for extended periods outside pigs.
Sensors (Basel)
January 2025
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.