Images acquired by a single visible-light sensor are highly susceptible to lighting conditions, weather changes, and other factors, while images acquired by a single infrared sensor generally suffer from poor resolution, low contrast, a low signal-to-noise ratio, and blurred visual effects. Fusing visible and infrared images avoids the disadvantages of either single sensor and, by combining the advantages of both, significantly improves image quality. Infrared and visible image fusion is widely used in agriculture, industry, medicine, and other fields. In this study, firstly, the architecture of mainstream infrared and visible image fusion technologies and applications was reviewed; secondly, the state of application in robot vision, medical imaging, agricultural remote sensing, and industrial defect detection was discussed; thirdly, the evaluation indicators of the main image fusion methods were categorized into subjective evaluation and objective evaluation, the properties of current mainstream techniques were analyzed and compared in detail, and the outlook for image fusion was assessed; finally, infrared and visible image fusion was summarized. The results show that the definition and efficiency of fused infrared and visible images have improved significantly. However, some problems remain, such as poor accuracy of the fused image and irretrievably lost pixels. The adaptive design of traditional algorithm parameters needs improvement, and innovation in fusion algorithms should be combined with neural network optimization to further improve fusion accuracy, reduce noise interference, and improve the real-time performance of the algorithms.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9862268
DOI: http://dx.doi.org/10.3390/s23020599
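The review above divides fusion quality assessment into subjective and objective evaluation. As a rough illustration of the objective side, the sketch below computes two indicators that are standard in the fusion literature, image entropy (EN) and source-to-fused mutual information (MI); the function names and the 8-bit grayscale assumption are ours, not taken from the review.

```python
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (EN) of an 8-bit grayscale image; higher means
    the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(src: np.ndarray, fused: np.ndarray, bins: int = 256) -> float:
    """Mutual information (MI) between a source image and the fused image;
    higher means more source information was preserved."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins,
                                 range=((0, 255), (0, 255)))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal over source bins
    py = pxy.sum(axis=0, keepdims=True)  # marginal over fused bins
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

# Usage with stand-in images; a real evaluation would load aligned
# infrared, visible, and fused images instead.
ir = np.random.randint(0, 256, (256, 256))
vis = np.random.randint(0, 256, (256, 256))
fused = (ir + vis) // 2  # naive average as a placeholder fusion result
print(entropy(fused))
print(mutual_information(ir, fused) + mutual_information(vis, fused))
```

Higher entropy of the fused image and a higher combined MI from both sources are typically read as more input information retained.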
Sensors (Basel)
January 2025
School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult.
Sensors (Basel)
January 2025
The 54th Research Institute, China Electronics Technology Group Corporation, College of Signal and Information Processing, Shijiazhuang 050081, China.
Multi-sensor fusion, such as LiDAR- and camera-based 3D object detection, is a key technology in autonomous driving and robotics. However, traditional 3D detection models are limited to recognizing predefined categories and struggle with unknown or novel objects. Given the complexity of real-world environments, research into open-vocabulary 3D object detection is essential.
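For context on the LiDAR-camera fusion this snippet refers to, below is a minimal sketch of the geometric step most camera-LiDAR pipelines share: projecting 3D LiDAR points into the image plane so point features can be paired with pixels. The intrinsic matrix K and extrinsic transform T here are placeholder values, not calibration from the cited work.

```python
import numpy as np

def project_lidar_to_image(points: np.ndarray, K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """points: (N, 3) LiDAR xyz; K: (3, 3) camera intrinsics;
    T: (4, 4) LiDAR-to-camera extrinsics.
    Returns (M, 2) pixel coordinates for points in front of the camera."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous (N, 4)
    cam = (T @ homo.T).T[:, :3]        # transform into the camera frame
    cam = cam[cam[:, 2] > 0]           # drop points behind the image plane
    uv = (K @ cam.T).T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> pixels

# Placeholder calibration: identity extrinsics, a generic pinhole camera.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
T = np.eye(4)
pts = np.array([[1.0, 0.2, 5.0], [-0.5, 0.1, 8.0]])
print(project_lidar_to_image(pts, K, T))
```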
Sensors (Basel)
January 2025
Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.
To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry- and bright-light-based illumination for image acquisition, deep learning models that classify surfaces as non-defective (OK) or defective (NOK) by fusing dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested: a new model built and trained from scratch, and transfer learning from pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defect types the system could identify while keeping computational complexity low by performing multi-modal fusion at the decision level.
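As a hedged illustration of the decision-level fusion described above, the sketch below combines per-modality OK/NOK votes with a simple OR rule; the probability inputs, threshold, and OR rule are illustrative assumptions, not the paper's exact scheme.

```python
def fuse_decisions(p_deflectometry: float, p_bright_light: float,
                   threshold: float = 0.5) -> str:
    """Each argument is one modality's predicted probability that the
    surface is defective, P(NOK). The part is flagged NOK if either
    modality is confident (OR rule)."""
    nok = (p_deflectometry >= threshold) or (p_bright_light >= threshold)
    return "NOK" if nok else "OK"

# Usage with stand-in classifier outputs:
print(fuse_decisions(0.82, 0.31))  # NOK: the deflectometry channel flags a defect
print(fuse_decisions(0.12, 0.08))  # OK: both channels agree the surface is clean
```

An OR rule favors recall over precision, usually the right trade-off in quality control, where a missed defect costs more than a false alarm.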
Sensors (Basel)
January 2025
School of Electronic and Communication Engineering, Sun Yat-sen University, Shenzhen 518000, China.
Exploring the relationships between plant phenotypes and genetic information requires advanced phenotypic analysis techniques for precise characterization. However, the diversity and variability of plant morphology challenge existing methods, which often fail to generalize across species and require extensive annotated data, especially for 3D datasets. This paper proposes a zero-shot 3D leaf instance segmentation method using RGB sensors.
Sensors (Basel)
January 2025
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China.
To address the lack of feature matching caused by occlusion, and the limitations of fixed model parameters, in cross-domain person re-identification, a method based on multi-branch pose-guided occlusion generation is proposed. The method effectively improves the accuracy of person matching and enables identity matching even when pedestrian features are misaligned. Firstly, a novel pose-guided occlusion generation module is designed to enhance the model's ability to extract discriminative features from non-occluded areas.
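As a loose sketch of what pose-guided occlusion generation can look like as a training-time augmentation (the keypoint format and masking policy below are assumptions, not the paper's module), one can erase a region around a visible body keypoint so the model is forced to rely on the remaining, non-occluded areas:

```python
import numpy as np

def occlude_at_keypoint(img: np.ndarray, keypoints: np.ndarray,
                        size: int = 32, rng=np.random) -> np.ndarray:
    """img: (H, W, 3) uint8; keypoints: (K, 3) rows of (x, y, visibility),
    a COCO-style format assumed here for illustration. Erases a square
    patch around one randomly chosen visible keypoint."""
    visible = keypoints[keypoints[:, 2] > 0]
    if len(visible) == 0:
        return img
    x, y = visible[rng.randint(len(visible))][:2].astype(int)
    h, w = img.shape[:2]
    out = img.copy()
    out[max(0, y - size // 2):min(h, y + size // 2),
        max(0, x - size // 2):min(w, x + size // 2)] = 0  # black patch
    return out

# Usage with a dummy image and one visible keypoint:
img = np.zeros((256, 128, 3), dtype=np.uint8)
kps = np.array([[64.0, 100.0, 1.0], [70.0, 180.0, 0.0]])
aug = occlude_at_keypoint(img, kps)
```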