In large-scale laying hen farming, timely detection of dead chickens helps prevent cross-infection, disease transmission, and economic loss. Dead chicken detection is still performed manually and is one of the major labor costs on commercial farms. This study proposed a new method for dead chicken detection using multi-source images and deep learning and evaluated the detection performance with different source images. We first introduced a pixel-level image registration method that used depth information to project the near-infrared (NIR) and depth images into the coordinate system of the thermal infrared (TIR) image, yielding registered images. Then, the registered single-source (TIR, NIR, depth), dual-source (TIR-NIR, TIR-depth, NIR-depth), and multi-source (TIR-NIR-depth) images were separately used to train dead chicken detection models with object detection networks, including YOLOv8n, Deformable DETR, Cascade R-CNN, and TOOD. The results showed that, at an IoU (Intersection over Union) threshold of 0.5, the models performed similarly but not identically. Among them, the model using the NIR-depth image with Deformable DETR achieved the best performance, with an average precision (AP) of 99.7% and a recall of 99.0% (both at IoU = 0.5). As the IoU threshold increased, we found the following: among single-source images, the NIR image achieved the best performance, with an AP of 74.4% (IoU = 0.5:0.95) in Deformable DETR; dual-source images outperformed single-source images, with the TIR-NIR and NIR-depth images outperforming the TIR-depth image, achieving APs of 76.3% and 75.9% (IoU = 0.5:0.95) in Deformable DETR, respectively; and the multi-source image also outperformed single-source images, but offered no significant improvement over the TIR-NIR or NIR-depth images, with an AP of 76.7% (IoU = 0.5:0.95) in Deformable DETR. By analyzing the detection performance with different source images, this study provides a reference for selecting and using multi-source images to detect dead laying hens on commercial farms.
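The depth-based registration described in the abstract can be sketched as a standard pinhole-camera reprojection: each depth pixel is back-projected to a 3-D point, rigidly transformed into the TIR camera frame, and reprojected with the TIR intrinsics. The intrinsic matrices (K_d, K_t) and extrinsics (R, t) below are hypothetical calibration inputs not given in the abstract; this is a minimal illustrative sketch, not the authors' implementation.

```python
import numpy as np

def register_depth_to_tir(depth, K_d, K_t, R, t):
    """Project each depth-camera pixel into the TIR image plane.

    depth : (H, W) array of metric depths from the depth/NIR camera
    K_d, K_t : 3x3 intrinsic matrices (depth/NIR and TIR cameras)
    R, t : rotation (3x3) and translation (3,) from the depth frame
           to the TIR frame
    Returns an (H, W, 2) array of (u, v) coordinates in the TIR image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    # Back-project each pixel to a 3-D point in the depth camera frame
    rays = np.linalg.inv(K_d) @ pix
    pts = rays * depth.reshape(1, -1)          # scale each ray by its depth
    # Rigid transform into the TIR camera frame, then project
    pts_tir = R @ pts + t.reshape(3, 1)
    proj = K_t @ pts_tir
    return (proj[:2] / proj[2:]).T.reshape(H, W, 2)
```

Sampling the NIR and depth values at the returned coordinates (with rounding or interpolation) produces the pixel-aligned multi-source images used for training.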


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10251900
DOI: http://dx.doi.org/10.3390/ani13111861


Similar Publications

To achieve real-time monitoring and intelligent maintenance of transformers, a framework based on deep vision and digital twin has been developed. An enhanced visual detection model, DETR + X, is proposed, implementing multidimensional sample data augmentation through Swin2SR and GAN networks. This model converts one-dimensional DGA data into three-dimensional feature images based on Gram angle fields, facilitating the transformation and fusion of heterogeneous modal information.


A Machine Vision Perspective on Droplet-Based Microfluidics.

Adv Sci (Weinh)

January 2025

Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR, 999077, China.

Microfluidic droplets, with their unique properties and broad applications, are essential in chemical, biological, and materials synthesis research. Despite the flourishing studies on artificial-intelligence-accelerated microfluidics, most research efforts have focused on the upstream design phase of microfluidic systems. Generating user-desired microfluidic droplets remains laborious, inefficient, and time-consuming.


ChromTR: chromosome detection in raw metaphase cell images via deformable transformers.

Front Med

December 2024

Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China.

Chromosome karyotyping is a critical way to diagnose various hematological malignancies and genetic diseases, of which chromosome detection in raw metaphase cell images is the most critical and challenging step. In this work, focusing on the joint optimization of chromosome localization and classification, we propose ChromTR to accurately detect and classify 24 classes of chromosomes in raw metaphase cell images. ChromTR incorporates semantic feature learning and class distribution learning into a unified DETR-based detection framework.


DV-DETR: Improved UAV Aerial Small Target Detection Algorithm Based on RT-DETR.

Sensors (Basel)

November 2024

School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.

For drone-based detection tasks, accurately identifying small-scale targets like people, bicycles, and pedestrians remains a key challenge. In this paper, we propose DV-DETR, an improved detection model based on the Real-Time Detection Transformer (RT-DETR), specifically optimized for small target detection in high-density scenes. To achieve this, we introduce three main enhancements: (1) ResNet18 as the backbone network to improve feature extraction and reduce model complexity; (2) the integration of recalibration attention units and deformable attention mechanisms in the neck network to enhance multi-scale feature fusion and improve localization accuracy; and (3) the use of the Focaler-IoU loss function to better handle the imbalanced distribution of target scales and focus on challenging samples.
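The Focaler-IoU loss mentioned in enhancement (3) linearly remaps the raw IoU over an interval [d, u] so training concentrates on a chosen difficulty band of samples. The sketch below follows the piecewise-linear form described in the Focaler-IoU paper; the interval endpoints d = 0.0 and u = 0.95 are illustrative assumptions, not values reported by this article.

```python
def focaler_iou(iou: float, d: float = 0.0, u: float = 0.95) -> float:
    """Remap a raw IoU onto [0, 1] over the interval [d, u].

    IoU below d maps to 0 and above u maps to 1; in between it
    scales linearly, focusing gradient signal on samples in [d, u].
    """
    return min(max((iou - d) / (u - d), 0.0), 1.0)

def focaler_iou_loss(iou: float, d: float = 0.0, u: float = 0.95) -> float:
    # The usual IoU-style loss, 1 - IoU, applied to the remapped value.
    return 1.0 - focaler_iou(iou, d, u)
```

Shrinking [d, u] toward low IoU values emphasizes hard (poorly localized) samples, while shifting it upward emphasizes easy ones; the remapped term can replace the raw IoU inside CIoU/GIoU-style regression losses.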


Green Apple Detection Method Based on Multidimensional Feature Extraction Network Model and Transformer Module.

J Food Prot

January 2025

The School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China.

To enhance the fast and accurate detection of pollution-free green apples for food safety, this paper uses the DETR network as a framework and proposes a new method for pollution-free green apple detection based on a multidimensional feature extraction network and a Transformer module. First, the improved DETR network's main feature extraction module adopts the ResNet18 network and replaces some residual layers with deformable convolutions (DCNv2), enabling the model to better adapt to pollution-free fruit changes at different scales and angles while eliminating the impact of microbial contamination on fruit detection. Subsequently, an extended spatial pyramid pooling model (DSPP) and a multiscale residual aggregation module (FRAM) are integrated, which help reduce feature noise and minimize the loss of underlying features during feature extraction. The fusion of the two modules enhances the model's ability to detect objects at different scales, thereby improving the accuracy of near-color fruit detection.

