Computer vision techniques are becoming increasingly popular for monitoring pig behavior. For instance, object detection models allow us to detect the presence of pigs, their location, and their posture. The performance of object detection models can be affected by variations in lighting conditions (e.g., intensity, spectrum, and uniformity). Furthermore, lighting conditions can influence pigs' active and resting behavior. In the context of experiments testing different lighting conditions, a detection model was developed to detect the location and postures of group-housed growing-finishing pigs. The objective of this paper is to validate the model, developed using YOLOv8, for detecting standing, sitting, sternal lying, and lateral lying pigs. The training, validation, and test datasets included annotations of pigs from 10 to 24 wk of age under 10 different light settings varying in intensity, spectrum, and uniformity. Pig detection was comparable across the different lighting conditions, despite slightly lower posture agreement under warm light and uneven light distribution, likely due to a less clear contrast between pigs and their background and the presence of shadows. The detection reached a mean average precision (mAP) of 89.4%. Standing was the best-detected posture, with the highest precision, sensitivity, and F1 score, while the sensitivity and F1 score of sitting were the lowest. This lower performance resulted from confusion of sitting with sternal lying and standing, as a consequence of the top camera view and the low occurrence of sitting pigs in the annotated dataset. This issue is inherent to pig behavior and could be tackled using data augmentation. Some confusion was also observed between the two lying types due to occlusion by pen mates or by the pigs' own bodies, and grouping both lying postures into a single class improved detection (mAP = 97.0%). Therefore, comparing resting postures (both lying types) to active postures could lead to a more reliable interpretation of pigs' behavior. Some detection errors were observed, e.g., duplicate detections of the same pig due to posture uncertainty, dirt on the cameras detected as a pig, and pigs missed due to occlusion. The localization accuracy, measured by the intersection over union, was higher than 95.5% for 75% of the dataset, meaning that the locations of predicted pigs were very close to those of annotated pigs. Tracking individual pigs revealed challenges with ID changes and switches between pen mates, requiring further work.
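The abstract reports standard detection metrics (intersection over union, precision, sensitivity, F1, mAP). As an illustration only, and not part of the published method, the minimal sketch below shows how IoU between two axis-aligned bounding boxes and per-class precision/sensitivity/F1 can be computed; the (x1, y1, x2, y2) box format, the 0.5 IoU match threshold, and the example counts are assumptions.

```python
# Minimal sketch (not from the paper): IoU between two axis-aligned boxes and
# per-class precision / sensitivity / F1 from match counts. The box format
# (x1, y1, x2, y2), IoU threshold, and example numbers are assumptions.

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_sensitivity_f1(tp, fp, fn):
    """Per-class metrics from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return precision, sensitivity, f1

# Example: a predicted "standing" box counts as a true positive when it overlaps
# an annotated standing pig with IoU above the chosen threshold.
pred = (120, 80, 260, 200)   # hypothetical predicted box
gt = (115, 85, 255, 210)     # hypothetical annotated box
print(iou(pred, gt) >= 0.5)  # True -> counted as a true positive
print(precision_sensitivity_f1(tp=90, fp=8, fn=12))
```

The same IoU function is what underlies the localization result quoted above: for 75% of the dataset, the overlap between a predicted box and its annotated box exceeded 0.955.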

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11635830
DOI: http://dx.doi.org/10.1093/tas/txae167
