The chicken is the world's most farmed animal. In this work, we introduce the Chicks4FreeID dataset, the first publicly available dataset focused on the reidentification of individual chickens. We begin by providing a comprehensive overview of the existing animal reidentification datasets. Next, we conduct closed-set reidentification experiments on the introduced dataset, using transformer-based feature extractors in combination with two different classifiers. We evaluate performance across domain transfer, supervised, and one-shot learning scenarios. The results demonstrate that transfer learning is particularly effective with limited data, and training from scratch is not necessarily advantageous even when sufficient data are available. Among the evaluated models, the vision transformer paired with a linear classifier achieves the highest performance, with a mean average precision of 97.0%, a top-1 accuracy of 95.1%, and a top-5 accuracy of 100.0%. Our evaluation suggests that the vision transformer architecture produces higher-quality embedding clusters than the Swin transformer architecture. All data and code are publicly shared under a CC BY 4.0 license.
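As a rough illustration of the transfer-learning setup the abstract describes (a pretrained vision transformer used as a feature extractor with a linear classifier on top), the sketch below shows one way such a pipeline could look. It is not the authors' released code: the ViT-B/16 backbone from torchvision, the 768-dimensional embedding size, and the `NUM_CHICKENS` identity count are assumptions made for illustration only.

```python
# Hedged sketch, not the paper's implementation: a frozen, pretrained ViT
# backbone as feature extractor plus a linear classifier for closed-set
# identification of individual chickens.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CHICKENS = 50  # hypothetical number of identities in the closed set

# Frozen pretrained backbone used purely as a feature extractor (transfer learning).
weights = ViT_B_16_Weights.DEFAULT
backbone = vit_b_16(weights=weights)
backbone.heads = nn.Identity()          # drop the ImageNet classification head
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

# Linear classifier over the 768-dim ViT embeddings.
classifier = nn.Linear(768, NUM_CHICKENS)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
preprocess = weights.transforms()       # resizing/normalization expected by the backbone

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step of the linear head on a batch of chicken crops."""
    with torch.no_grad():
        embeddings = backbone(images)   # (batch, 768) features from the frozen ViT
    logits = classifier(embeddings)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def top_k_accuracy(images: torch.Tensor, labels: torch.Tensor, k: int = 5) -> float:
    """Top-k accuracy, one of the closed-set metrics reported in the abstract."""
    logits = classifier(backbone(images))
    topk = logits.topk(k, dim=1).indices
    return (topk == labels.unsqueeze(1)).any(dim=1).float().mean().item()
```

Swapping the backbone (e.g. for a Swin transformer) or the head (e.g. for a k-nearest-neighbour classifier on the embeddings) would cover the other configurations the abstract compares, without changing the overall structure of this sketch.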
DOI: http://dx.doi.org/10.3390/ani15010001