AI Article Synopsis

  • Deep learning models can analyze biological behaviors in video data, but they usually need a lot of manually-labeled training data, which is hard to get.
  • This paper introduces LFAGPA, a method that uses automatically generated (but noisy) annotations to train deep learning models for detecting ants in videos.
  • The approach involves extracting foreground objects from videos to create pseudo-annotations and using these, optionally together with limited human labels, to train a detector: it reaches a 77% F1 score with no manual annotations and 81% with only 10% human annotations, matching the performance of training on full human annotations.

Article Abstract

Deep learning (DL) based detection models are powerful tools for large-scale analysis of dynamic biological behaviors in video data. Supervised training of a DL detection model often requires a large amount of manually labeled training data, which are time-consuming and labor-intensive to acquire. In this paper, we propose LFAGPA (Learn From Algorithm-Generated Pseudo-Annotations), which utilizes (noisy) annotations that are automatically generated by algorithms to train DL models for ant detection in videos. Our method consists of two main steps: (1) generate foreground objects using a (set of) state-of-the-art foreground extraction algorithm(s); (2) treat the results from step (1) as pseudo-annotations and use them to train deep neural networks for ant detection. We tackle several challenges: how to make use of automatically generated noisy annotations, how to learn from multiple annotation sources, and how to combine algorithm-generated annotations with human-labeled annotations (when available) in this learning framework. In experiments, we evaluate our method on 82 videos (20,348 image frames in total) captured under natural conditions in a tropical rainforest for the study of dynamic ant behavior. With no manual annotation cost, using only algorithm-generated annotations, our method achieves a decent detection performance (77% F1 score). Moreover, using only 10% of the manual annotations, our method can train a DL model that performs as well as one trained on the full human annotations (81% F1 score).
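The two-step pipeline described above (foreground extraction, then training on the extracted objects as pseudo-labels) can be illustrated with a minimal sketch. The example below assumes OpenCV's MOG2 background subtractor as the foreground extraction algorithm and uses an illustrative area threshold and file name; the paper's actual extractors, parameters, and detector training are not reproduced here.

```python
# Minimal sketch of step (1): turn foreground blobs into per-frame bounding
# boxes that can serve as (noisy) pseudo-annotations for a DL detector.
# MOG2, the area threshold, and the file name are assumptions, not the
# paper's exact setup.
import cv2

MIN_AREA = 50  # assumed minimum blob area (in pixels) to keep as an ant candidate

def generate_pseudo_annotations(video_path):
    """Return one list of (x, y, w, h) boxes per frame of the video."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    annotations = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Remove small speckle noise, then keep connected components above the area threshold.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]
        annotations.append(boxes)

    cap.release()
    return annotations

# Step (2) would train a standard detector on these noisy boxes, optionally
# mixed with a small fraction of human-labeled frames (e.g., 10%).
pseudo_labels = generate_pseudo_annotations("ant_colony_clip.mp4")
```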

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10354180
DOI: http://dx.doi.org/10.1038/s41598-023-28734-6

Publication Analysis

Top Keywords

algorithm-generated pseudo-annotations (8)
noisy annotations (8)
automatically generated (8)
ant detection (8)
algorithm-generated annotations (8)
annotations method (8)
annotations (7)
detection (5)

