Field-road classification for agricultural vehicles in China based on pre-trained visual model.

PeerJ Comput Sci

College of Information and Electrical Engineering, China Agricultural University, Beijing, China.

Published: October 2024

Article Abstract

Field-road classification, which automatically identifies the activity (either in-field or on-road) of each point in Global Navigation Satellite System (GNSS) trajectories, is a critical process in the behavior analysis of agricultural vehicles. To capture movement patterns specific to agricultural operations, we propose a multi-view field-road classification method that extracts a physical and a visual feature vector to represent each trajectory point. We propose a task-specific approach that uses a pre-trained visual model to effectively extract visual features. First, an image is generated from a point plus its neighboring points to provide the point's contextual information. Then, an image recognition model, a fine-tuned ResNet, is developed using the pretraining-finetuning paradigm: a pre-training process trains an image recognition model (ResNet) on natural image datasets (e.g., ImageNet), and a fine-tuning process updates the parameters of the pre-trained model using the trajectory point images, giving the model both general knowledge and task-specific knowledge. Finally, a visual feature is extracted for each point by the fine-tuned model, thereby overcoming the limitations caused by the small scale of the generated images. To validate the effectiveness of our multi-view field-road classification, we conducted experiments on four trajectory datasets (Wheat 2021, Paddy, Wheat 2023, and Wheat 2024). The results demonstrate that the proposed method achieves competitive accuracy, i.e., 92.56%, 87.91%, 90.31%, and 94.23% on the four trajectory datasets, respectively. Extensive experiments demonstrate that our approach consistently outperforms the existing state-of-the-art method on the four trajectory datasets by 2.99%, 4.42%, 2.88%, and 2.77% in F1-score, respectively. In addition, we conduct an in-depth analysis to verify the necessity and effectiveness of our method.

Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11640928
DOI: http://dx.doi.org/10.7717/peerj-cs.2359

Publication Analysis

Top Keywords (frequency)

field-road classification (16)
trajectory datasets (12)
agricultural vehicles (8)
pre-trained visual (8)
model (8)
visual model (8)
multi-view field-road (8)
visual feature (8)
trajectory point (8)
image recognition (8)
