Robust, pre-trained foundation models allow straightforward adaptation to specific downstream tasks. In particular, the recently released Segment Anything Model (SAM) has demonstrated impressive results in semantic segmentation. Because data collection is generally time-consuming and costly, this study investigates whether such foundation models can reduce the need for training data. To assess model behavior under reduced training data, five test datasets for semantic segmentation are used. The study focuses on traffic sign segmentation and compares the results against Mask R-CNN, the field's leading model. The findings indicate that SAM does not surpass the leading model on this specific task, regardless of the quantity of training data. Nevertheless, a knowledge-distilled student architecture derived from SAM exhibits no reduction in accuracy when trained on data that have been reduced by 95%.
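The abstract does not specify the distillation objective. As a generic illustration only (not the paper's method), knowledge distillation trains a compact student to reproduce a large teacher's outputs; for segmentation this is often a pixel-wise loss between temperature-softened teacher and student mask logits. A minimal NumPy sketch, with all names hypothetical:

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Pixel-wise soft-label distillation loss for binary masks.

    Softens both logit maps with temperature T, then measures the
    binary cross-entropy of the student's soft mask against the
    teacher's soft mask, averaged over all pixels.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    p_teacher = sigmoid(teacher_logits / T)  # soft target mask
    p_student = sigmoid(student_logits / T)  # student prediction

    eps = 1e-7  # avoid log(0)
    p_student = np.clip(p_student, eps, 1.0 - eps)
    ce = -(p_teacher * np.log(p_student)
           + (1.0 - p_teacher) * np.log(1.0 - p_student))
    return float(ce.mean())

# A student that matches the teacher incurs a lower loss than one
# that predicts the opposite mask:
teacher = np.array([[4.0, -4.0], [3.0, -3.0]])
matched = distillation_loss(teacher, teacher)
opposed = distillation_loss(-teacher, teacher)
print(matched < opposed)  # True
```

In practice this term would be minimized alongside (or instead of) a ground-truth loss, which is what lets the student train on far fewer labeled images.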


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433296
DOI: http://dx.doi.org/10.3390/jimaging10090220

