Publications by authors named "Yoshiki Nota"

Article Synopsis
  • Large vision-language models such as CLIP show strong zero-shot transfer, but adapting them to few-shot recognition has relied on methods like Tip-Adapter, whose adapter grows in size with the number of training samples.
  • Proto-Adapter is introduced as a more efficient alternative: a constant-sized adapter whose weights are derived from class prototype representations, improving performance without inflating the model.
  • Fine-tuning with a distance margin penalty further sharpens Proto-Adapter, yielding discriminative results even with minimal training data, as shown in extensive experiments; a sketch of the design appears below.
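
A minimal sketch of the idea, assuming CLIP-style L2-normalized features: the adapter is a fixed matrix of per-class prototypes, its affinity logits are blended with CLIP's zero-shot logits, and a cosine margin stands in for the distance margin penalty. The function names, the alpha/beta blend, the exponential sharpening (borrowed from the Tip-Adapter recipe), and the exact margin form are assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def build_prototypes(support_feats, support_labels, num_classes):
    """One prototype per class: the mean of the few-shot embeddings,
    re-normalized. The adapter is thus a fixed (num_classes x dim)
    matrix regardless of how many shots are available."""
    protos = torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])
    return F.normalize(protos, dim=-1)

def adapted_logits(query_feats, protos, text_weights, alpha=1.0, beta=5.5):
    """Blend prototype-affinity logits with CLIP's zero-shot logits.
    The exponential sharpening and the alpha/beta values follow the
    Tip-Adapter convention and are an assumption here."""
    q = F.normalize(query_feats, dim=-1)
    zero_shot = 100.0 * q @ text_weights.t()   # CLIP zero-shot logits
    affinity = q @ protos.t()                  # cosine similarity to prototypes
    return zero_shot + alpha * torch.exp(-beta * (1.0 - affinity))

def margin_penalty_loss(query_feats, labels, protos, margin=0.1, temp=0.07):
    """A hypothetical cosine distance-margin penalty: subtract a margin
    from the similarity to the true-class prototype before the softmax,
    pushing each sample toward its own prototype."""
    q = F.normalize(query_feats, dim=-1)
    sims = q @ protos.t()
    idx = torch.arange(len(labels))
    sims[idx, labels] -= margin
    return F.cross_entropy(sims / temp, labels)
```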

One way to improve annotation efficiency is active learning. Its goal is to select, from a large pool of unlabeled images, those whose labeling would most improve the accuracy of the machine learning model. Conventional methods rank unlabeled images with deep neural networks, which involve many computation nodes and long computation times; we instead propose a method that uses no deep neural network and requires no additional training for unlabeled-image selection. A sketch of one such training-free selection strategy follows.
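
As a hedged illustration of training-free selection (a generic stand-in, not necessarily the method proposed in the paper), the greedy k-center rule below picks the unlabeled images whose fixed features, e.g. from a frozen pretrained extractor or simple image statistics, lie farthest from everything already labeled; no network is trained at any step.

```python
import numpy as np

def select_k_center(unlabeled_feats, labeled_feats, k):
    """Greedy k-center (coreset) selection: repeatedly pick the unlabeled
    image whose feature vector is farthest from every labeled or
    already-selected point. Requires only feature vectors, no training."""
    # distance from each unlabeled point to its nearest labeled point
    d = np.min(
        np.linalg.norm(unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=-1),
        axis=1,
    )
    picked = []
    for _ in range(k):
        i = int(np.argmax(d))          # farthest point is the most "uncovered"
        picked.append(i)
        # the newly picked point now also counts as covered
        d = np.minimum(d, np.linalg.norm(unlabeled_feats - unlabeled_feats[i], axis=1))
    return picked
```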
