The steady rise of online shopping goes hand in hand with the development of increasingly complex ML and NLP models. While most use cases are cast as specialized supervised learning problems, we argue that practitioners would greatly benefit from general and transferable representations of products. In this work, we build on recent developments in contrastive learning to train FashionCLIP, a CLIP-like model adapted for the fashion industry. We demonstrate the effectiveness of the representations learned by FashionCLIP with extensive tests across a variety of tasks, datasets and generalization probes. We argue that adaptations of large pre-trained models such as CLIP offer new perspectives in terms of scalability and sustainability for certain types of players in the industry. Finally, we detail the costs and environmental impact of training, and release the model weights and code as an open-source contribution to the community.
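The CLIP-style training the abstract refers to pairs each product image with its text description and optimizes a symmetric contrastive (InfoNCE) objective over a batch: matching image-text pairs are pulled together while mismatched pairs are pushed apart. The sketch below is a minimal, dependency-free illustration of that objective, not FashionCLIP's actual training code; the function name, temperature value, and plain-list embeddings are illustrative assumptions.

```python
import math


def clip_contrastive_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss over paired (image, text) embeddings.

    image_embs, text_embs: lists of equal-length float vectors, where
    pair i is the matching image/description for product i.
    Hypothetical sketch of the CLIP-style objective, not the paper's code.
    """
    def normalize(v):
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]

    imgs = [normalize(v) for v in image_embs]
    txts = [normalize(v) for v in text_embs]
    n = len(imgs)

    # Cosine-similarity logits between every image and every text,
    # scaled by the temperature.
    logits = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
               for j in range(n)] for i in range(n)]

    def cross_entropy(row, target):
        # Numerically stable -log softmax(row)[target].
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        return log_z - row[target]

    # Image-to-text direction: each image must pick its own caption.
    loss_i2t = sum(cross_entropy(logits[i], i) for i in range(n)) / n
    # Text-to-image direction: each caption must pick its own image.
    cols = [[logits[i][j] for i in range(n)] for j in range(n)]
    loss_t2i = sum(cross_entropy(cols[j], j) for j in range(n)) / n
    return (loss_i2t + loss_t2i) / 2
```

With correctly aligned pairs the loss is near zero, and it grows when images are matched to the wrong descriptions, which is what drives the encoders toward a shared product representation.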


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9643437
DOI: http://dx.doi.org/10.1038/s41598-022-23052-9
