Publication
Multi-modal joint embedding for fashion product retrieval
Conference Article
Conference
IEEE International Conference on Image Processing (ICIP)
Edition
24th
Pages
400-404
Doc link
http://dx.doi.org/10.1109/ICIP.2017.8296311
Authors
A. Rubio, L. Yu, E. Simo-Serra and F. Moreno-Noguer
Abstract
Finding a product in the fashion world can be a daunting task. Every day, e-commerce sites are updated with thousands of images and their associated metadata (textual information), deepening the problem, akin to finding a needle in a haystack. In this paper, we leverage both the images and the textual metadata and propose a joint multi-modal embedding that maps both text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to effectively perform retrieval in this latent space, which is both efficient and accurate. We train this embedding on large-scale real-world e-commerce data by both minimizing the distance between related products and using auxiliary classification networks that encourage the embedding to have semantic meaning. We compare against existing approaches and show significant improvements in retrieval tasks on a large-scale e-commerce dataset. We also provide an analysis of the different metadata.
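Illustrative code sketch
The abstract describes training a joint image-text embedding with a pairwise objective on related products plus auxiliary classification networks. The following is a minimal, hypothetical PyTorch sketch of that kind of setup; it is not the authors' implementation, and every name, feature dimension, and hyper-parameter here (JointEmbedding, training_step, embed_dim, margin, aux_weight, the image/text feature sizes) is an assumption made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class JointEmbedding(nn.Module):
    """Projects pre-extracted image and text features into a shared latent space."""

    def __init__(self, img_feat_dim=2048, txt_feat_dim=300, embed_dim=128, num_classes=50):
        super().__init__()
        # Image branch: e.g. CNN features -> latent space (dimensions are assumptions).
        self.img_proj = nn.Sequential(
            nn.Linear(img_feat_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        # Text branch: e.g. aggregated word embeddings of the metadata -> latent space.
        self.txt_proj = nn.Sequential(
            nn.Linear(txt_feat_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        # Auxiliary classifier (e.g. product category) on the shared embedding.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, img_feat, txt_feat):
        img_emb = F.normalize(self.img_proj(img_feat), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return img_emb, txt_emb


def training_step(model, img_feat, txt_feat, labels, margin=0.2, aux_weight=0.1):
    """Contrastive loss over in-batch pairs plus an auxiliary classification loss."""
    img_emb, txt_emb = model(img_feat, txt_feat)

    # Cosine similarities between every image and every text in the batch;
    # the diagonal holds the matching (related) product pairs.
    sim = img_emb @ txt_emb.t()
    pos = sim.diag().unsqueeze(1)

    # Hinge ranking loss: negatives should score at least `margin` below positives.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0.0)
    contrastive = cost.mean()

    # Classifying both modalities' embeddings pushes semantic structure into the space.
    logits = torch.cat([model.classifier(img_emb), model.classifier(txt_emb)], dim=0)
    aux = F.cross_entropy(logits, torch.cat([labels, labels], dim=0))

    return contrastive + aux_weight * aux


if __name__ == "__main__":
    model = JointEmbedding()
    img_feat = torch.randn(8, 2048)          # stand-in image (CNN) features
    txt_feat = torch.randn(8, 300)           # stand-in text (metadata) features
    labels = torch.randint(0, 50, (8,))      # stand-in category labels
    loss = training_step(model, img_feat, txt_feat, labels)
    loss.backward()
    print("loss:", float(loss))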
Categories
active vision, learning (artificial intelligence).
Author keywords
image-text embedding; deep learning; fashion
Scientific reference
A. Rubio, L. Yu, E. Simo-Serra and F. Moreno-Noguer. Multi-modal joint embedding for fashion product retrieval, 24th IEEE International Conference on Image Processing, 2017, Beijing, pp. 400-404.