Publication

Learning grounded word meaning representations on similarity graphs

Conference Article

Conference

Conference on Empirical Methods in Natural Language Processing (EMNLP)

Edition

16th

Pages

4760-4769

Doc link

https://aclanthology.org/2021.emnlp-main.391

Abstract

This paper introduces a novel approach to learning visually grounded meaning representations of words as low-dimensional node embeddings on an underlying graph hierarchy. The lower level of the hierarchy models modality-specific word representations through dedicated but communicating graphs, while the higher level puts these representations together on a single graph to learn a representation jointly from both modalities. The topology of each graph models similarity relations among words, and is estimated jointly with the graph embedding. The assumption underlying this model is that words sharing similar meaning correspond to communities in an underlying similarity graph in a low-dimensional space. We named this model Hierarchical Multi-Modal Similarity Graph Embedding (HM-SGE). Experimental results validate the ability of HM-SGE to simulate human similarity judgements and concept categorization, outperforming the state of the art.
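As a rough illustration of the idea sketched in the abstract, the snippet below builds one k-nearest-neighbour similarity graph per modality from toy word vectors, fuses them into a single joint graph, and embeds the nodes with a Laplacian eigenmap. This is only a minimal sketch under simplifying assumptions (random toy features, a fixed k-NN topology, a plain spectral embedding); the actual HM-SGE model estimates the graph topology jointly with the embedding and is not reproduced here.

    # Illustrative sketch only, NOT the HM-SGE implementation: per-modality
    # k-NN similarity graphs from toy word vectors, a fused joint graph, and
    # a spectral node embedding.
    import numpy as np

    rng = np.random.default_rng(0)
    words = ["dog", "cat", "car", "truck", "apple", "banana"]
    n, k, dim = len(words), 3, 2

    # Toy modality-specific word vectors (placeholders for real text/visual features).
    text_feats = rng.normal(size=(n, 50))
    visual_feats = rng.normal(size=(n, 64))

    def knn_similarity_graph(feats, k):
        """Symmetric k-NN adjacency weighted by cosine similarity."""
        normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sim = normed @ normed.T
        np.fill_diagonal(sim, -np.inf)           # exclude self-loops
        adj = np.zeros_like(sim)
        for i in range(len(feats)):
            nbrs = np.argsort(sim[i])[-k:]       # k most similar words
            adj[i, nbrs] = sim[i, nbrs]
        adj = np.maximum(adj, adj.T)             # symmetrize
        return np.clip(adj, 0.0, None)           # keep non-negative weights

    def spectral_embedding(adj, dim):
        """Node embedding from the smallest non-trivial Laplacian eigenvectors."""
        deg = adj.sum(axis=1)
        lap = np.diag(deg) - adj
        eigvals, eigvecs = np.linalg.eigh(lap)
        return eigvecs[:, 1:dim + 1]             # skip the constant eigenvector

    # Lower level: one similarity graph per modality.
    text_graph = knn_similarity_graph(text_feats, k)
    visual_graph = knn_similarity_graph(visual_feats, k)

    # Higher level: a single joint graph combining both modalities (here, a mean).
    joint_graph = 0.5 * (text_graph + visual_graph)

    embeddings = spectral_embedding(joint_graph, dim)
    for w, e in zip(words, embeddings):
        print(f"{w:>8s}: {np.round(e, 3)}")

In this toy version the graph fusion is a simple average of adjacency matrices and the embedding is purely spectral; in the paper, the modality-specific graphs communicate during learning and the similarity topology itself is optimized together with the node embeddings.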

Categories

pattern clustering, pattern recognition

Author keywords

grounded word representations, similarity graphs

Scientific reference

M. Dimiccoli, H. Wendt and P. Batlle. Learning grounded word meaning representations on similarity graphs, 16th Conference on Empirical Methods in Natural Language Processing, 2021, Punta Cana, Dominican Republic, pp. 4760-4769.