Rekabsaz, N., Lupu, M., & Hanbury, A. (2016). Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity. Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval, Pisa, EU. http://hdl.handle.net/20.500.12708/86458
Organisational unit:
E194-04 - Research Unit Data Science
E194-01 - Research Unit Software Engineering
-
Date (published):
2016
-
Event name:
Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval
-
Event date:
21-Jul-2016
-
Event place:
Pisa, Italy
-
Keywords:
word embeddings; threshold
-
Abstract:
Word embedding, especially with its recent developments, promises a quantification of the similarity between terms. However, it is not clear to what extent this similarity value can be genuinely meaningful and useful for subsequent tasks. We explore how far the similarity score obtained from the models is really indicative of term relatedness. We first observe and quantify the uncertainty factor of the word embedding models with regard to the similarity value. Based on this factor, we introduce a general threshold on various dimensions which effectively filters the highly related terms. Our evaluation on four information retrieval collections supports the effectiveness of our approach, as the results with the introduced threshold are significantly better than the baseline while being equal to or statistically indistinguishable from the optimal results.
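To illustrate the core idea of the abstract, below is a minimal sketch of thresholded embedding similarity: compute cosine similarity between a query term's vector and the rest of the vocabulary, and keep only the terms that clear a cutoff. The vocabulary, the random placeholder vectors, and the threshold value 0.5 are all illustrative assumptions; the paper derives its threshold from the observed uncertainty of the embedding model rather than fixing it by hand.

    import numpy as np

    # Toy vocabulary with random placeholder vectors; in practice these would
    # come from a trained word2vec- or GloVe-style embedding model.
    rng = np.random.default_rng(0)
    vocab = ["car", "automobile", "vehicle", "banana"]
    vectors = {w: rng.standard_normal(300) for w in vocab}

    def cosine(u, v):
        # Cosine similarity between two dense vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def related_terms(query, threshold):
        """Keep only terms whose similarity to `query` clears the threshold."""
        q = vectors[query]
        scored = ((w, cosine(q, vectors[w])) for w in vocab if w != query)
        return [(w, s) for w, s in scored if s >= threshold]

    # The threshold here is a hand-picked stand-in for the paper's
    # uncertainty-derived cutoff.
    print(related_terms("car", threshold=0.5))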