Neural word and entity embeddings for ad hoc retrieval
Year: 2018
Abstract: Learning low-dimensional dense representations of the vocabulary of a corpus, known as neural embeddings, has gained much attention in the information retrieval community. While there have been several successful attempts at integrating embeddings into the ad hoc document retrieval task, no systematic study has been reported that explores the various aspects of neural embeddings and how they impact retrieval performance. In this paper, we perform a methodical study of how neural embeddings influence the ad hoc document retrieval task. More specifically, we systematically explore the following research questions: (i) do methods based solely on neural embeddings perform competitively with state-of-the-art retrieval methods, with and without interpolation? (ii) is there any statistically significant difference between the performance of retrieval models based on word embeddings compared to knowledge graph entity embeddings? and (iii) is there a significant difference between using locally trained neural embeddings compared to globally trained neural embeddings? We examine these three research questions across both hard and all queries. Our study finds that word embeddings do not show competitive performance with any of the baselines. In contrast, entity embeddings show competitive performance with the baselines and, when interpolated, outperform the best baselines for both hard and soft queries.
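The interpolation referred to in the abstract is, in the common formulation, a linear combination of an embedding-based relevance score with a baseline retrieval score. The following is a generic sketch of that idea, not the paper's exact model; all function and parameter names (`interpolate`, `cosine`, `lam`) are illustrative assumptions.

```python
def cosine(u, v):
    """Cosine similarity between two dense vectors, a typical
    embedding-based relevance score for a query/document pair."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def interpolate(baseline_score, embedding_score, lam=0.5):
    """Linearly combine a baseline retrieval score (e.g. from a
    language-model or BM25 ranker) with an embedding-based score.
    lam is the interpolation weight, tuned empirically."""
    return lam * embedding_score + (1.0 - lam) * baseline_score
```

With `lam = 0` the ranker reduces to the baseline alone, and with `lam = 1` to the pure embedding-based method, which is what makes the interpolation weight a natural axis along which to compare the two families of methods.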
Keyword(s): Neural embeddings, Ad hoc document retrieval, TREC, Knowledge graph
contributor author | Faezeh Ensan | en |
contributor author | E. Bagheri | en |
contributor author | F. Al-Obeidat | en |
date accessioned | 2020-06-06T13:40:05Z | |
date available | 2020-06-06T13:40:05Z | |
date issued | 2018 | |
identifier uri | http://libsearch.um.ac.ir:80/fum/handle/fum/3364271 | |
language | English | |
type | Journal Paper | |
contenttype | External Fulltext | |
journal title | Information Processing & Management | en |
pages | 657-673 | |
journal volume | 54 | |
journal issue | 1 | |
identifier link | https://profdoc.um.ac.ir/paper-abstract-1068444.html | |
identifier articleid | 1068444 |