Evaluating Implicit Feedback Models Using Searcher Simulations

ACM Transactions on Information Systems (ACM TOIS), 2005, Vol. 23(3), pp. 325-361

In this article, we describe an evaluation of relevance feedback (RF) algorithms using searcher simulations. Since these algorithms select additional terms for query modification based on inferences made from searcher interaction, not on relevance information searchers explicitly provide (as in traditional RF), we refer to them as implicit feedback models. We introduce six models that base their term-selection decisions on searcher interaction and use different approaches to rank query modification terms.
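As an illustration only (the six models themselves are not reproduced here), the sketch below shows one simple way an implicit feedback model could rank query modification terms from interaction evidence: terms that occur frequently in the document representations a searcher has viewed, but are rare in the collection, score highly. The function name and scoring formula are assumptions made for this example, not any of the paper's models.

```python
import math
from collections import Counter

def rank_modification_terms(viewed_texts, collection_doc_freq, collection_size, top_n=6):
    """Illustrative implicit feedback scorer (a sketch, not one of the paper's models).

    viewed_texts: texts the searcher interacted with (titles, summaries, documents),
                  treated as implicit relevance evidence.
    collection_doc_freq: dict mapping term -> number of documents containing it.
    collection_size: total number of documents in the collection.
    Returns the top_n candidate query modification terms.
    """
    term_counts = Counter()
    for text in viewed_texts:
        term_counts.update(text.lower().split())

    scores = {}
    for term, tf in term_counts.items():
        # Terms unseen in the frequency table are treated as uninformative.
        df = collection_doc_freq.get(term, collection_size)
        # tf-idf style score: frequent in viewed content, rare in the collection.
        scores[term] = tf * math.log(collection_size / df)

    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]


# Toy usage: two viewed summaries and a small document-frequency table.
viewed = ["implicit feedback for interactive information retrieval",
          "query expansion using relevance feedback"]
dfs = {"feedback": 120, "implicit": 15, "query": 300, "retrieval": 90}
print(rank_modification_terms(viewed, dfs, collection_size=10000))
```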

The aim of this article is to determine which of these models should be used to assist searchers in the systems we develop. To evaluate these models, we use searcher simulations, which afford more control over experimental conditions than studies with human subjects and allow complex interaction to be modelled without costly human experimentation. The simulation-based evaluation methodology measures how well the models learn the distribution of terms across relevant documents (i.e., learn what information is relevant) and how well they improve search effectiveness (i.e., create effective search queries). Our findings show that an implicit feedback model based on Jeffrey's rule of conditioning outperforms the other models under investigation.
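For reference, Jeffrey's rule of conditioning revises a probability when evidence changes belief in a partition of events without making any of them certain: P'(A) = sum_i P(A | E_i) P'(E_i). The sketch below implements this textbook form on a toy term-relevance example; the numbers and the mapping to interaction evidence are assumptions for illustration, not the paper's specific model.

```python
def jeffrey_update(prob_a_given_e, revised_prob_e):
    """Jeffrey's rule of conditioning: P'(A) = sum_i P(A | E_i) * P'(E_i).

    prob_a_given_e: dict mapping each partition event E_i -> P(A | E_i).
    revised_prob_e: dict mapping each E_i -> revised probability P'(E_i);
                    the values must sum to 1.
    """
    assert abs(sum(revised_prob_e.values()) - 1.0) < 1e-9
    return sum(prob_a_given_e[e] * p for e, p in revised_prob_e.items())


# Toy example (hypothetical numbers): interaction evidence raises belief that a
# viewed summary is relevant from 0.5 to 0.8; update the probability that a
# candidate expansion term is useful.
p_term_useful_given = {"relevant": 0.6, "nonrelevant": 0.1}
print(jeffrey_update(p_term_useful_given, {"relevant": 0.8, "nonrelevant": 0.2}))  # 0.50
```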