Relevance feedback is a technique that shifts to the system the work a user would otherwise perform to improve a search by iteratively reformulating her query. As described above, an initial query formulated by a user may not fully capture her information need, due to the complexity of formulating the query, unfamiliarity with the data collection, or inadequacy of the available features. Users then typically change the query manually and re-execute the search until they are satisfied. With relevance feedback, the user instead criticises the answers, and the system learns a new query that better captures her information need, relieving her of reformulating the query herself.
Figure 21.1 shows the overall feedback process. The user formulates an initial query to the retrieval system, which generates a set of answers. The user then examines the answers and provides a judgement as to the quality or relevance of each. The system combines the original answers with the user-supplied feedback to build a new query.
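The cycle in Figure 21.1 can be sketched in a few lines. Here `execute_query`, `judge`, and `reformulate` are hypothetical stand-ins for the retrieval system's search step, the user's judgement step, and the query-building step; the sketch only fixes the shape of the loop, not any particular technique:

```python
# Sketch of the relevance feedback cycle of Figure 21.1.
# `execute_query`, `judge`, and `reformulate` are hypothetical stand-ins
# for the components described in the text.

def feedback_loop(initial_query, execute_query, judge, reformulate, rounds=3):
    """Run a few rounds of the query -> answers -> judgements -> new-query cycle."""
    query = initial_query
    for _ in range(rounds):
        answers = execute_query(query)      # system returns ranked answers
        judgements = judge(answers)         # user marks answers relevant / non-relevant
        query = reformulate(query, answers, judgements)  # system builds a new query
    return query
```

In practice the loop terminates when the user is satisfied rather than after a fixed number of rounds; the fixed `rounds` parameter here merely keeps the sketch self-contained.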
Figure 21.1: Relevance Feedback Cycle
There are three main ways for the user to supply relevance feedback:
Goodness / badness of results. The user looks at individual results and determines whether each result is a good or bad instance of her information need. She can provide relevance feedback at varying granularities. Most retrieval systems support a binary approach to relevance: a result is either relevant or not. Typically, the system considers all items to be non-relevant (or neutral), as the user marks only the few items she considers relevant. This binary notion of relevance can be generalized to multiple levels of relevance, as well as of non-relevance. Some systems have experimented with varying levels of relevance, trading user convenience for a more accurate picture of what the user wants. Empirical studies, however, have shown that users typically give very little feedback and that the flexibility of multiple levels of relevance is too burdensome.
Ranking. In this approach, the user considers a subset of results at a time and "sorts" them into the order in which she thinks they should appear. In a sense, the user is performing the task of the retrieval system, so that the system can learn to imitate her preferred ranking. This approach can be considered an extension of the multiple-relevance-levels approach in which there are as many levels of relevance as the user gives relevance judgements, and no two items share the same relevance level. The ranking approach gives excellent feedback to the retrieval system, but tends to be burdensome for the user.
Explicit. For explicit feedback, the retrieval system exposes to the user a visualization of its internal query structure and lets the user interactively manipulate it to improve the query. Examples include several text search engines that employ term suggestion. To use this technique, the user must have some familiarity with the domain; moreover, the approach quickly becomes too burdensome for multimedia data. We therefore do not discuss it further in this chapter.
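The observation that ranking feedback induces as many relevance levels as there are judgements, with no ties, can be made concrete. The answer identifiers below are illustrative, not part of the chapter's formal model:

```python
# A user's preferred ordering over a subset of answers induces one distinct
# relevance level per judged answer (toy illustration; identifiers made up).

user_order = ["a3", "a1", "a2"]  # the user's preferred ranking, best first

# Assign the highest level to the top-ranked answer, decreasing from there,
# so no two judged answers share a relevance level.
levels = {a: len(user_order) - pos for pos, a in enumerate(user_order)}
```

This mapping is why ranking can be viewed as an extreme case of multi-level relevance feedback.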
In this chapter we concentrate on the first approach to providing feedback, and also briefly discuss the second approach, ranking.
We will denote the answers by ai, where i indicates the rank of that answer; that is, answers are ordered based on i: <a1, a2, …>. We denote the relevance feedback for answer ai by rfi. This is a numeric value with the following interpretation: a positive value means the user judged ai relevant, a negative value means she judged it non-relevant, and zero means she gave no judgement on ai.
In general, when a finer gradation of relevance is needed, we use arbitrary positive values to denote relevant answers, and arbitrary negative values to denote non-relevant answers.
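Under this notation, a set of judgements might be represented as follows. The particular values are arbitrary illustrations of the convention, not data from any system:

```python
# Ranked answers and their feedback values rf_i (toy data illustrating the
# notation: positive = relevant, negative = non-relevant, zero = unjudged).

answers = ["a1", "a2", "a3", "a4"]                   # ordered by rank i
rf = {"a1": 2.0, "a2": 0.0, "a3": -1.0, "a4": 0.5}   # finer gradation via magnitude

relevant = [a for a in answers if rf[a] > 0]
non_relevant = [a for a in answers if rf[a] < 0]
```

Note that the magnitudes (2.0 versus 0.5) let the user express that one relevant answer matches her need better than another.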
The specific query model in use determines how to derive a new query using relevance feedback. The techniques we will discuss assume certain properties of the query model; therefore we describe the query model assumptions together with the relevance feedback techniques.
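As one concrete instance of how a query model shapes the update, the well-known Rocchio method for vector-space query models moves the query vector toward relevant answers and away from non-relevant ones. The sketch below uses plain Python lists for vectors and conventional (but here illustrative) weights; it is an example of the general idea, not this chapter's own technique:

```python
# Rocchio-style query update for a vector-space query model (a classic
# relevance feedback technique; alpha, beta, gamma weights are conventional
# but the specific values are illustrative).

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return a new query vector pulled toward the centroid of relevant
    answer vectors and pushed away from the centroid of non-relevant ones."""
    new_q = [alpha * q for q in query]
    for doc in relevant:
        for k, x in enumerate(doc):
            new_q[k] += beta * x / len(relevant)
    for doc in non_relevant:
        for k, x in enumerate(doc):
            new_q[k] -= gamma * x / len(non_relevant)
    return new_q
```

The update only makes sense when queries and answers live in the same vector space, which is exactly the kind of query model assumption referred to above.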