Problem formulation:
Given a query Q and a document collection C, a retrieval system returns a ranked list of documents L, where Li denotes the i-th ranked document. We assume that Q is so difficult that all of the top f ranked documents (those already seen by the user) are non-relevant. The goal is to study how to use these negative examples, N = {L1, ..., Lf}, to rerank the next r unseen documents in the original ranked list, U = {Lf+1, ..., Lf+r}. We set f = 10 to simulate an entirely irrelevant first page of search results, and set r = 1000.
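The setup above can be sketched in a few lines; the function name and list-based interface are our own illustration, not from the paper:

```python
def split_feedback(ranked_list, f=10, r=1000):
    """Split a ranked list into negative examples N (the top f documents,
    assumed non-relevant) and the unseen candidates U to be reranked."""
    negatives = ranked_list[:f]          # N = {L1, ..., Lf}
    unseen = ranked_list[f:f + r]        # U = {Lf+1, ..., Lf+r}
    return negatives, unseen
```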
Here Q denotes the query and D a document. Setting beta = 0 performs no negative feedback, while a negative beta penalizes documents that are similar to the negative query.
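One plausible reading of this scoring rule in vector-space terms is the sketch below, where documents and queries are sparse term-weight vectors and the combination score(Q, D) + beta * score(Q_neg, D) is our assumed form, not necessarily the paper's exact formula:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def feedback_score(q, q_neg, d, beta=0.0):
    """Score with negative feedback: beta = 0 reproduces the original
    ranking; beta < 0 penalizes documents close to the negative query."""
    return cosine(q, d) + beta * cosine(q_neg, d)
```

For example, a document that matches both the query and the negative query scores lower under beta < 0 than under beta = 0.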
Two penalization strategies are considered:
Local Method: Rank all the documents in U by the negative query and penalize the top p documents.
Global Method: Rank all the documents in C by the negative query. Select, from the top p documents of this ranked list, those documents in U to penalize.
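The two strategies can be sketched as follows; the function names and the similarity interface `sim(q_neg, d)` are our own assumptions for illustration:

```python
def rerank_local(unseen, q_neg, sim, p):
    """Local strategy: rank only the documents in U by similarity to the
    negative query and mark the top p of them for penalization."""
    ranked = sorted(unseen, key=lambda d: sim(q_neg, d), reverse=True)
    return set(ranked[:p])

def rerank_global(collection, unseen, q_neg, sim, p):
    """Global strategy: rank the whole collection C by the negative
    query; penalize only those of its top p that also appear in U."""
    ranked = sorted(collection, key=lambda d: sim(q_neg, d), reverse=True)
    return set(ranked[:p]) & set(unseen)
```

The difference matters: the global method may penalize fewer than p documents in U, since documents outside U can absorb the top positions of the negative ranking.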
VECTOR SPACE MODEL
Negative feedback is important because it can still help a user when the search results are very poor and no relevant documents are available for positive feedback.
This work suggests several future directions. First, we can study a more principled way to model multiple negative models and use them to conduct constrained query expansion, for example by avoiding terms that appear in the negative models. Second, we are interested in a learning framework that can utilize both a small amount of positive information (the original query) and a certain amount of negative information to learn a ranking function for difficult queries. Third, queries are difficult for different reasons; identifying these reasons and customizing negative feedback strategies accordingly would be well worth studying.