5. On-Line Learning

Unlike off-line learning, on-line learning does not have the whole training set available beforehand. The data are often obtained during the process, which makes learning a best-effort procedure that depends heavily on the training data, and even on the order in which they arrive. On the other hand, on-line learning involves interaction between the system and the user, so the system can quickly modify its internal model to produce good results for each specific user. As discussed in Section 1, the similarity measure in information retrieval systems is highly user-dependent, and the adaptive nature of on-line learning makes it very suitable for such applications.

In retrieval systems, on-line learning is used in three scenarios: relevance feedback, finding the query seed and improving annotation efficiency. These are discussed in turn in the following three subsections.

5.1 Relevance Feedback

Widely used in text retrieval [47][48], relevance feedback was first introduced into content-based image retrieval as an interactive tool by Rui et al. [49]. Since then it has proven to be a powerful tool and has become a major focus of research in this area [50][51][52][53][54][55]. Chapter 23 and Chapter 33 give detailed explanations of this topic.

Relevance feedback systems usually do not accumulate the knowledge they learn, because end-user feedback is often unpredictable and inconsistent from user to user, or even from query to query. If the user providing the feedback is trustworthy and consistent, however, the feedback can be accumulated and added to the system's knowledge, as suggested by Lee et al. [37].

5.2 Query Concept Learner

In a query-by-example system [31][1], it is often hard to initialise the first query, because the user may not have a good example to begin with. Accustomed to text retrieval engines such as Google [56], users may prefer to query the database by keyword, and many systems with keyword annotations provide this kind of service [32][33][35]. Chang et al. recently proposed the SVM Active Learning system [58] and the MEGA system [57], which offer an alternative solution.

SVM Active Learning and MEGA share a similar idea but use different tools. Both build a query-concept learner that learns the query criteria through an intelligent sampling process, so no example is needed as the initial query. Rather than having the user browse the database completely at random, the two systems ask the user for feedback on selected samples and try to quickly capture the concept in the user's mind. The key to success is to make maximal use of the user's feedback and to rapidly shrink the space in which the user's concept may lie. Active learning is the answer.

Active learning is an interesting idea from the machine learning literature. Whereas in traditional machine learning the learner typically acts as a passive recipient of the data, active learning lets the learner choose which data to collect and thereby influence the world it is trying to understand. A standard passive learner can be thought of as a student who sits and listens to a teacher, while an active learner is a student who asks the teacher questions, listens to the answers and asks further questions based on those answers. In the literature, active learning has shown very promising results in reducing the number of samples required to complete a given task [59][60][61].

In practice, the idea of active learning can be translated into a simple rule: if the system is allowed to propose samples and get feedback, always propose those samples that the system is most confused about, or that can bring the greatest information gain.
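
As an illustration only, the following sketch applies this rule to a pool of unlabeled samples: it picks the sample whose predicted class probabilities have the highest entropy, i.e. the one the current model is least certain about. The model, the pool and the probability estimates are hypothetical and not part of any particular system described here.

import numpy as np

def most_confusing_index(predicted_probs):
    # predicted_probs: (n_samples, n_classes) class probabilities from the
    # current model. Entropy is largest where the model is least certain,
    # so that sample is the one to show to the user next.
    eps = 1e-12
    entropy = -np.sum(predicted_probs * np.log(predicted_probs + eps), axis=1)
    return int(np.argmax(entropy))

# Usage (hypothetical model and pool):
# probs = model.predict_proba(unlabeled_pool)
# next_sample = unlabeled_pool[most_confusing_index(probs)]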

Following this rule, SVM Active Learning becomes very straightforward. In an SVM, objects far from the separating hyperplane are easy to classify; the most ambiguous objects are those close to the boundary. During the feedback loop, the system therefore always proposes the images closest to the SVM boundary for the user to annotate.
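
A minimal sketch of this sampling step, assuming image features have already been extracted and using scikit-learn's SVC merely as a stand-in for the SVM in [58] (the original system's implementation details are not reproduced here):

import numpy as np
from sklearn.svm import SVC

def select_closest_to_boundary(svm, unlabeled_features, n_show=10):
    # decision_function returns the signed distance to the separating
    # hyperplane (up to scale); a small absolute value means the image is
    # ambiguous, so those images are proposed for annotation.
    distances = np.abs(svm.decision_function(unlabeled_features))
    return np.argsort(distances)[:n_show]

# One feedback round (labels mark images the user judged relevant or not):
# svm = SVC(kernel="rbf").fit(labeled_features, labels)
# to_annotate = select_closest_to_boundary(svm, unlabeled_features)

After the user labels the proposed images, they are added to the labeled set and the SVM is retrained for the next round.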

The MEGA system models the query concept space (QCS) in k-CNF and the candidate concept space (CCS) in k-DNF [62]. Here k-CNF and k-DNF are classes of Boolean formulae that can model virtually any practical query concept. The CCS is initialised to be larger than the real concept space, while the QCS is initialised to be smaller. During the learning process, the QCS is repeatedly refined by positive feedback, while the CCS keeps shrinking as negative samples arrive. The region between the QCS and the CCS is the interesting area, where most images are still undetermined; following the idea of active learning, these images should be shown to the user for more feedback. Some interesting trade-offs have to be made in selecting these samples [57].
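
The following toy sketch only illustrates the interplay between the two spaces for the special case k = 1, i.e. a plain conjunction of boolean image attributes for the QCS and a plain disjunction for the CCS; the full k-CNF/k-DNF machinery and the sampling trade-offs of [57] are not reproduced, and all names are hypothetical.

def refine_spaces(qcs, ccs, image_attrs, is_positive):
    # qcs: set of attributes every relevant image must have (a conjunction;
    #      initially large, so the concept it describes is small).
    # ccs: set of attributes any of which may indicate relevance (a
    #      disjunction; initially large, so it covers most images).
    if is_positive:
        # Positive feedback: drop required attributes the positive image
        # lacks, so the QCS generalizes toward the true concept.
        qcs &= image_attrs
    else:
        # Negative feedback: drop candidate attributes the negative image
        # has, so the CCS shrinks toward the true concept.
        ccs -= image_attrs
    return qcs, ccs

def is_undetermined(qcs, ccs, image_attrs):
    # Inside the CCS but not yet inside the QCS: these are the images worth
    # showing to the user for more feedback.
    return bool(ccs & image_attrs) and not qcs <= image_attrs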

5.3 Efficient Annotation Through Active Learning

Keyword annotation is very expensive, since it can only be done manually, so it is natural to look for methods that improve annotation efficiency. Active learning turns out to be well suited to this job as well.

In [33], Zhang and Chen proposed a framework for active learning during annotation. For each object in the database, they maintain a list of probabilities, each indicating the probability that the object has one of the attributes. During training, the learning algorithm samples objects in the database and presents them to the annotator, who assigns attributes to them. For each sampled object, each probability is set to one or zero depending on whether or not the annotator assigned the corresponding attribute. For objects that have not yet been annotated, the learning algorithm estimates these probabilities with biased kernel regression. A knowledge-gain measure is then defined to determine which of the not-yet-annotated objects the system is most uncertain about; that object is presented to the annotator as the next sample.
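
A rough sketch of this loop follows. The estimate uses the standard Nadaraya-Watson kernel-regression form rather than the exact biased weighting of [33], and the knowledge-gain criterion is replaced by a simple uncertainty proxy (estimated probabilities closest to 0.5); the feature vectors, the bandwidth and all variable names are assumptions.

import numpy as np

def estimate_probs(features, annotated_idx, annotated_probs, bandwidth=1.0):
    # Kernel-regression estimate of the attribute probabilities for every
    # object, based on the 0/1 probabilities of the already-annotated objects.
    d2 = ((features[:, None, :] - features[annotated_idx][None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    probs = w @ annotated_probs / w.sum(axis=1, keepdims=True)
    probs[annotated_idx] = annotated_probs  # annotated objects keep their 0/1 values
    return probs

def next_to_annotate(probs, annotated_idx):
    # Uncertainty proxy: how close the estimated probabilities are to 0.5,
    # summed over attributes; the most uncertain unannotated object is
    # presented to the annotator next.
    uncertainty = (0.5 - np.abs(probs - 0.5)).sum(axis=1)
    uncertainty[annotated_idx] = -np.inf
    return int(np.argmax(uncertainty))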

Naphade et al. proposed very similar work in [38], but used a support vector machine to learn the semantics. Their method for choosing new samples for the annotator is essentially the same as in Chang et al.'s SVM Active Learning [58].



