2. Extracting Semantic Features

Many features have been proposed for image and video retrieval. For images, commonly used features include color, shape, texture, and color layout; a comprehensive review can be found in [1]. Traditional video retrieval systems apply the same feature set to each frame, combined with some temporal analysis, e.g., shot detection [2][3][4][5]. Recently, many new approaches have been introduced to improve these features. Some are based on temporal or spatio-temporal analysis, i.e., better ways to group frames and select key frames, including integration with other media such as audio and text. Other active research topics are motion-based and object-based features. Compared to color, shape, and texture, motion-based and object-based features are more natural to human beings, and therefore operate at a higher semantic level.

Traditional video analysis methods are often shot based. Shot detection methods can be classified into several categories, e.g., pixel based, statistics based, transform based, feature based, and histogram based [6]. After shot detection, key frames can be extracted in various ways [7]. Although key frames can be used directly for retrieval [3][8], many researchers are studying better ways to organize video structure. In [9], Yeung et al. developed scene transition graphs (STGs) to illustrate the scene flow of movies. Aigrain et al. proposed using explicit models of video documents, or rules related to editing techniques and film theory [10]. Statistical approaches such as hidden Markov models (HMMs) [13] and unsupervised clustering [14][15] have also been proposed. When audio, text, or other accompanying content is available, grouping can be done jointly [11][12]. There has also been much research on extracting captions from video clips, which can likewise be used to aid retrieval [16][17].
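To make the histogram-based category concrete, the sketch below detects shot cuts by thresholding the L1 distance between intensity histograms of consecutive frames. It is a minimal illustration of the general idea, not any specific method from [6]; frames are flattened lists of grayscale pixels, and the threshold value is an arbitrary assumption.

```python
def gray_histogram(frame, bins=16):
    """Normalized histogram of pixel intensities (0-255) for one frame."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // 256] += 1
    total = len(frame)
    return [h / total for h in hist]

def shot_boundaries(frames, threshold=0.5):
    """Indices where the L1 histogram distance between consecutive
    frames exceeds the threshold -- likely shot cuts."""
    cuts = []
    prev = gray_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = gray_histogram(frames[i])
        distance = sum(abs(a - b) for a, b in zip(prev, cur))
        if distance > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic clip: three dark frames, then two bright frames (one cut).
dark = [20] * 100
bright = [200] * 100
clip = [dark, dark, dark, bright, bright]
print(shot_boundaries(clip))  # -> [3]
```

Real systems add refinements such as adaptive thresholds and gradual-transition handling, but the per-pair histogram comparison above is the core of the histogram-based family.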

Motion is one of the most significant differences between video and still images, and motion analysis has been very popular for video retrieval. On one hand, motion can help find interesting objects in the video, as in the work of Courtney [18], Ferman et al. [19], Gelgon and Bouthemy [20], and Ma and Zhang [21]. On the other hand, motion can be used directly as a feature, named "temporal texture" by Nelson and Polana [22]. That work was extended by Otsuka et al. [23], Bouthemy and Fablet [24], Szummer and Picard [25], and others.
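As a toy example of using motion directly as a feature, the sketch below computes a single global statistic, the mean absolute pixel difference between consecutive frames, over a clip. This is only in the spirit of motion-based features; actual temporal-texture work [22] uses richer statistics of normal flow, which this does not implement.

```python
def motion_energy(frames):
    """Mean absolute pixel difference between consecutive frames:
    a crude global measure of how much motion a clip contains."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        diffs.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return sum(diffs) / len(diffs)

static = [[50] * 64] * 4                               # no change
moving = [[50] * 64, [60] * 64, [50] * 64, [70] * 64]  # flickering clip
print(motion_energy(static))  # -> 0.0
print(motion_energy(moving))  # larger than for the static clip
```

A retrieval system could index such per-clip statistics so that queries like "find high-activity clips" reduce to a range search over the feature values.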

If objects can be segmented reliably, object-based analysis of video sequences is one of the most attractive approaches. With improvements in computer vision technology, many object-based approaches have been proposed recently. To name a few: in [18], Courtney developed a system that detects moving objects in a closed environment based on motion detection.

Zhong and Chang [26] applied color segmentation to partition images into homogeneous regions and tracked them over time for content-based video query. Deng and Manjunath [27] proposed a new spatio-temporal segmentation and region-tracking scheme for video representation. Chang et al. proposed the use of Semantic Visual Templates (SVTs), a personalized view of concepts composed of interactively created templates/objects.
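To illustrate what "separating an image into homogeneous regions" means, the sketch below coarsely quantizes pixel colors and then groups 4-connected pixels of the same color bin into labeled regions via breadth-first search. This is a deliberately simplified stand-in, not the segmentation algorithm of [26] or [27]; the number of quantization levels is an arbitrary assumption.

```python
from collections import deque

def quantize(image, levels=4):
    """Map each pixel (0-255) to one of `levels` coarse color bins."""
    return [[px * levels // 256 for px in row] for row in image]

def label_regions(image, levels=4):
    """Connected-component labeling of same-bin pixels (4-neighbour BFS).
    Returns (label map, number of regions found)."""
    q = quantize(image, levels)
    h, w = len(q), len(q[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x] != -1:
                continue
            queue = deque([(y, x)])
            labels[y][x] = next_label
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and q[ny][nx] == q[cy][cx]):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label

# A bright 2x2 square on a dark background -> two homogeneous regions.
frame = [[30] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        frame[y][x] = 220
labels, n = label_regions(frame)
print(n)  # -> 2
```

Region-based video representations then track such labeled regions from frame to frame, so that queries can refer to region-level attributes (color, shape, trajectory) rather than whole frames.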

This chapter is not intended to cover in depth the features used in state-of-the-art video retrieval systems. Readers are referred to Sections II, III, and V for more detailed information.

Handbook of Video Databases: Design and Applications (Internet and Communications)
ISBN: 084937006X
Year: 2003
Pages: 393
