Feature fusion for robust object tracking

Abstract

This paper addresses the problem of long-term object tracking in a video stream. Given the object's location in an initial frame, our main objective is to determine its location in every frame that follows. We propose a tracking framework that explicitly decomposes the long-term tracking task into three sub-systems: tracking, detection, and fusion, connected into a single functional system. The tracker follows the object from frame to frame. The detector observes all appearances of the object and corrects the tracker when necessary; this detection model is integrated to enhance tracking performance. The outputs of the Lucas-Kanade tracker and the radial sub-window based detector are combined in the fusion stage. We evaluate on benchmark sequences from the MIL dataset, which are widely used to compare tracking algorithms, and employ the standard center location error and precision metrics to assess the performance of the proposed method. In terms of center location error, our tracking system outperforms other state-of-the-art trackers on four sequences.
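
The paper itself does not include code; the following is a minimal Python sketch of the track-detect-fuse loop described above, using OpenCV's pyramidal Lucas-Kanade optical flow for the tracking step. The radial sub-window based detector is left as a hypothetical `detect()` callback, the fusion rule is a simple confidence-based switch, and the center location error and 20-pixel precision computations follow their standard definitions; these names, thresholds, and parameters are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a track-detect-fuse loop (illustrative only, not the paper's code).
import numpy as np
import cv2


def lk_track(prev_gray, gray, points):
    """Propagate feature points from the previous frame with pyramidal Lucas-Kanade."""
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1
    return new_points[good]


def fuse(track_center, detect_center, detect_conf, conf_threshold=0.5):
    """Assumed fusion rule: trust the detector when confident, otherwise the tracker."""
    if detect_center is not None and detect_conf > conf_threshold:
        return detect_center
    return track_center


def center_location_error(pred_center, gt_center):
    """Standard CLE: Euclidean distance between predicted and ground-truth centers."""
    return float(np.linalg.norm(np.asarray(pred_center) - np.asarray(gt_center)))


def run(frames, init_points, detect, ground_truth):
    """Track through grayscale frames; `detect` stands in for the radial
    sub-window based detector and returns (center, confidence)."""
    errors = []
    prev_gray, points = frames[0], init_points.astype(np.float32)
    for gray, gt_center in zip(frames[1:], ground_truth[1:]):
        points = lk_track(prev_gray, gray, points)
        track_center = points.reshape(-1, 2).mean(axis=0) if len(points) else None
        detect_center, detect_conf = detect(gray)   # hypothetical detector callback
        center = fuse(track_center, detect_center, detect_conf)
        if center is None:                          # both sub-systems failed this frame
            center = gt_center * np.nan
        errors.append(center_location_error(center, gt_center))
        prev_gray = gray
    # Precision at a 20-pixel threshold: fraction of frames with CLE below it.
    precision = float(np.mean(np.asarray(errors) <= 20.0))
    return errors, precision
```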

Publication
In International Conference on Wavelet Analysis and Pattern Recognition, IEEE
