A simple and efficient template matching algorithm

REAL-TIME OBJECT TRACKING AND LEARNING USING TEMPLATE MATCHING

The proposed system uses a live video stream to enable tracking, learning, and detection of objects in real time. The object of interest is selected by cropping and is then tracked with a rectangular bounding box as an indicator. Tracking is long-term in the sense that the system remains alert to the object's reappearance after it leaves the frame, as well as to deformations in its physical appearance. The system uses a template matching algorithm to match the selected object against the region of interest in the frame and mark the object's location; if a match is found, Principal Component Analysis is then applied. A PN discrimination algorithm is proposed that uses background subtraction to speed up frame processing for object detection. The reduced frame-processing time and lower average localization error improve the template-matching percentage irrespective of scaling of the input image. The proposed system is thus expected to overcome drawbacks of existing systems, such as loss of information caused by complex shapes, rapid motion, illumination changes, scaling, and the projection of the 3D world onto a 2D image.
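
The abstract gives no implementation details; as a rough illustration of the two ingredients it names, the following NumPy sketch combines exhaustive SSD template matching with a background-subtraction mask. The array sizes, threshold, and function names are all invented for the example, not taken from the paper.

```python
import numpy as np

def match_template_ssd(frame, template):
    """Exhaustive template matching by sum of squared differences.
    Returns the (row, col) of the best-matching top-left corner."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            score = np.sum((patch.astype(int) - template.astype(int)) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

def foreground_mask(frame, background, thresh=10):
    """Background subtraction: pixels differing from the background
    model by more than `thresh` are marked as foreground. Restricting
    the search to this mask is what would speed up frame processing."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# Toy 12x12 frame: static background plus a bright 3x3 object at (5, 7).
background = np.zeros((12, 12), dtype=np.uint8)
frame = background.copy()
frame[5:8, 7:10] = 200
template = np.full((3, 3), 200, dtype=np.uint8)

mask = foreground_mask(frame, background)
loc = match_template_ssd(frame, template)
```

In a real system the SSD search would be run only inside the bounding box of the foreground mask rather than over the whole frame.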

Object tracking using deformable templates

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000

Abstract: We propose a novel method for object tracking using prototype-based deformable template models. To track an object in an image sequence, we use a criterion which combines two terms: the frame-to-frame deviations of the object shape and the fidelity of the modeled shape to the input image. The deformable template model utilizes the prior shape information which is extracted from the previous frames along with a systematic shape deformation scheme to model the object shape in a new frame. The following image information is used in the tracking process: 1) edge and gradient information: the object boundary consists of pixels with large image gradient, 2) region consistency: the same object region possesses consistent color and texture throughout the sequence, and 3) interframe motion: the boundary of a moving object is characterized by large interframe motion. The tracking proceeds by optimizing an objective function which combines both the shape deformation and the fidelity of the modeled shape to the current image (in terms of gradient, texture, and interframe motion). The inherent structure in the deformable template, together with region, motion, and image gradient cues, makes the proposed algorithm relatively insensitive to the adverse effects of weak image features and moderate amounts of occlusion.
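
The two-term criterion can be caricatured in a few lines: a deformation penalty against the previous frame's shape plus a gradient-fidelity reward. The weights, array shapes, and the discrete boundary representation below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def tracking_energy(shape, prev_shape, grad_mag, alpha=1.0, beta=1.0):
    """Toy version of the two-term criterion: penalize frame-to-frame
    shape deviation, reward boundary points that lie on strong image
    gradients. `shape` is an (N, 2) array of integer boundary coords."""
    deformation = np.sum((shape - prev_shape) ** 2)        # shape deviation
    fidelity = np.sum(grad_mag[shape[:, 0], shape[:, 1]])  # edge support
    return alpha * deformation - beta * fidelity           # lower is better

# Gradient-magnitude image with a strong vertical edge at column 5.
grad = np.zeros((10, 10))
grad[:, 5] = 1.0

prev = np.array([[r, 5] for r in range(10)])  # boundary from previous frame
stay = prev.copy()                            # candidate: stay on the edge
drift = prev + np.array([0, 2])               # candidate: drift off the edge
```

Minimizing such an energy over candidate deformations prefers the boundary that stays on the edge: `tracking_energy(stay, prev, grad)` is lower than for the drifted candidate.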

Dynamic Template Tracking and Recognition

International Journal of Computer Vision, 2013

In this paper we address the problem of tracking non-rigid objects whose local appearance and motion change as a function of time. This class of objects includes dynamic textures such as steam, fire, smoke, and water, as well as articulated objects such as humans performing various actions. We model the temporal evolution of the object's appearance/motion using a Linear Dynamical System (LDS). We learn such models from sample videos and use them as dynamic templates for tracking objects in novel videos. We pose the problem of tracking a dynamic non-rigid object in the current frame as a maximum a posteriori estimate of the location of the object and the latent state of the dynamical system, given the current image features and the best estimate of the state in the previous frame. The advantage of our approach is that we can specify a priori the type of texture to be tracked in the scene by using previously trained models for the dynamics of these textures. Our framework naturally generalizes common tracking methods such as SSD and kernel-based tracking from static templates to dynamic templates. We test our algorithm on synthetic as well as real examples of dynamic textures and show that our simple dynamics-based trackers perform on par with, if not better than, the state of the art. Since our approach is general and applicable to any image feature, we also apply it to the problem of human action tracking and build action-specific optical flow trackers that perform better than the state of the art when tracking a human performing a particular action. Finally, since our approach is generative, we can use a priori trained trackers for different texture or action classes to simultaneously track and recognize the texture or action in the video.
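
As a sketch of the underlying machinery (not the paper's estimator), the following toy LDS generates a sequence of "template" observations y_t = C x_t and predicts the next one from the latest state. The matrices A and C, the unit initial state, and the noiseless dynamics are assumptions chosen purely for illustration.

```python
import numpy as np

def simulate_lds(A, C, x0, steps):
    """Generate appearance observations y_t = C x_t from an LDS
    x_{t+1} = A x_t (process and observation noise omitted for clarity)."""
    xs, ys = [x0], [C @ x0]
    for _ in range(steps - 1):
        xs.append(A @ xs[-1])
        ys.append(C @ xs[-1])
    return np.array(xs), np.array(ys)

def predict_next(A, C, x_est):
    """One-step prediction of the dynamic template from a state estimate."""
    x_next = A @ x_est
    return x_next, C @ x_next

# Hypothetical 2-state oscillator driving a 3-pixel "template".
theta = 0.5
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation = oscillation
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x0 = np.array([1.0, 0.0])

xs, ys = simulate_lds(A, C, x0, steps=5)
x_pred, y_pred = predict_next(A, C, xs[-1])
```

In the paper's setting, the MAP tracking step would additionally search over object locations, comparing the predicted dynamic template against image features at each candidate position.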

Template Matching Based Video Tracking System Using a Novel N-Step Search Algorithm and HOG Features

Lecture Notes in Computer Science, 2012

A novel video object tracking technique is proposed in this article. We consider a robust template-matching based video tracking technique that works satisfactorily for both static-camera and moving-camera video sequences and is not influenced by camera motion. In our approach, the first instance of the video object is selected interactively. Its successive instances in the video frames are then detected using a novel and improved N-step search algorithm for motion estimation, taking into account both the scaling and the translation of the target. A HOG-based feature extraction approach is used by our algorithm.
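
The paper's improved N-step search is not reproduced here; as a baseline, this sketch implements a plain coarse-to-fine N-step (logarithmic) search with a SAD cost and translation only. The frame contents, start point, and initial step size are made up for the example.

```python
import numpy as np

def sad(frame, template, r, c):
    """Sum of absolute differences between the template and the patch
    whose top-left corner is (r, c)."""
    th, tw = template.shape
    patch = frame[r:r + th, c:c + tw]
    return np.sum(np.abs(patch.astype(int) - template.astype(int)))

def n_step_search(frame, template, start, step=4):
    """Coarse-to-fine block search: probe the 8 neighbours at the current
    step size, move to the best one, then halve the step. This is the
    classical N-step search; the paper's variant also handles scaling."""
    th, tw = template.shape
    fh, fw = frame.shape
    r, c = start
    while step >= 1:
        best = (sad(frame, template, r, c), (r, c))
        for dr in (-step, 0, step):
            for dc in (-step, 0, step):
                nr, nc = r + dr, c + dc
                if 0 <= nr <= fh - th and 0 <= nc <= fw - tw:
                    best = min(best, (sad(frame, template, nr, nc), (nr, nc)))
        r, c = best[1]
        step //= 2
    return r, c

# Toy frame: a bright 3x3 object at (9, 11); search starts from (5, 5).
frame = np.zeros((20, 20), dtype=np.uint8)
frame[9:12, 11:14] = 255
template = np.full((3, 3), 255, dtype=np.uint8)
best = n_step_search(frame, template, (5, 5))
```

Compared with exhaustive search, the number of cost evaluations drops from O(W*H) to O(log(step) * 9), which is why such searches are popular for motion estimation.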

Competent Tracking of Moving Object Using Affine & Illumination Insensitive Template Matching

Moving object detection and tracking in real-world scenes is a significant problem today. The extensive study in this area is motivated by the large number of potential applications of object tracking. In this paper, we analyze a method for motion segmentation and tracking of non-stationary objects that uses the complete image information via the affine flow equations. The affine flow parameters are then combined with an illumination-insensitive template matching tracker to efficiently segment and track the moving objects. Experimental results on various database videos show that the method effectively tracks the object without missed detections of the target, compared to other methods.
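
The affine flow estimation is beyond a short sketch, but the illumination-insensitive matching part can be illustrated with zero-mean normalized cross-correlation (ZNCC), which is invariant to gain and offset changes in brightness. This is one plausible choice of score; the paper does not specify this exact formulation.

```python
import numpy as np

def zncc(patch, template):
    """Zero-mean normalized cross-correlation: invariant to affine
    illumination changes of the form I -> a*I + b with a > 0."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom)

rng = np.random.default_rng(0)
template = rng.random((8, 8))
brighter = 2.5 * template + 40.0   # same patch under a gain/offset change
other = rng.random((8, 8))         # an unrelated patch
```

The brightened copy still scores a perfect correlation of 1.0, while an unrelated patch scores much lower, so the tracker is not fooled by lighting changes.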

Template tracking using color invariant pixel features

IEEE Transactions on Knowledge and Data Engineering, 2002

In our method for tracking objects, appearance features are smoothed by robust and adaptive Kalman filters, one for each pixel, making the method robust against occlusions. While existing methods use only intensity to model the object appearance, our paper concentrates on multi-valued features. Specifically, one option is to use photometric invariant color features, making the method robust to illumination effects such as shadows and object geometry. The method is able to track objects in real time.
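
A single pixel's filter might look like the following scalar sketch, with a crude innovation gate standing in for the paper's robust mechanism. The process/measurement noise values and the gate threshold are invented for the example.

```python
def kalman_update(x, p, z, q=0.01, r=1.0, gate=3.0):
    """Scalar Kalman update for one pixel's appearance value.
    A simple robustness gate skips measurements whose innovation is
    implausibly large (e.g. the pixel is temporarily occluded)."""
    p = p + q                               # predict (static appearance model)
    innov = z - x
    if innov ** 2 > gate ** 2 * (p + r):    # robust gating
        return x, p                         # occluded: keep the prediction
    k = p / (p + r)                         # Kalman gain
    return x + k * innov, (1 - k) * p

# Pixel hovering around 100; the 250 sample simulates an occluding object.
x, p = 100.0, 1.0
for z in [101, 99, 100, 250, 101]:
    x, p = kalman_update(x, p, z)
```

The occlusion sample is rejected by the gate, so the smoothed appearance stays near 100 instead of being dragged toward 250. In the paper, one such filter runs per pixel over the multi-valued (e.g. color-invariant) features.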

Rapid Visual Tracking with Modified On-Line Boosting and Template Matching

Computer Graphics and Imaging / 798: Signal Processing, Pattern Recognition and Applications, 2013

On-line learning is increasingly popular in visual tracking, but the challenge it faces is how to adapt to appearance changes while avoiding drift or loss of track. In this paper, a fast visual tracking algorithm is proposed to make the tracker more accurate and stable under complex variations such as occlusions, illumination changes, and shape deformations. In the proposed algorithm, a modified on-line boosting method is developed to make the tracker more adaptive to the changing scene, and a template matching model is used to constrain the training samples, so that the accumulating errors of self-updating learning can be alleviated effectively. In addition, an optimization process is used to reduce the computational burden. Our experimental results demonstrate that, compared with other on-line tracking methods, the target can be accurately tracked with lower drift error in complicated environments using the proposed algorithm. Moreover, the new tracker runs at 60 frames per second and is suitable for real-time capture and tracking.
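
The modified boosting itself is not reproduced here; the constraint idea alone, letting only confidently matched samples update the appearance model, can be sketched as follows, with an exponential-blend model and invented thresholds standing in for the paper's scheme.

```python
import numpy as np

def gated_update(model, patch, match_score, rate=0.1, min_score=0.8):
    """Self-update constrained by template matching: the appearance model
    is blended with the new patch only when match confidence is high,
    which limits the accumulation of labelling errors (drift)."""
    if match_score < min_score:
        return model                          # unreliable sample: no update
    return (1 - rate) * model + rate * patch  # exponential blend

model = np.full((4, 4), 50.0)
good = np.full((4, 4), 60.0)    # well-matched sample
bad = np.full((4, 4), 200.0)    # poorly-matched sample (e.g. occluder)

model = gated_update(model, good, match_score=0.95)  # accepted
model = gated_update(model, bad, match_score=0.3)    # rejected
```

The accepted sample nudges the model from 50 to 51, while the rejected one leaves it untouched; without the gate, the occluder would be learned into the model and the tracker would drift.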

Template Tracking using Generalized Modus Tollens

2010

Abstract: This article describes an original method to track 2D templates in a video sequence. The approach is based on robust modelling of the interaction between pattern movement and grey-level variations in the image using fuzzy bimodal rules. Movement estimation is performed by inverting these rules and pooling the results in a quasi-continuous histogram. Experimental results are given to illustrate the performance of the algorithm.

1. INTRODUCTION

Template tracking has various applications within the scope of robotics (visual servoing), medical analysis, surveillance, human-computer interaction, video databases, and 3D reconstruction (stereo vision, structured light), to name but a few.

Combining View-based Object Recognition with Template Matching for the Identification and Tracking of Fully Dynamic Targets

This paper describes a system intended to identify and track dynamic targets that may change appearance while moving. The full system includes a pan-and-tilt unit to ease tracking and keep the target of interest in the center of the image. View-based Haar-like features are used for object recognition, while template matching continues to track the object even when its view is not recognized by the object recognition system. Some of the techniques used to improve the template matching performance are also presented. Preliminary results are given, and the system performs well at up to 15 frames per second on a 320 x 240 image on an ordinary laptop computer.

A fuzzy inference approach to template-based visual tracking

Machine Vision and …, 2011

The tracking of visual features using appearance models is a well-studied but still open area of computer vision. In the absence of knowledge about the structural constraints of the tracked object, the validity of the model can be compromised if only appearance information is used. We propose a fuzzy inference scheme that can be used to selectively update a given template-based model in tracking tasks. This allows us to track moving objects under translation, rotation, and scale changes with minimal feature drift. Moreover, no rigidity constraint needs to be enforced on the moving target. Experiments have been performed using several targets, and the results are very close to the ground-truth paths. The computational cost of our approach is low enough to allow its application in real-time tracking with modest hardware requirements.
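
As a flavour of such a scheme (the paper's actual rule base and membership functions are not given here), a toy fuzzy AND of "similarity is high" and "appearance residual is low" yields an update weight for the template:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peaking
    at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def update_weight(similarity, residual):
    """Toy fuzzy rule: update strongly when similarity is HIGH and the
    appearance residual is LOW; min() acts as the fuzzy AND. The
    membership shapes below are invented for illustration."""
    high_sim = tri(similarity, 0.6, 1.0, 1.4)
    low_res = tri(residual, -0.4, 0.0, 0.4)
    return min(high_sim, low_res)
```

Blending the template toward the current patch with this weight updates it confidently on clean matches and freezes it when the match is ambiguous, which is what keeps feature drift minimal.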