Human tracking using 3D surface colour distributions
Related papers
Adaptive Learning of Statistical Appearance Models for 3D Human Tracking
British Machine Vision Conference, 2002
A likelihood formulation for human tracking is presented based upon matching feature statistics on the surface of an articulated 3D body model. A benefit of such a formulation over current techniques is that it provides a dense, object-based cue. Multi-dimensional histograms are used to represent feature distributions and different histogram similarity measures are evaluated. An on-line region grouping algorithm, …
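One common histogram similarity measure in this line of work is the Bhattacharyya coefficient between normalized histograms. The sketch below is illustrative only; the abstract does not say which measures the paper evaluates:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.
    Both are normalized to sum to 1; the coefficient is 1.0 for
    identical distributions and 0.0 for disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In a tracking likelihood, the coefficient is typically mapped to a distance (e.g. `sqrt(1 - bc)`) before exponentiation.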
Online Appearance Learning for 3D Articulated Human Tracking
International Conference on Pattern Recognition, 2002
A human appearance modelling framework where colour distributions are associated with surface regions on an articulated body model is presented. In general, these distributions are unknown, multi-modal and changing in time. We therefore propose using recursively updated histograms to represent them. For a certain pose, a set of histograms may be collected and a likelihood constructed based on …
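A recursively updated histogram is often implemented as an exponentially forgetting blend of the stored distribution and the newly observed one. A minimal sketch, with a hypothetical learning rate `alpha` (the abstract does not give the paper's exact update rule):

```python
import numpy as np

def update_histogram(hist, obs_hist, alpha=0.1):
    """Recursively blend a stored colour histogram toward a newly
    observed one with exponential forgetting; 'alpha' controls how
    quickly old appearance is forgotten."""
    new = (1.0 - alpha) * np.asarray(hist, dtype=float) \
        + alpha * np.asarray(obs_hist, dtype=float)
    return new / new.sum()  # keep it a probability distribution
```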
A robust appearance model for tracking human motions
2005
We propose an original method for tracking people based on the construction of a 2-D human appearance model. The general framework, which is a region-based tracking approach, is applicable to any type of object. We show how to specialize the method to take advantage of the structural properties of the human body. We segment its visible parts, then construct and update the appearance model. The latter provides a discriminative feature capturing both color and shape properties of the different limbs, making it possible to recognize people after they have temporarily disappeared. The method does not make use of skin color detection, which allows us to perform tracking under any viewpoint. The only assumption for the recognition is the approximate viewpoint correspondence during the matching process between the different models. Several results in complex situations prove the efficiency of the algorithm, which runs in near real time. Finally, the model provides an important clue for further human motion analysis.
Tracking humans using prior and learned representations of shape and appearance
Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004. Proceedings.
Tracking a moving person is challenging because a person's appearance in images changes significantly due to articulation, viewpoint changes, and lighting variation across a scene. Different people also appear differently due to numerous factors such as body shape, clothing, skin color, and hair. In this paper, we introduce a multi-cue tracking technique that uses prior information about the 2-D image shape of people in general along with an appearance model that is learned on-line for a specific individual. Assuming a static camera, the background is modeled and updated on-line. Rather than performing thresholding and blob detection during tracking, a foreground probability map (FPM) is computed which indicates the likelihood that a pixel is not the projection of the background. Offline, a shape model of walking people is estimated from the FPMs computed from training sequences. During tracking, this generic prior model of human shape is used for person detection and to initialize a tracking process. As this prior model is very generic, a model of an individual's appearance is learned on-line during the tracking. As the person is tracked through a sequence using both shape and appearance, the appearance model is refined and multi-cue tracking becomes more robust.
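The foreground probability map idea can be illustrated with a per-pixel Gaussian background model. This is a hedged sketch of the general technique, not the paper's exact formulation:

```python
import numpy as np

def foreground_probability_map(frame, bg_mean, bg_var, eps=1e-6):
    """Per-pixel probability that a pixel is NOT background, assuming
    each background pixel follows an independent Gaussian with the
    given mean and variance (illustrative only)."""
    d2 = (frame - bg_mean) ** 2 / (bg_var + eps)
    # unnormalized probability of background under the Gaussian
    p_bg = np.exp(-0.5 * d2)
    return 1.0 - p_bg
```

Pixels matching the background model get values near 0, while pixels far from the modelled background approach 1, giving a soft alternative to thresholded blob detection.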
APPEARANCE-BASED PERSON TRACKING AND 3D POSE ESTIMATION OF UPPER-BODY AND HEAD
In the field of human-robot interaction (HRI), recognition of humans in a robot's surroundings is a crucial task. Besides the localization, the estimation of a person's 3D pose based on monocular camera images is a challenging problem on a mobile platform. For this purpose, an appearance-based approach, using a 3D model of the human upper body, has been developed and experimentally investigated. For real-time tracking, the state of the person is estimated by a particle filter tracker, which uses different observation models for evaluating pose hypotheses. The 6D body pose is modeled by 4 parameters for the torso position and orientation as well as 2 for the head pan and tilt. In order to achieve real-time operation, a smooth fit value function simplifies the particle filter's convergence. Furthermore, a sparse feature based model eliminates the need for computationally expensive geometric transformations of the image, as required by conventional Active Appearance Models (AAM). The initialization problem of the pose tracker is overcome by integrating a Histograms of Oriented Gradients (HOG) detector.
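The particle filter tracker described above follows the standard predict-weight-resample cycle. The sketch below is a generic bootstrap filter, not the paper's implementation: `fitness` stands in for its observation models, and the pose is just a flat vector here:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, fitness, motion_std=0.05):
    """One predict-weight-resample cycle of a bootstrap particle
    filter. 'particles' is an (N, D) array of pose hypotheses and
    'fitness' scores a single hypothesis (higher is better)."""
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # weight: evaluate each pose hypothesis against the observation
    weights = np.array([fitness(p) for p in particles])
    weights = weights / weights.sum()
    # resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

A smooth fit value function, as the abstract notes, matters here: if `fitness` is spiky, most particles receive near-zero weight and the resampling step collapses onto a few hypotheses.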
Integrated person tracking using stereo, color, and pattern detection
Proceedings. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No.98CB36231), 1998
We present an approach to real-time person tracking in crowded and/or unknown environments using integration of multiple visual modalities. We combine stereo, color, and face detection modules into a single robust system, and show an initial application in an interactive, face-responsive display. Dense, real-time stereo processing is used to isolate users from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the silhouette of a user. Face pattern detection discriminates and localizes the face within the identified body parts. Faces and bodies of users are tracked over several temporal scales: short-term (user stays within the field of view), medium-term (user exits/reenters within minutes), and long term (user returns after hours or days). Short-term tracking is performed using simple region position and size correspondences, while medium and long-term tracking are based on statistics of user appearance. We discuss the failure modes of each individual module, describe our integration method, and report results with the complete system in trials with thousands of users.
Tracking people with probabilistic appearance models
2002
This paper describes a real-time computer vision system for tracking people in monocular video sequences. The system tracks people as they move through the camera's field of view, by a combination of background subtraction and the learning of appearance models. The appearance models allow objects to be tracked through occlusions using a probabilistic pixel reclassification algorithm. The system is evaluated on the three test sequences of the PETS 2002 dataset, for which tracking results and processing time requirements are presented.
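The probabilistic pixel reclassification step can be illustrated by assigning each foreground pixel to the object whose appearance model gives it the highest posterior. This is a hedged sketch; the paper's appearance models are richer than the single colour histogram assumed here:

```python
import numpy as np

def reclassify_pixels(bin_indices, likelihoods, priors):
    """Reassign occluded foreground pixels to tracked objects.
    'bin_indices' holds each pixel's colour-bin index, 'likelihoods'
    maps object id -> per-bin appearance histogram, and 'priors'
    maps object id -> prior probability of that object."""
    ids = list(likelihoods)
    # posterior ∝ appearance likelihood × prior, evaluated per pixel
    post = np.stack([likelihoods[i][bin_indices] * priors[i] for i in ids])
    return [ids[k] for k in post.argmax(axis=0)]
```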
Human tracking using joint color-texture features and foreground-weighted histogram
Multimedia Tools and Applications, 2017
This paper proposes a new appearance model for human tracking based on the Mean Shift framework. The proposed method uses a novel target representation, joint Color-Texture features and Foreground-Weighted Histogram (CTFWH), for a more distinctive and effective target representation. Our contribution is threefold: firstly, to exploit the texture information of the target, we have used a joint color-texture histogram to represent the target. The Local Binary Pattern (LBP) technique is employed to identify texture features in the target region. Secondly, we have proposed a representation model of the foreground region named Foreground-Weighted Histogram (FWH), in order to exploit the significant features of the foreground region and to use it for selecting only the salient parts from the target model. Thirdly, we propose a simple method to update the foreground model to account for the important foreground changes over the tracking process. Hence, by combining these concepts we generate new features for target representation and human tracking. The proposed method is designed for human tracking in complex scenarios and tested for comparative results with existing state-of-the-art algorithms. Experimental results on numerous challenging video sequences verify the robustness and performance of the proposed approach in complex scenarios.
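A joint color-texture histogram of the kind described can be sketched by pairing each pixel's colour bin with its LBP code. This is an illustrative stand-in for the paper's CTFWH model, not its actual implementation (in particular, it omits the foreground weighting):

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour Local Binary Pattern codes for the interior of a
    grayscale patch (border pixels are skipped in this minimal sketch)."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the neighbour at offset (dy, dx)
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def joint_color_texture_hist(gray, color_bins=8):
    """Normalized joint histogram over (colour bin, LBP code) pairs."""
    codes = lbp_codes(gray)
    colors = gray[1:-1, 1:-1] // (256 // color_bins)
    hist, _, _ = np.histogram2d(colors.ravel(), codes.ravel(),
                                bins=[color_bins, 256],
                                range=[[0, color_bins], [0, 256]])
    return hist / hist.sum()
```

In a Mean Shift tracker, such a joint histogram replaces the plain colour histogram as the target model being matched between frames.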
Stochastic tracking of 3D human figures using 2D image motion
2000
A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.
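The frame-to-frame brightness constancy assumption with a robust error norm can be sketched as follows. This is a minimal illustration under stated assumptions; the paper evaluates its likelihood over the pixels covered by each (possibly self-occluded) limb of the articulated model, not a bare point list:

```python
import numpy as np

def brightness_constancy_loglik(I_prev, I_curr, pts_prev, pts_curr,
                                sigma=10.0):
    """Robust log-likelihood from brightness constancy: graylevels
    sampled at corresponding model points in consecutive frames
    should agree. Points are (x, y) integer coordinates."""
    g_prev = I_prev[pts_prev[:, 1], pts_prev[:, 0]].astype(float)
    g_curr = I_curr[pts_curr[:, 1], pts_curr[:, 0]].astype(float)
    diffs = g_curr - g_prev
    # Lorentzian robust penalty down-weights outliers such as occlusions
    return -float(np.sum(np.log(1.0 + 0.5 * (diffs / sigma) ** 2)))
```

In a particle filter, each pose hypothesis projects the 3D model to a different set of `pts_curr`, and this log-likelihood scores how well the hypothesized motion explains the observed graylevel changes.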