Dynamic Codebook for Foreground Segmentation in a Video

Real-time foreground-background segmentation using codebook model

We present a real-time algorithm for foreground-background segmentation. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time with limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques.
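
To make the codebook idea concrete, the sketch below shows a heavily simplified per-pixel codebook in Python: each codeword stores only a grayscale range and a frequency, and a pixel is foreground when no codeword brackets it within a tolerance. The tolerance EPS and the grayscale-only codewords are assumptions for illustration; the actual model uses a colour-distortion test, brightness bounds, and pruning based on access statistics.

```python
import numpy as np

EPS = 10  # assumed brightness tolerance; the original method uses a colour-distortion test

def train_pixel(codebook, value):
    """Absorb one grayscale training sample into a pixel's codebook.
    Each codeword is [lo, hi, freq]; real codewords also carry colour
    statistics and last-access times used for pruning."""
    for cw in codebook:
        if cw[0] - EPS <= value <= cw[1] + EPS:
            cw[0], cw[1], cw[2] = min(cw[0], value), max(cw[1], value), cw[2] + 1
            return
    codebook.append([value, value, 1])

def is_foreground(codebook, value):
    """A pixel is foreground if no codeword brackets it within the tolerance."""
    return not any(cw[0] - EPS <= value <= cw[1] + EPS for cw in codebook)

def segment(codebooks, gray_frame):
    """Per-pixel labelling of a grayscale frame (255 = foreground)."""
    h, w = gray_frame.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if is_foreground(codebooks[y][x], int(gray_frame[y, x])):
                mask[y, x] = 255
    return mask
```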

Video Segmentation Framework by Dynamic Background Modelling

Lecture Notes in Computer Science, 2013

Detecting moving objects in video streams is the first relevant step of information extraction in many computer vision applications, e.g. video surveillance systems. In this work, a video segmentation framework based on dynamic background modelling is presented. Our approach aims to suitably update the background model of a scene recorded by a static camera. For this purpose, we develop an optical flow based methodology to track moving objects, which may stop or smoothly change their movement along the video. Moreover, a light-variation identification stage is employed to avoid confusion between illumination changes and moving objects. In this way, our approach is able to ensure suitable background modelling in real-world scenarios. Results on well-known datasets show that our framework outperforms state-of-the-art methodologies.
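
As a rough illustration of how motion information can gate background updating, the sketch below uses OpenCV's dense Farneback optical flow to update a running-average background only where the flow magnitude is small. The learning rate ALPHA and the threshold MOTION_THRESH are assumed values; the framework described above additionally tracks objects and identifies light variations, which this sketch does not attempt.

```python
import cv2
import numpy as np

ALPHA = 0.05         # assumed background learning rate
MOTION_THRESH = 1.0  # assumed flow-magnitude threshold (pixels per frame)

def update_background(background, prev_gray, curr_gray):
    """Selectively update a running-average background using dense optical flow.

    Pixels with large flow magnitude are assumed to belong to moving objects
    and are excluded from the update, so stopped or slowly moving objects are
    not absorbed too quickly. `background` is a float32 image of the same size
    as the grayscale frames.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    static = magnitude < MOTION_THRESH
    background[static] = ((1 - ALPHA) * background[static]
                          + ALPHA * curr_gray[static].astype(np.float32))
    return background
```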

Change Detection based Real Time Video Object Segmentation

2012

Segmentation of video foreground objects from the background has many important applications, such as human-computer interaction, video compression, and multimedia content editing and manipulation. The key idea in our paper is to obtain the moving-object region, which is treated as the candidate foreground, while the remaining region is treated as background. An efficient video object segmentation algorithm is proposed based on change detection and background updating that can quickly extract the moving object from a video sequence. Change detection is used to analyse temporal information between successive frames to obtain the changed region. Then, the frame-difference mask and the background-subtraction mask are combined to acquire the initial object mask and to address the uncovered-background and still-object problems. Moreover, boundary refinement is introduced to overcome shadow influence and residual background. The advantage of change detection based approaches is their low computational load and system complexity, enabling real-time applications.
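
The mask-combination step can be illustrated with a few lines of OpenCV: a frame-difference mask captures currently changing pixels, a background-subtraction mask recovers still object parts and uncovered background, and their union forms an initial object mask. The threshold DIFF_THRESH is an assumed value, and the sketch omits the paper's background updating and boundary refinement.

```python
import cv2

DIFF_THRESH = 25  # assumed intensity-difference threshold

def initial_object_mask(prev_gray, curr_gray, background_gray):
    """Combine a frame-difference mask with a background-subtraction mask.

    The frame difference finds currently changing pixels; the background
    subtraction recovers still or slowly moving object parts and uncovered
    background. This is a generic sketch of the combination step only.
    """
    _, fd_mask = cv2.threshold(cv2.absdiff(curr_gray, prev_gray),
                               DIFF_THRESH, 255, cv2.THRESH_BINARY)
    _, bs_mask = cv2.threshold(cv2.absdiff(curr_gray, background_gray),
                               DIFF_THRESH, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_or(fd_mask, bs_mask)
```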

Changedetection.net: A new change detection benchmark dataset

2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2012

Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video dataset exists for benchmarking different methods. Presented here is a unique change detection benchmark dataset consisting of nearly 90,000 frames in 31 video sequences representing 6 categories selected to cover a wide range of challenges in 2 modalities (color and thermal IR). A distinguishing characteristic of this dataset is that each frame is meticulously annotated for ground-truth foreground, background, and shadow area boundaries, an effort that goes well beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of change detection algorithms. This paper presents and discusses various aspects of the new dataset, the quantitative performance metrics used, and comparative results for over a dozen previous and new change detection algorithms. The dataset, evaluation tools, and algorithm rankings are publicly available on a website and will be updated with feedback from academia and industry in the future.
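
For reference, the sketch below computes per-frame precision, recall and F-measure from binary result and ground-truth masks, which are among the standard pixel-wise measures used to rank methods on this kind of benchmark. The official evaluation reports additional statistics and excludes pixels outside the region of interest, which this simplified version ignores.

```python
import numpy as np

def precision_recall_f(result_mask, gt_mask):
    """Per-frame precision, recall and F-measure from binary masks (255 = foreground)."""
    res = result_mask > 0
    gt = gt_mask > 0
    tp = np.count_nonzero(res & gt)
    fp = np.count_nonzero(res & ~gt)
    fn = np.count_nonzero(~res & gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```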

Accurate video object segmentation through change detection

2002

We propose an algorithm for the accurate extraction of video objects from color sequences. The semantics defining the video objects is motion, and the extraction algorithm is based on change detection. The color difference between frames is modeled so as to separate the contributions caused by sensor noise and illumination variations from those caused by meaningful objects.
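
A minimal sketch of the underlying idea, separating noise from meaningful change: the colour difference between the current frame and a reference is kept only where it exceeds what sensor noise would plausibly explain. The factor K_SIGMA and the Euclidean colour distance are assumptions; the paper's model also accounts for illumination variations, which are not handled here.

```python
import numpy as np

K_SIGMA = 3.0  # assumed significance factor for the noise test

def change_mask(frame, reference, noise_sigma):
    """Flag pixels whose colour difference exceeds what sensor noise explains.

    noise_sigma is the camera noise level, assumed to be estimated offline.
    Only differences unlikely to be noise are kept as object changes.
    """
    diff = frame.astype(np.float32) - reference.astype(np.float32)
    distance = np.linalg.norm(diff, axis=2)  # Euclidean colour difference per pixel
    return (distance > K_SIGMA * noise_sigma).astype(np.uint8) * 255
```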

Special Issue on “Background Modeling for Foreground Detection in Real-World Dynamic Scenes”

Special Issue on “Background Modeling for Foreground Detection in Real-World Dynamic Scenes”, 2014

Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful as they separate the primal objects, usually called “foreground”, from the remaining part of the scene, called “background”, and permit different algorithmic treatment in video processing fields such as video surveillance, optical motion capture, multimedia applications, teleconferencing and human-computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is performed by change detection. The last decade witnessed very significant publications on background modeling, but recently new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, need new developments to robustly detect moving objects in challenging environments. Thus, methods that are robust to both dynamic backgrounds and illumination changes in real scenes, with fixed cameras or mobile devices, are needed, and different strategies may be used, such as automatic feature selection, model selection or hierarchical models. Another requirement of background modeling methods is that advanced models have to run in real time with low memory requirements; algorithms may need to be redesigned to meet these requirements. Thus, readers can find 1) new methods to model the background, 2) recent strategies to improve foreground detection to tackle challenges such as dynamic backgrounds and illumination changes, and 3) adaptive and incremental algorithms to achieve real-time applications.

Real-time foreground-background segmentation based on improved codebook model

2010

Real-time segmentation of a scene into objects and background is important and represents an initial step of object tracking. Starting from the codebook method [4], we propose some modifications which show significant improvements in most normal and also difficult conditions. We include an access-frequency parameter for deleting, matching and adding codewords in the codebook and for moving cache codewords into the codebook. We also propose an evaluation method to objectively compare several segmentation techniques, based on receiver operating characteristic (ROC) analysis and on precision and recall. We propose to summarize the quality factor of a method by a single value based on a weighted Euclidean distance or on a harmonic mean between two related characteristics.
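
The two summary scores mentioned above can be written down directly: the sketch below computes the harmonic mean of precision and recall (the F-measure) and a weighted Euclidean distance from the ideal ROC operating point. The weights are placeholders, since the abstract does not specify them.

```python
import math

def f_measure(precision, recall):
    """Harmonic mean of precision and recall (one kind of single-value quality score)."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def roc_distance(tpr, fpr, w_tpr=1.0, w_fpr=1.0):
    """Weighted Euclidean distance from the ideal ROC point (TPR = 1, FPR = 0).

    Smaller is better. The weights w_tpr and w_fpr are assumptions here;
    the paper's exact weighting is not given in the abstract.
    """
    return math.sqrt(w_tpr * (1.0 - tpr) ** 2 + w_fpr * fpr ** 2)
```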

Hybrid Codebook Model for Foreground Object Segmentation and Shadow/Highlight Removal

Journal of Information Science and Engineering, 2014

Real-time foreground object extraction is an important subject for computer vision applications. Model-based background subtraction methods have been used to extract the foreground objects. Different from previous methods, this paper introduces a hybrid codebook-based background subtraction method combining the mixture of Gaussians (MOG) with the codebook (CB) method. We propose an ellipsoid CB model for modeling dynamic background with highlight and shadow, and develop a modified shadow/highlight removal method to overcome the influence of illumination change. Our method can avoid extracting false foreground pixels (e.g., dark background) or missing real foreground pixels (e.g., bright foreground). Finally, we conducted two experiments to compare the performance of our method with others, based on [18] and on the change detection benchmark dataset provided at CVPR 2011, respectively.
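
The MOG side of such a hybrid can be approximated with OpenCV's built-in MOG2 subtractor, which also labels shadow pixels separately. The sketch below shows only that readily available baseline; it is not the paper's ellipsoid codebook model or its modified shadow/highlight removal.

```python
import cv2

# OpenCV's MOG2 subtractor with shadow detection; shadow pixels are labelled
# with value 127, foreground with 255, background with 0.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

def foreground_without_shadows(frame):
    """Return a binary foreground mask with MOG2's detected shadows removed."""
    raw = subtractor.apply(frame)
    _, mask = cv2.threshold(raw, 200, 255, cv2.THRESH_BINARY)  # drop shadow label 127
    return mask
```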

Effective scene change detection in complex environments

International Journal of Computational Vision and Robotics, 2019

One of the fundamental operations in computer vision applications is change detection, in which moving foreground objects are segmented from a static background. A common approach to change detection is the comparison of an image frame with a stored background model using a matching algorithm, a process known as background subtraction. However, such techniques fail in environments with dynamic backgrounds, illumination changes, shadows, or camera jitter. This study focuses on effectively detecting scene changes in complex environments. To this end, we proposed a new colour descriptor named local colour difference pattern (LCDP) that is insusceptible to shadows and is able to capture both colour and texture features at a pixel location. Furthermore, a scene change detection framework was proposed to handle dynamic scenes based on sample consensus that integrates LCDP and a novel spatial model fusion mechanism. Experiments using the CDnet benchmark dataset demonstrated the effectiveness of the proposed approach to change detection in complex environments.
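
The sample-consensus classification mentioned above can be sketched generically: each pixel keeps a small set of previously observed feature vectors and is labelled background when enough of them lie close to the current observation. The constants and the plain feature vectors are assumptions; the paper applies this kind of test to its LCDP colour/texture descriptor together with a spatial model fusion mechanism.

```python
import numpy as np

RADIUS = 20.0    # assumed matching radius in feature space
MIN_MATCHES = 2  # assumed consensus threshold

def is_background(samples, feature):
    """Sample-consensus test for one pixel.

    samples: array of shape (N, D) holding previously observed feature vectors
    for this pixel (e.g. N = 20); feature: the current D-dimensional observation.
    The pixel is background if enough stored samples lie within RADIUS of it.
    """
    distances = np.linalg.norm(samples - feature, axis=1)
    return np.count_nonzero(distances < RADIUS) >= MIN_MATCHES
```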

A Self-adaptive CodeBook (SACB) model for real-time background subtraction

Image and Vision Computing, 2015

Effective and efficient background subtraction is important to a number of computer vision tasks. In this paper, we introduce several new techniques to address key challenges in background modeling for moving-object detection in videos. The novel features of our proposed Self-Adaptive CodeBook (SACB) background model are a more effective color model using the YCbCr color space, a robust statistical parameter estimation method, and a new algorithm for adding new background codewords to the permanent model and deleting noisy codewords from the models. Also, a new block-based approach is introduced to exploit local spatial information. The proposed model is rigorously tested and compared with several previous models and has shown significant performance improvements.
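
The benefit of a YCbCr colour model can be sketched as a codeword-matching test that tolerates luma (brightness) deviations more loosely than chroma deviations, so shadows are less likely to be flagged as foreground. The tolerances below are assumed values; SACB estimates its thresholds statistically and adds block-level spatial information, neither of which is shown.

```python
import cv2
import numpy as np

Y_TOL = 30.0       # assumed luma (brightness) tolerance
CHROMA_TOL = 12.0  # assumed chroma tolerance

def foreground_mask(frame_bgr, background_ycrcb):
    """Per-pixel YCbCr comparison against a single stored background codeword.

    background_ycrcb has the same shape as the frame and holds one Y/Cr/Cb
    codeword per pixel (a real codebook keeps several). Brightness differences
    are tolerated more loosely than colour differences, which helps with shadows.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    diff = np.abs(ycrcb - background_ycrcb.astype(np.float32))
    background = ((diff[..., 0] <= Y_TOL) &
                  (diff[..., 1] <= CHROMA_TOL) &
                  (diff[..., 2] <= CHROMA_TOL))
    return np.where(background, 0, 255).astype(np.uint8)
```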