Real-time foreground-background segmentation using codebook model

Background Modeling and Subtraction by Codebook Construction

We present a new, fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.
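As a rough illustration of the per-pixel codeword test used by codebook-style models, the Python sketch below checks whether an incoming RGB sample matches a codeword via a color-distortion bound and a brightness range; the threshold values and field names are illustrative assumptions rather than the authors' exact parameters.

    import numpy as np

    def color_distortion(pixel, codeword_rgb):
        # Distance from the sample to the line through the origin and the codeword color.
        x = np.asarray(pixel, dtype=float)
        v = np.asarray(codeword_rgb, dtype=float)
        p2 = float(np.dot(x, x))
        proj2 = float(np.dot(x, v)) ** 2 / max(float(np.dot(v, v)), 1e-12)
        return np.sqrt(max(p2 - proj2, 0.0))

    def matches(pixel, codeword, eps=10.0, alpha=0.5, beta=1.3):
        # A sample matches a codeword if its color distortion is small and its
        # brightness falls inside the codeword's learned brightness bounds.
        brightness = float(np.linalg.norm(pixel))
        i_low = alpha * codeword["i_max"]
        i_high = min(beta * codeword["i_max"], codeword["i_min"] / alpha)
        return (color_distortion(pixel, codeword["rgb"]) <= eps
                and i_low <= brightness <= i_high)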

Dynamic Codebook for Foreground Segmentation in a Video

ECTI Transactions on Computer and Information Technology (ECTI-CIT), 2017

Foreground segmentation in a video is a way to extract changes in image sequences. It is a key early-stage task in many computer vision applications: the changes in the scene must be segmented before any further analysis can take place. However, it remains difficult owing to several real-world challenges, such as cluttered backgrounds, illumination changes, shadows, and long-term scene changes. This paper proposes a novel method, namely a dynamic codebook (DCB), to address such variations in the background scene. It relies on dynamic modeling of the background. Initially, a codebook is constructed to represent the background information of each pixel over a period of time. Then, a dynamic boundary of the codebook is maintained to support variations of the background, so the revised codebook stays adaptive to new background environments. This makes the foreground segmentation more robust to changes in the background scene. The proposed method has been evaluated on the changedetection.net (CDnet) benchmark, a well-known video dataset for testing change-detection algorithms. The experimental results and comprehensive comparisons show a very promising performance of the proposed method.
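The abstract describes the dynamic boundary only at a high level, so the following Python sketch shows one plausible way such an adaptive per-codeword matching range could be maintained; the update rule, rates, and limits are assumptions for illustration, not the authors' formulation.

    def update_boundary(codeword, pixel_value, grow=0.1, shrink=0.01,
                        min_eps=5.0, max_eps=30.0):
        # Widen the matching range when the observed value deviates beyond the
        # current boundary; otherwise let the boundary shrink slowly back.
        deviation = abs(pixel_value - codeword["mean"])
        if deviation > codeword["eps"]:
            codeword["eps"] = min(codeword["eps"] + grow * (deviation - codeword["eps"]),
                                  max_eps)
        else:
            codeword["eps"] = max(codeword["eps"] - shrink, min_eps)
        return codeword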

Real-time foreground-background segmentation based on improved codebook model

2010

Real-time segmentation of a scene into objects and background is important and represents an initial step of object tracking. Starting from the codebook method [4], we propose modifications that show significant improvements under most normal and also difficult conditions. We introduce an access-frequency parameter for matching, adding, and deleting codewords in the codebook and for moving cache codewords into the codebook. We also propose an evaluation method to objectively compare several segmentation techniques, based on receiver operating characteristic (ROC) analysis and on precision and recall. We propose to summarize the quality of a method by a single value based on a weighted Euclidean distance or on a harmonic mean between two related characteristics.
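To make the single-value quality summary concrete, here is a small Python sketch of the two options mentioned: the harmonic mean of precision and recall (the F-measure) and a weighted Euclidean distance to the ideal ROC point. The weighting scheme shown is an assumption; the paper's exact formula is not reproduced here.

    import math

    def f_measure(precision, recall):
        # Harmonic mean of precision and recall; 1.0 is perfect, 0.0 is worst.
        if precision + recall == 0:
            return 0.0
        return 2.0 * precision * recall / (precision + recall)

    def weighted_roc_distance(tpr, fpr, w=0.5):
        # Weighted Euclidean distance to the ideal ROC point (FPR = 0, TPR = 1);
        # smaller is better. The weight w is illustrative.
        return math.sqrt(w * (1.0 - tpr) ** 2 + (1.0 - w) * fpr ** 2)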

Pixel-wise Background Segmentation with Moving Camera

Lecture Notes in Computer Science, 2013

This paper proposes a novel approach for background extraction of a scene captured by a moving camera. The proposed method uses a codebook, a compression technique used to store data from a long sequence of video frames. This technique is used to construct a model that can segment out the foreground using only a few initial video frames as a training sequence. It is a dynamic model that keeps learning from new video frames throughout its lifetime while simultaneously producing output. It uses a pixel-wise approach, and the codebooks for each pixel are built independently. Special emphasis is laid on image intensity, as the human eye is more sensitive to intensity variations. A two-layer model is employed, where codewords are promoted from the cache to the background model after satisfying frequency and negative run-length conditions, as sketched below. Experimental results show the efficacy of the proposed method.
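A minimal Python sketch of the cache-to-background promotion described above, assuming each codeword records an access frequency and a negative run length (the longest interval without a match); the threshold values are illustrative.

    def promote_cache_codewords(cache, background, min_freq=10, max_neg_run=30):
        # Codewords seen often enough, and not left unmatched for too long,
        # move from the cache layer into the permanent background model.
        remaining = []
        for cw in cache:
            if cw["freq"] >= min_freq and cw["neg_run"] <= max_neg_run:
                background.append(cw)
            else:
                remaining.append(cw)
        return remaining, background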

A Self-adaptive CodeBook (SACB) model for real-time background subtraction

Image and Vision Computing, 2015

Effective and efficient background subtraction is important to a number of computer vision tasks. In this paper, we introduce several new techniques to address key challenges of background modeling for moving-object detection in videos. The novel features of our proposed Self-Adaptive CodeBook (SACB) background model are a more effective color model using the YCbCr color space, a robust statistical parameter estimation method, and a new algorithm for adding new background codewords to the permanent model and deleting noisy codewords from it. In addition, a new block-based approach is introduced to exploit local spatial information. The proposed model is rigorously tested and compared with several previous models and shows significant performance improvements.
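As an illustration of matching in YCbCr with separate luma and chroma tolerances (the general idea behind such a color model, not the SACB paper's exact test), the Python sketch below compares a frame against a single background image; the tolerance values are assumptions. Note that OpenCV orders the channels Y, Cr, Cb.

    import cv2
    import numpy as np

    def ycbcr_background_mask(frame_bgr, model_ycrcb, y_tol=20, c_tol=8):
        # Luma (Y) gets a looser tolerance than the chroma channels, so moderate
        # illumination changes are still explained by the background model.
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)
        diff = np.abs(ycrcb - model_ycrcb.astype(np.int16))
        y_ok = diff[..., 0] <= y_tol
        c_ok = (diff[..., 1] <= c_tol) & (diff[..., 2] <= c_tol)
        return y_ok & c_ok  # True where the pixel is classified as background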

Online codebook modeling based background subtraction with a moving camera

2017

This paper proposes a new background subtraction method for object detection with a moving camera. Key points are first extracted and tracked. From the tracking results, spatial transformation relationships between the background scenes in consecutive frames are obtained, and the current frame is warped to the previous image plane to compensate for camera movement. A codebook background model, exploiting the full RGB color information, is constructed and updated online and used to distinguish foreground from background regions. Both qualitative and quantitative experimental results show that the proposed method outperforms its counterparts.
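A minimal sketch of the camera-motion compensation step, assuming a RANSAC homography between tracked key points is an adequate model of the inter-frame transform (the paper's exact estimation procedure may differ); it uses standard OpenCV routines.

    import cv2

    def warp_to_previous_plane(prev_gray, curr_gray, curr_frame):
        # Track corners from the previous frame into the current one, estimate a
        # homography, and warp the current frame back onto the previous image plane.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=8)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
        good_prev = pts_prev[status.ravel() == 1]
        good_curr = pts_curr[status.ravel() == 1]
        H, _ = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 3.0)
        h, w = prev_gray.shape[:2]
        return cv2.warpPerspective(curr_frame, H, (w, h))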

Special Issue on “Background Modeling for Foreground Detection in Real-World Dynamic Scenes”

Special Issue on “Background Modeling for Foreground Detection in Real-World Dynamic Scenes”, 2014

Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful as they separate the primal objects, usually called the "foreground", from the remaining part of the scene, called the "background", and permit different algorithmic treatment in video processing fields such as video surveillance, optical motion capture, multimedia applications, teleconferencing, and human-computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is then performed by change detection. The last decade witnessed very significant publications on background modeling, but recent applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, need new developments to robustly detect moving objects in challenging environments. Thus, effective methods that are robust to both dynamic backgrounds and illumination changes in real scenes, captured with fixed cameras or mobile devices, are needed, and different strategies may be used, such as automatic feature selection, model selection, or hierarchical models. Another requirement of background modeling methods is that advanced models have to be computed in real time and with low memory demands; algorithms may need to be redesigned to meet these requirements. Thus, readers can find 1) new methods to model the background, 2) recent strategies to improve foreground detection to tackle challenges such as dynamic backgrounds and illumination changes, and 3) adaptive and incremental algorithms to achieve real-time applications.

Video Segmentation Framework by Dynamic Background Modelling

Lecture Notes in Computer Science, 2013

Detecting moving objects in video streams is the first relevant step of information extraction in many computer vision applications, e.g. video surveillance systems. In this work, a video segmentation framework based on dynamic background modelling is presented. Our approach aims to suitably update the background model of a scene recorded by a static camera. For this purpose, we develop an optical-flow-based methodology to track moving objects, which may stop or change their movement smoothly over the video. Moreover, a light-variation identification stage is employed to avoid confusing illumination changes with objects in movement. In this way, our approach ensures suitable background modelling in real-world scenarios. The attained results show that our framework outperforms state-of-the-art methodologies on well-known datasets.
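To illustrate the optical-flow component (not the authors' full tracking pipeline), the Python sketch below computes dense Farnebäck flow and thresholds its magnitude; pixels flagged as moving would be kept out of the background update so that stopped objects are not absorbed too quickly. The threshold is an assumption.

    import cv2
    import numpy as np

    def motion_mask(prev_gray, curr_gray, mag_thresh=1.0):
        # Dense optical flow between consecutive grayscale frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        return magnitude > mag_thresh  # True where the pixel appears to be moving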

Foreground Detection Based on Real-time Background Modeling and Robust Subtraction

This paper presents a robust approach for detecting moving objects against a static background scene that contains slow illumination changes, physical changes, and micro-movements. First, we propose a new background modeling algorithm that adapts to slow illumination and physical changes. This algorithm, based on pixel-state computation and background pixel-state decision, does not require training sequences that exclude moving objects. Second, we develop an efficient background subtraction algorithm that copes with micro-movements of the background scene by computing the similarity between the incoming pixel and its neighborhood pixels in the background model. Finally, we apply this approach to video surveillance sequences of both indoor and outdoor scenes. The results demonstrate the effectiveness of our approach.
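A minimal Python sketch of the neighbourhood-comparison idea: a pixel is treated as background if it is close to any background-model pixel inside a small window, which tolerates micro-movements of the scene. The window radius, tolerance, and wrap-around border handling (via np.roll) are simplifications, not the paper's exact procedure.

    import numpy as np

    def foreground_mask(frame, background, radius=1, tol=20):
        # Compare each pixel against every background pixel within a
        # (2*radius+1) x (2*radius+1) neighbourhood.
        f = frame.astype(np.int16)
        b = background.astype(np.int16)
        fg = np.ones(frame.shape[:2], dtype=bool)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(b, (dy, dx), axis=(0, 1))
                diff = np.abs(f - shifted)
                close = np.all(diff <= tol, axis=-1) if f.ndim == 3 else diff <= tol
                fg &= ~close  # still foreground only if no neighbour explains the pixel
        return fg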

Efficient Background Subtraction Using Improved Multilayered Codebook

International Journal of Computer and Organization Trends, 2016

Detection of moving objects in video is a highly demanding area of research for object tracking. Background subtraction is the technique used to extract the foreground for object recognition in a video, and fast background subtraction algorithms can yield good foreground detection results. Codebooks are used to store compressed background information, enabling high-speed processing with low memory usage. The multilayered codebook (MCB) model uses block-based and pixel-based codebooks for high-speed background subtraction, but without noise refinement and edge smoothing it may detect a single object as multiple objects. The improved MCB refines the MCB results by employing a median filter, a smoothing technique that also reduces noise in the video. As a result, the improved multilayered codebook model handles noise and edge smoothing better than traditional models.
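A minimal sketch of the refinement step, assuming the raw foreground mask is an 8-bit binary image; cv2.medianBlur is used here as the median filter the abstract refers to, with an illustrative kernel size.

    import cv2

    def refine_mask(raw_mask_u8, ksize=5):
        # Median filtering removes salt-and-pepper noise and smooths ragged edges,
        # so a single object is less likely to be detected as several blobs.
        return cv2.medianBlur(raw_mask_u8, ksize)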