Can You See Me Now? Sensor Positioning for Automated and Persistent Surveillance
Related papers
Sensor planning for automated and persistent object tracking with multiple cameras
2008
Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). However, visibility, a fundamental requirement of object tracking, is insufficient for persistent and automated tracking. In such applications, a continuous and consistently labeled trajectory of the same object should be maintained across different cameras' views. Therefore, a sufficient overlap between the cameras' FOVs should be secured so that camera handoff can be executed successfully and automatically before the object of interest becomes untraceable or unidentifiable. The proposed sensor planning method improves existing algorithms by adding handoff rate analysis, which preserves the necessary overlapped FOVs for an optimal handoff success rate. In addition, special considerations such as resolution and frontal view requirements are addressed using two approaches: direct constraint and adaptive weight. The resulting camera placement is compared with a reference algorithm by Erdem and Sclaroff. Significantly improved handoff success rates and frontal view percentages are demonstrated via experiments using typical office floor plans.
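To make the handoff-rate idea concrete, here is a minimal sketch (our own illustration, not the paper's formulation) that scores a candidate placement by coverage plus overlap inside a "handoff band" near each FOV boundary. FOVs are idealized as disks; the margin, weight, and function names are assumptions.

```python
# Minimal sketch of a coverage + handoff objective on a 2D grid.
# FOVs are idealized as disks; a real system would use angular sectors
# and occlusion tests. All names and weights here are illustrative.
import numpy as np

def placement_score(cams, radius, grid, margin=0.15, w_handoff=0.5):
    """cams: (k,2) camera positions; grid: (n,2) sample points.

    A point is 'covered' if inside any FOV disk; it lies in a camera's
    handoff band if within `margin * radius` of that FOV's boundary.
    Handoff credit is given when a band point is seen by >= 2 cameras.
    """
    d = np.linalg.norm(grid[:, None, :] - cams[None, :, :], axis=2)  # (n,k)
    inside = d <= radius
    coverage = inside.any(axis=1).mean()
    in_band = inside & (d >= (1.0 - margin) * radius)
    # overlap needed: point near one FOV edge but seen by another camera
    handoff_ok = (in_band.any(axis=1) & (inside.sum(axis=1) >= 2)).mean()
    return coverage + w_handoff * handoff_ok

rng = np.random.default_rng(0)
grid = rng.uniform(0, 10, size=(2000, 2))
cams = np.array([[3.0, 3.0], [7.0, 6.0], [5.0, 8.0]])
print(placement_score(cams, radius=4.0, grid=grid))
```

A placement optimizer would then maximize this score over candidate camera positions, trading raw coverage against the overlap required for reliable handoff.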
Camera Placement Meeting Restrictions of Computer Vision
2020 IEEE International Conference on Image Processing (ICIP), 2020
With the rapid spread of smart edge devices, surveillance cameras have been deployed in many locations. Surveillance cameras are most useful when they are spaced out to maximize coverage of an area. However, deciding where to place cameras is an NP-hard problem, and researchers have proposed heuristic solutions. Existing work does not consider a significant restriction of computer vision: in order to track a moving object, the object must occupy enough pixels. The number of pixels depends on many factors (How far away is the object? What is the camera resolution? What is the focal length?). In this study, we propose a camera placement method that identifies effective camera placements in arbitrary spaces and can account for different camera types as well. Our strategy represents spaces as polygons, then uses a greedy algorithm to partition the polygons and determine the cameras' locations to provide the desired coverage. Our solution also makes it possible to perform object tracking via overlapping camera placement. Our method is evaluated against complex shapes and real-world museum floor plans, achieving up to 85% coverage and 25% overlap.
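The pixel requirement mentioned here follows directly from the pinhole camera model; below is a back-of-the-envelope sketch (our own, with illustrative numbers) for checking whether a target at a given distance occupies enough pixels, and for deriving the maximum tracking range of a given camera type.

```python
# Back-of-the-envelope pixel-footprint check for the tracking constraint
# described above: does an object at distance Z occupy enough pixels?
# Pinhole-camera approximation; names and thresholds are illustrative.
import math

def pixels_on_target(target_h_m, dist_m, img_h_px, vfov_deg):
    """Approximate vertical pixel extent of an object via the pinhole model."""
    f_px = (img_h_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    return f_px * target_h_m / dist_m

def max_tracking_range(target_h_m, min_px, img_h_px, vfov_deg):
    """Farthest distance at which the object still covers min_px pixels."""
    f_px = (img_h_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    return f_px * target_h_m / min_px

# e.g. a 1.7 m person, 1080p camera with a 45 degree vertical FOV
print(round(pixels_on_target(1.7, 20.0, 1080, 45.0)), "px at 20 m")
print(round(max_tracking_range(1.7, 60, 1080, 45.0), 1), "m max range for 60 px")
```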
Multi-Camera Positioning for Automated Tracking Systems in Dynamic Environments
International Journal of Information Acquisition, 2010
Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). In recent literature, a handoff safety margin has been introduced into sensor planning so that sufficient overlapped FOVs among adjacent cameras are reserved for successful and smooth target transitions. In this paper, we investigate the sensor planning problem when considering the dynamic interactions between moving targets and observing cameras. The probability of camera overload is explored to model the aforementioned interactions. The introduction of the probability of camera overload also accounts for the limitation that a given camera can simultaneously monitor or track only a fixed number of targets, and incorporates the target's dynamics into sensor planning. The resulting camera placement not only achieves the optimal balance between coverage and handoff success rate but also maintains the optimal bal...
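As a toy illustration of the overload idea (ours, not the paper's model, which additionally incorporates target dynamics): if N targets each fall in a camera's FOV independently with probability p, and the camera can track at most k targets at once, the overload probability is the binomial tail P(X > k).

```python
# Toy model of the 'probability of camera overload' idea: overload occurs
# when more targets enter a camera's FOV than it can track simultaneously.
# The independence assumption here is ours, made for illustration only.
from math import comb

def overload_probability(n_targets, p_in_fov, capacity):
    """P(X > capacity) for X ~ Binomial(n_targets, p_in_fov)."""
    return sum(
        comb(n_targets, m) * p_in_fov**m * (1 - p_in_fov)**(n_targets - m)
        for m in range(capacity + 1, n_targets + 1)
    )

print(overload_probability(10, 0.3, 4))  # 10 targets, camera capacity of 4
```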
Multiple camera coordination in a surveillance system
2000
In this paper, we present a distributed surveillance system that uses multiple inexpensive static cameras to track multiple people in indoor environments. The system has a set of Camera Processing Modules and a Central Module to coordinate the tracking tasks among the cameras. Since each object in the scene can be tracked by a number of cameras, the problem is how to choose the most appropriate camera for each object. This is important given the need to deal with limited resources (CPU, power, etc.). We propose a novel algorithm to allocate objects to cameras using the object-to-camera distance while taking occlusion into account. The algorithm attempts to assign objects in overlapping fields of view to the nearest camera that can see the object without occlusion. Experimental results show that the system can coordinate cameras to track people properly and can deal well with occlusion.
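A minimal sketch of that allocation rule follows (our own reading of the idea, not the paper's implementation): assign each object to the nearest camera that sees it without occlusion. The `visible` matrix is assumed to come from a separate occlusion test; tie-handling, load balancing, and handoff hysteresis are omitted.

```python
# Assign each object to the nearest unoccluded camera.
import numpy as np

def assign_objects(obj_pos, cam_pos, visible):
    """obj_pos: (n,2), cam_pos: (k,2), visible: (n,k) bool.
    Returns the chosen camera index per object, or -1 if none sees it."""
    d = np.linalg.norm(obj_pos[:, None, :] - cam_pos[None, :, :], axis=2)
    d = np.where(visible, d, np.inf)       # occluded cameras are ineligible
    choice = d.argmin(axis=1)
    choice[~visible.any(axis=1)] = -1      # object seen by no camera
    return choice

obj = np.array([[1.0, 1.0], [8.0, 2.0]])
cams = np.array([[0.0, 0.0], [9.0, 0.0]])
vis = np.array([[True, True], [False, True]])
print(assign_objects(obj, cams, vis))  # -> [0 1]
```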
Automated Placement of Cameras in a Floorplan to Satisfy Task-Specific Constraints
2003
In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In preliminary experiments the performance of the resulting system is demonstrated with different real floorplans.
Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements
Computer Vision and Image Understanding, 2006
In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions use unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floor plans. Given a floor plan to be observed, the problem is then to reliably compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In experiments the performance of the resulting system is demonstrated with different real indoor and outdoor floor plans.
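In its simplest form, the binary-optimization formulation described in these two papers reduces to a set-cover-style program: choose a minimum set of candidate camera poses whose visibility sets jointly cover all sampled floor-plan points. A dependency-free greedy approximation (our sketch, not the authors' exact binary solver) looks like this:

```python
# Greedy set-cover approximation for camera placement: repeatedly pick the
# candidate pose that covers the most still-uncovered sample points.
def greedy_camera_cover(visibility, n_points):
    """visibility: dict {candidate_pose: set of covered point indices}."""
    uncovered = set(range(n_points))
    chosen = []
    while uncovered:
        best = max(visibility, key=lambda c: len(visibility[c] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:          # remaining points are not coverable at all
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

vis = {"A": {0, 1, 2}, "B": {2, 3}, "C": {3, 4, 5}}
print(greedy_camera_cover(vis, 6))  # -> (['A', 'C'], set())
```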
Camera handoff and placement for automated tracking systems with multiple omnidirectional cameras
Computer Vision and Image Understanding - CVIU, 2010
In a multi-camera surveillance system, both camera handoff and placement play an important role in achieving automated and persistent object tracking, typical of most surveillance requirements. Camera handoff should comprise three fundamental components: the time to trigger the handoff process, the execution of consistent labeling, and the selection of the next optimal camera. In this paper, we design an observation measure to quantitatively formulate the effectiveness of object tracking so that we can trigger camera handoff in a timely manner and select the next camera appropriately before the tracked object falls out of the field of view (FOV) of the currently observing camera. In addition, we present a novel solution to the consistent labeling problem in omnidirectional cameras. A spatial mapping procedure is proposed that considers both the noise inherent to the tracking algorithms used by the system and the lens distortion introduced by omnidirectional cameras. This does not only avoid the te...
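An observation measure in this spirit might score a tracked object by how far it sits from the FOV boundary and how large it appears, triggering handoff when the score drops below a threshold. The sketch below is our own illustration; the functional form, weights, and threshold are assumptions, not the paper's definition.

```python
# Illustrative observation measure: combines distance to the FOV boundary
# (normalized by image half-size) with apparent target size (normalized by
# a minimum pixel requirement). All weights and names are assumptions.
def observation_measure(dist_to_fov_edge_px, img_half_size_px,
                        target_px, min_px, w_edge=0.6, w_res=0.4):
    edge_term = min(dist_to_fov_edge_px / img_half_size_px, 1.0)
    res_term = min(target_px / (2.0 * min_px), 1.0)
    return w_edge * edge_term + w_res * res_term

def should_handoff(score, threshold=0.4):
    """Trigger handoff before tracking quality degrades too far."""
    return score < threshold

s = observation_measure(dist_to_fov_edge_px=80, img_half_size_px=960,
                        target_px=50, min_px=40)
print(round(s, 3), should_handoff(s))  # low score near FOV edge -> handoff
```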
Computers, Environment and Urban Systems
Along with the rapidly growing volume of public security events, efficient camera planning and configuration methods have become one of the most crucial challenges in the video surveillance field. How to allocate different types of surveillance cameras in an area is one of the fundamental problems; however, few methods are available for generating the deployment parameters of cameras. The purpose of this paper is to explore camera planning based on multi-source Location-Based Service data. The main idea is to infer camera coverage from building footprints, Points of Interest (POI), and social network (WeChat) record data, and to optimize the camera placement using the Maximal Coverage Location Problem-Complementary Coverage (MCLP-CC) model. Based on the probability that each cell is monitored, computed through viewshed analysis, the candidate location with the maximum probability is selected. The essential spots in the surveillance area are identified by combining kernel density estimation of the POI and WeChat data. An inference algorithm for the location, field-of-view angle, orientation (yaw), and visible-distance parameters is proposed, using the candidate locations and critical spots in the viewshed polygon. The MCLP-CC model is implemented with Python scripts and Gurobi software. The experiment shows that the proposed method can generate detailed camera parameters, including location, field-of-view angle, orientation (yaw), and visible distance, with lower occlusion and overlap ratios for camera coverage. We believe that integrating the coverage inference and optimization methods into existing GIS platforms will promote a variety of innovative applications in the camera planning area.
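For readers unfamiliar with the model family, here is a minimal maximal-coverage (MCLP) sketch in gurobipy, in the spirit of what the paper solves with Gurobi (a license is required to run it). The `covers` and `weights` data, the budget, and all names are illustrative; the complementary-coverage extension of MCLP-CC is omitted.

```python
# Minimal MCLP: pick at most `budget` camera sites to maximize the total
# weight of covered demand cells. covers[j] lists the candidate sites whose
# viewshed contains cell j; weights could come from POI/WeChat density.
import gurobipy as gp
from gurobipy import GRB

def solve_mclp(n_sites, covers, weights, budget):
    m = gp.Model("mclp")
    x = m.addVars(n_sites, vtype=GRB.BINARY, name="site")     # place camera?
    y = m.addVars(len(covers), vtype=GRB.BINARY, name="cov")  # cell covered?
    for j, sites in enumerate(covers):
        m.addConstr(y[j] <= gp.quicksum(x[i] for i in sites))
    m.addConstr(x.sum() <= budget)                            # camera budget
    m.setObjective(gp.quicksum(weights[j] * y[j]
                               for j in range(len(covers))), GRB.MAXIMIZE)
    m.optimize()
    return [i for i in range(n_sites) if x[i].X > 0.5]

# toy instance: 3 candidate sites, 4 weighted demand cells, budget of 2
print(solve_mclp(3, covers=[[0], [0, 1], [1, 2], [2]],
                 weights=[1.0, 2.0, 2.0, 1.0], budget=2))
```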
Camera Placement Optimization Conditioned on Human Behavior and 3D Geometry
2016
This paper proposes an algorithm to optimize the placement of surveillance cameras in a 3D infrastructure. The key differentiating feature in the algorithm design is the incorporation of human behavior within the infrastructure into the optimization. Depending on their geometry, infrastructures may exhibit regions with dominant human activity. In the absence of observations, this paper presents a method to predict this human behavior and identify such regions so as to deploy an effective surveillance scenario. Domain knowledge about the infrastructure was used to predict possible human motion trajectories, and these trajectories were used to identify areas with dominant human activity. Furthermore, a metric quantifying a camera's position and orientation, based on the observable space, the activity in that space, the pose of objects of interest within the activity, and their image resolution in the camera view, was defined for optimization. This method was compared with th...
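A stripped-down version of such trajectory-conditioned scoring is sketched below (our own illustration): a camera pose is scored by how many predicted trajectory points fall inside its angular FOV, discounted by distance as a crude stand-in for image resolution. The 3D geometry, person pose, and occlusion terms of the paper's metric are left out.

```python
# Score a 2D camera pose against sampled human trajectory points.
import math

def camera_score(cam_xy, cam_yaw_deg, hfov_deg, max_range, traj_points):
    score = 0.0
    half = math.radians(hfov_deg) / 2.0
    yaw = math.radians(cam_yaw_deg)
    for (px, py) in traj_points:
        dx, dy = px - cam_xy[0], py - cam_xy[1]
        r = math.hypot(dx, dy)
        if r == 0 or r > max_range:
            continue
        bearing = math.atan2(dy, dx)
        # wrap the angular offset into [-pi, pi] before the FOV test
        off = math.atan2(math.sin(bearing - yaw), math.cos(bearing - yaw))
        if abs(off) <= half:
            score += 1.0 - r / max_range   # closer points observed better
    return score

traj = [(2, 2), (3, 3), (4, 5), (8, 8)]
print(camera_score((0, 0), cam_yaw_deg=45, hfov_deg=60,
                   max_range=10, traj_points=traj))
```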
Geometric Tools for Multicamera Surveillance Systems
2007 First ACM/IEEE International Conference on Distributed Smart Cameras, 2007
Our analysis and visualization tools use 3D building geometry to support surveillance tasks. These tools are part of DOTS, our multi-camera surveillance system with over 20 cameras spread throughout the public spaces of our building. The geometric input to DOTS is a floor plan plus information such as cubicle wall heights. From this input we construct a 3D model and an enhanced 2D floor plan that serve as the basis for more specific visualization and analysis tools. Foreground objects of interest can be placed within these models and dynamically updated in real time across camera views. Alternatively, a virtual first-person view suggests what a tracked person can see as she moves about. Interactive visualization tools support complex camera-placement tasks. Extrinsic camera calibration is supported both by visualizations of parameter-adjustment results and by methods for establishing correspondences between image features and the 3D model.
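Extrinsic calibration from such 2D-3D correspondences is commonly posed as a Perspective-n-Point (PnP) problem; a minimal OpenCV sketch is shown below. This is a standard technique, not the DOTS implementation, and the correspondence points and intrinsics are made up for illustration.

```python
# Recover a camera's extrinsic pose from 2D-3D correspondences via PnP.
import numpy as np
import cv2

# 3D points from the building model (metres) and their pixel locations
obj_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0],
                    [0, 0, 1], [2, 0, 1]], dtype=np.float32)
img_pts = np.array([[320, 400], [880, 410], [870, 220], [330, 230],
                    [325, 120], [875, 125]], dtype=np.float32)
K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, np.zeros(4))
print("rotation (Rodrigues):", rvec.ravel())
print("translation:", tvec.ravel())
```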