Sensor planning for automated and persistent object tracking with multiple cameras

Can You See Me Now? Sensor Positioning for Automated and Persistent Surveillance

IEEE Transactions on Systems, Man, and Cybernetics, 2010

Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). However, visibility, which is a fundamental requirement of object tracking, is insufficient for automated persistent surveillance. In such applications, a continuous consistently labeled trajectory of the same object should be maintained across different camera views. Therefore, a sufficient uniform overlap between the cameras' FOVs should be secured so that camera handoff can successfully and automatically be executed before the object of interest becomes untraceable or unidentifiable. In this paper, we propose sensor-planning methods that improve existing algorithms by adding handoff rate analysis. Observation measures are designed for various types of cameras so that the proposed sensor-planning algorithm is general and applicable to scenarios with different types of cameras. The proposed sensor-planning algorithm preserves necessary uniform overlapped FOVs between adjacent cameras for an optimal balance between coverage and handoff success rate. In addition, special considerations such as resolution and frontal-view requirements are addressed using two approaches: 1) direct constraint and 2) adaptive weights. The resulting camera placement is compared with a reference algorithm published by Erdem and Sclaroff. Significantly improved handoff success rates and frontal-view percentages are illustrated via experiments using indoor and outdoor floor plans of various scales.
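The handoff idea above can be made concrete as a trigger rule: initiate handoff while the target is still inside the current FOV but within a safety margin of its edge, so consistent labeling can finish before the object becomes untraceable. A minimal sketch, assuming an axis-aligned rectangular ground-plane FOV (the paper's observation measures are more general); all names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CameraFOV:
    # Axis-aligned rectangular ground-plane footprint of a camera's FOV
    # (a simplification; real FOVs are trapezoidal or, for
    # omnidirectional cameras, circular).
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def margin_to_fov_edge(fov: CameraFOV, x: float, y: float) -> float:
    """Smallest distance from a tracked point to any FOV boundary.
    Negative values mean the point is already outside the FOV."""
    return min(x - fov.x_min, fov.x_max - x, y - fov.y_min, fov.y_max - y)

def should_trigger_handoff(fov: CameraFOV, x: float, y: float,
                           safety_margin: float = 1.0) -> bool:
    # Trigger handoff while the object is still visible but within
    # `safety_margin` of the FOV edge, leaving time for consistent
    # labeling and next-camera selection to complete.
    return margin_to_fov_edge(fov, x, y) < safety_margin
```

The uniform overlap the paper argues for is exactly what guarantees that, at the moment this rule fires, an adjacent camera already sees the target.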

Automated Placement of Cameras in a Floorplan to Satisfy Task-Specific Constraints

2003

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In preliminary experiments the performance of the resulting system is demonstrated with different real floorplans.

Multi-Camera Positioning for Automated Tracking Systems in Dynamic Environments

International Journal of Information Acquisition, 2010

Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). In recent literature, a handoff safety margin has been introduced into sensor planning so that sufficient overlapped FOVs among adjacent cameras are reserved for successful and smooth target transitions. In this paper, we investigate the sensor planning problem when considering the dynamic interactions between moving targets and observing cameras. The probability of camera overload is explored to model the aforementioned interactions. The introduction of the probability of camera overload also accounts for the limitation that a given camera can simultaneously monitor or track only a fixed number of targets, and it incorporates the targets' dynamics into sensor planning. The resulting camera placement not only achieves the optimal balance between coverage and handoff success rate but also maintains the optimal bal...
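As a rough illustration of the camera-overload notion above, one can model each of the targets in a region as falling inside a camera's FOV independently with some probability and ask how likely the camera's fixed tracking capacity is to be exceeded. This binomial stand-in is an assumption for illustration, not the paper's actual overload model:

```python
from math import comb

def overload_probability(n_targets: int, p_in_fov: float, capacity: int) -> float:
    """P(more than `capacity` of the n_targets are simultaneously in one
    camera's FOV), with each target inside the FOV independently with
    probability p_in_fov.  Upper-tail of a Binomial(n_targets, p_in_fov)."""
    return sum(comb(n_targets, k) * p_in_fov**k * (1 - p_in_fov)**(n_targets - k)
               for k in range(capacity + 1, n_targets + 1))
```

A placement that drives this probability down for every camera is trading some coverage for robustness to target dynamics, which is the balance the paper formalizes.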

Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements

Computer Vision and Image Understanding, 2006

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions use unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floor plans. Given a floor plan to be observed, the problem is then to reliably compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In experiments the performance of the resulting system is demonstrated with different real indoor and outdoor floor plans.
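The binary optimization over a discrete problem space described above is, at its core, a set-cover problem: discretize the floor plan into grid points, enumerate candidate camera poses, and pick a subset of poses whose combined visibility covers the grid. A greedy heuristic is the standard approximation for such problems. A minimal sketch, assuming an abstract `visible(camera, point)` predicate that encodes the task-specific constraints (FOV limits, range, occlusion by holes); the names are illustrative, not the paper's API:

```python
def greedy_camera_placement(grid_points, candidates, visible, target_coverage=1.0):
    """Greedy set-cover approximation to the camera layout problem:
    repeatedly pick the candidate pose that covers the most
    not-yet-covered grid points, until the coverage target is met or
    no candidate adds coverage."""
    uncovered = set(grid_points)
    chosen = []
    needed = len(grid_points) * target_coverage
    while len(grid_points) - len(uncovered) < needed:
        best = max(candidates,
                   key=lambda c: sum(1 for p in uncovered if visible(c, p)))
        gain = sum(1 for p in uncovered if visible(best, p))
        if gain == 0:
            break  # remaining points are invisible from every candidate pose
        chosen.append(best)
        uncovered -= {p for p in uncovered if visible(best, p)}
    return chosen, uncovered
```

An exact binary-integer formulation (one 0/1 variable per candidate pose, one covering constraint per grid point) yields the optimal layout but scales poorly; the greedy pass gives the classical logarithmic approximation guarantee for set cover.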

Optimal Placement of Cameras in Floorplans to Satisfy Task Requirements and Cost Constraints

2004

In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. Given a floorplan to be observed, the problem is to efficiently compute a camera layout such that certain task-specific constraints are met with minimal camera setup cost. A solution to this problem is obtained via binary optimization over a discrete problem space. In preliminary experiments the performance of the system is demonstrated in two practical experiments on a real floorplan.

Camera handoff and placement for automated tracking systems with multiple omnidirectional cameras

Computer Vision and Image Understanding, 2010

In a multi-camera surveillance system, both camera handoff and placement play an important role in achieving automated and persistent object tracking, a typical requirement of most surveillance applications. Camera handoff should comprise three fundamental components: the time to trigger the handoff process, the execution of consistent labeling, and the selection of the next optimal camera. In this paper, we design an observation measure to quantitatively formulate the effectiveness of object tracking so that we can trigger camera handoff in a timely manner and select the next camera appropriately before the tracked object falls out of the field of view (FOV) of the currently observing camera. In the meantime, we present a novel solution to the consistent labeling problem in omnidirectional cameras. A spatial mapping procedure is proposed to consider both the noise inherent to the tracking algorithms used by the system and the lens distortion introduced by omnidirectional cameras. This does not only avoid the te...

Automatic camera placement for robot vision tasks

1995

Remote sensors such as CCD cameras can be used for a variety of robot sensing tasks, but given restrictions on camera location and imaging geometry, task constraints, and visual occlusion it can be difficult to find viewing positions from which the task can be completed. The complexity of these constraints suggests that automated, quantitative methods of sensor placement are likely to be useful, particularly when the workspace is cluttered and a mobile robot-mounted sensor is being used to increase the sensible region, circumvent occlusions, and so forth.

Optimization of the location of camera in two dimensional floor layout

2013

Installation of security cameras is increasing rapidly as society demands a more secure environment. This motivates the search for an optimum camera placement that improves the coverage of a camera network. Choosing a proper camera placement in a distributed smart-camera network, while accounting for the number of cameras required, is a significant design problem. Thus, a method is proposed to determine the camera placement, implemented in the C and FORTRAN languages. It is advantageous to maximize the coverage area using a minimum number of cameras; hence, to reduce the number of cameras used, the polygonal area is divided into grid points, and camera locations are computed that cover as many grid points as possible. The above problem is formulated as a maximum-coverage problem. Moreover, the optimal camera problem is solved by developing a general visibility model for visual camera networks through Binary Integer Pro...

Camera Placement Meeting Restrictions of Computer Vision

2020 IEEE International Conference on Image Processing (ICIP), 2020

In the blooming era of smart edge devices, surveillance cameras have been deployed in many locations. Surveillance cameras are most useful when they are spaced out to maximize coverage of an area. However, deciding where to place cameras is an NP-hard problem and researchers have proposed heuristic solutions. Existing work does not consider a significant restriction of computer vision: in order to track a moving object, the object must occupy enough pixels. The number of pixels depends on many factors (How far away is the object? What is the camera resolution? What is the focal length?). In this study, we propose a camera placement method that identifies effective camera placement in arbitrary spaces and can account for different camera types as well. Our strategy represents spaces as polygons, then uses a greedy algorithm to partition the polygons and determine the cameras' locations to provide the desired coverage. Our solution also makes it possible to perform object tracking via overlapping camera placement. Our method is evaluated against complex shapes and real-world museum floor plans, achieving up to 85% coverage and 25% overlap.
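The pixels-on-target restriction raised above can be made concrete with the pinhole camera model: the imaged height of an object is the focal length times object height divided by distance, converted from sensor millimetres to pixels. A sketch under that assumption (all parameter names and example values are illustrative, not drawn from the paper):

```python
def pixels_on_target(object_height_m: float, distance_m: float,
                     focal_length_mm: float, sensor_height_mm: float,
                     image_height_px: float) -> float:
    """Approximate vertical pixel extent of an object under the pinhole
    model.  Image height on the sensor is f * H / Z (in mm), then
    scaled by pixels per mm of sensor.  The count falls off as
    1/distance, which is why far-away placements fail tracking even
    when coverage analysis says the object is 'visible'."""
    image_height_mm = focal_length_mm * object_height_m / distance_m
    return image_height_mm * image_height_px / sensor_height_mm
```

A placement algorithm that honors this restriction simply excludes candidate poses for which `pixels_on_target` drops below the tracker's minimum at any covered point.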

Multiple camera coordination in a surveillance system

2000

In this paper, we present a distributed surveillance system that uses multiple cheap static cameras to track multiple people in indoor environments. The system has a set of Camera Processing Modules and a Central Module to coordinate the tracking tasks among the cameras. Since each object in the scene can be tracked by a number of cameras, the problem is how to choose the most appropriate camera for each object. This is important given the need to deal with limited resources (CPU, power, etc.). We propose a novel algorithm to allocate objects to cameras using the object-to-camera distance while taking occlusion into account. The algorithm attempts to assign each object in the overlapping fields of view to the nearest camera that can see it without occlusion. Experimental results show that the system can coordinate cameras to track people properly and can deal well with occlusion.
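The allocation rule described above — assign each object to the nearest camera that sees it without occlusion — can be sketched directly. Here `occluded` is a placeholder for the system's actual occlusion test, and positions are 2-D ground-plane coordinates; the data layout is an assumption for illustration:

```python
from math import hypot

def assign_objects(objects, cameras, occluded):
    """Map each object id to the id of the nearest unoccluded camera,
    or to None if no camera has a clear view.
    objects / cameras: {id: (x, y)}; occluded(cam_id, obj_id) -> bool."""
    assignment = {}
    for obj_id, (ox, oy) in objects.items():
        visible_cams = [(hypot(cx - ox, cy - oy), cam_id)
                        for cam_id, (cx, cy) in cameras.items()
                        if not occluded(cam_id, obj_id)]
        # min() over (distance, id) pairs picks the closest clear view
        assignment[obj_id] = min(visible_cams)[1] if visible_cams else None
    return assignment
```

In a running system this would be re-evaluated by the Central Module as targets move, with hysteresis added so assignments do not flap between two near-equidistant cameras.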