Description and Discovery of Complex Events in Video Surveillance

Intuitive event modeling for personalized behavior monitoring

2008 19th International Conference on Pattern Recognition, 2008

Behavior understanding and semantic interpretation of dynamic visual scenes have attracted a lot of attention in the computer vision research community. Although the use of surveillance cameras has proliferated, understanding the activities they capture remains complex. While users are mostly interested in high-level, subjective semantics, only low-level visual features can be extracted reliably. This paper presents a novel framework for video-guided behavior monitoring, built around the event modeling concept. It enables users to design their own event models, combining elementary concepts and low-level features using expressive formalisms. The framework then detects these events within video streams based on low-level feature extraction and analysis of manual annotations, while taking uncertainty into consideration. Examples of content-based event modeling and detection from video surveillance illustrate the approach.
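
A minimal sketch of the idea in this abstract, under assumptions of mine: a user-defined composite event built from elementary concepts, each detected with a confidence score, with uncertainty propagated by multiplying confidences (the paper's actual formalism and propagation scheme may differ; the names `Detection` and `detect_sequence` are illustrative).

```python
from dataclasses import dataclass

@dataclass
class Detection:
    concept: str       # e.g. "person_enters", "bag_left" (invented labels)
    t: float           # timestamp in seconds
    confidence: float  # reliability of the low-level extraction

def detect_sequence(stream, first, second, max_gap):
    """Detect the composite event 'first followed by second within max_gap
    seconds'. Returns (matched, confidence); the combined confidence is
    the product of the elementary confidences, one simple way to keep
    track of uncertainty."""
    best = None
    for a in (d for d in stream if d.concept == first):
        for b in (d for d in stream if d.concept == second):
            if 0 < b.t - a.t <= max_gap:
                c = a.confidence * b.confidence
                if best is None or c > best:
                    best = c
    return (best is not None, best or 0.0)

stream = [
    Detection("person_enters", 1.0, 0.9),
    Detection("bag_left", 4.5, 0.8),
]
matched, conf = detect_sequence(stream, "person_enters", "bag_left", max_gap=10.0)
# matched is True; combined confidence is 0.9 * 0.8 = 0.72
```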

Behavior and event detection for annotation and surveillance

International Workshop on Content-Based Multimedia Indexing (CBMI), 2008

Visual surveillance and activity analysis is an active research field of computer vision, and several different algorithms have been produced for this purpose. To obtain more robust systems, it is desirable to integrate these algorithms. Toward this goal, the paper presents results in automatic event detection in surveillance videos, together with a distributed application framework for supporting these methods. Results in motion analysis for static and moving cameras, automatic fight detection, shadow segmentation, discovery of unusual motion patterns, and indexing and retrieval are presented. These applications run in real time and are suitable for real-life deployment.

EDCAR: A knowledge representation framework to enhance automatic video surveillance

Expert Systems with Applications, 2019

The main purpose of video-based event recognition is to interpret activities or behaviors within video sequences, in order to detect and isolate specific events, which have to be readily recognized and prompted to the people responsible for their monitoring. In this paper, we present a knowledge representation framework and a system for automatic video surveillance, which analyzes recorded scenes in order to detect the occurrence of specific events defined as targets. The framework, named Elements and Descriptors of Context and Action Representations (EDCAR), enables the representation of relevant elements, general descriptors of the context, and actions that have to be captured, including the definition of action compositions and sequences, in order to monitor and recognize abnormal situations. EDCAR and the associated system also support video summarization of relevant scenes, providing an inference engine to handle complex queries. They have been applied experimentally to several video surveillance scenarios, demonstrating their effectiveness with respect to similar solutions described in the literature.

Representing and recognizing complex events in surveillance applications

2007

In this paper, we investigate the problem of representing and maintaining rule knowledge for a video surveillance application. We focus on the representation of complex events, which cannot be straightforwardly expressed by canonical means. In particular, we highlight the ongoing efforts toward a unifying framework for computable rule and taxonomical knowledge representation.

Understanding Video Events: A Survey of Methods for Automatic Interpretation of Semantic Occurrences in Video

IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2009

Understanding video events, the translation of low-level content in video sequences into high-level semantic concepts, is a research topic that has received much interest in recent years. Important applications of this work include smart surveillance systems, semantic video database indexing, and interactive systems. This technology can be applied to several video domains, including airport terminals, parking lots, traffic, subway stations, aerial surveillance, and sign language data. In this work we survey the two main components of the event understanding process: abstraction and event modeling. Abstraction is the process of molding the data into informative units to be used as input to the event model. Event modeling is devoted to describing events of interest formally and enabling recognition of these events as they occur in the video sequence. Event modeling can be further decomposed into the categories of pattern recognition methods, state event models, and semantic event models. In this survey we discuss this proposed taxonomy of the literature, offer a unifying terminology, and discuss popular abstraction schemes (e.g. Motion History Images) and event modeling formalisms (e.g. the Hidden Markov Model) and their use in video event understanding, using extensive examples from the literature. Finally, we consider the application domain of video event understanding in light of the proposed taxonomy and propose future directions for research in this field.

A New System for Event Detection from Video Surveillance Sequences

Advanced Concepts for …, 2010

In this paper, we present an overview of a hybrid approach for event detection from video surveillance sequences that has been developed within the REGIMVid project. The system can be used to index and search video sequences by visual content. The platform provides moving object segmentation and tracking, high-level feature extraction, and video event detection. We describe the architecture of the system and give an overview of the descriptors supported to date. We then demonstrate the usefulness of the toolbox for feature extraction, event learning, and detection on a large collection of video surveillance data.

Semantic Annotation of Complex Human Scenes for Multimedia Surveillance

2007

A Multimedia Surveillance System (MSS) is considered for automatically retrieving semantic content from complex outdoor scenes involving both human behavior and traffic domains. To characterize the dynamic information attached to detected objects, we consider a deterministic modeling of spatio-temporal features based on abstraction processes toward a fuzzy logic formalism. A situational analysis over the conceptualized information allows us not only to describe human actions within a scene, but also to suggest possible interpretations of the perceived behaviors, such as situations involving theft or the danger of being run over. Toward this end, the different levels of semantic knowledge implied throughout the process are also classified into a proposed taxonomy.
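
To make the fuzzy-abstraction idea concrete, here is an illustrative sketch (not taken from the paper) of mapping one numeric spatio-temporal feature, object speed, onto linguistic terms via triangular membership functions. The term names and breakpoints are assumptions chosen for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_speed(v_mps):
    """Return the degree of membership of a speed (m/s) in each linguistic term.
    Breakpoints are invented for the example."""
    return {
        "standing": tri(v_mps, -0.5, 0.0, 0.7),
        "walking":  tri(v_mps, 0.3, 1.4, 2.5),
        "running":  tri(v_mps, 2.0, 4.0, 8.0),
    }

degrees = fuzzify_speed(1.4)
# at 1.4 m/s, "walking" has full membership and the other terms have none
```

A symbolic layer can then reason over these graded labels ("walking", "running") instead of raw pixel velocities, which is the abstraction step the abstract describes.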

Behaviour Recognition from Video Content: A Logic Programming Approach

International Journal on Artificial Intelligence Tools, 2010

We present a system for recognising human behaviour given a symbolic representation of surveillance videos. The input of our system is a set of time-stamped short-term behaviours, that is, behaviours taking place in a short period of time (walking, running, standing still, etc.) detected on video frames. The output of our system is a set of recognised long-term behaviours (fighting, meeting, leaving an object, collapsing, walking, etc.), which are pre-defined temporal combinations of short-term behaviours. The definition of a long-term behaviour, including the temporal constraints on the short-term behaviours that, if satisfied, lead to its recognition, is expressed in the Event Calculus. We present experimental results on videos with several humans and objects, and with temporally overlapping and repetitive behaviours. Moreover, we describe how machine learning techniques may be employed to automatically develop long-term behaviour definitions.
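
A toy recogniser in the spirit described above: a long-term behaviour defined as a temporal combination of time-stamped short-term behaviours. The actual system expresses such rules in the Event Calculus; this Python sketch only mirrors the idea, and the rule body and predicate names are invented for illustration.

```python
# Short-term behaviours as (behaviour, entity, timestamp) tuples.
def recognise_leaving_object(events, max_delay=10):
    """Toy rule for 'leaving_object': some object 'appears' at time t,
    another entity was 'active' within max_delay of t, and some entity
    other than the object is 'walking' shortly after t."""
    hits = []
    for (b1, obj, t1) in events:
        if b1 != "appears":
            continue
        was_active = any(b == "active" and e != obj and abs(t - t1) <= max_delay
                         for (b, e, t) in events)
        walks_away = any(b == "walking" and e != obj and 0 < t - t1 <= max_delay
                         for (b, e, t) in events)
        if was_active and walks_away:
            hits.append((obj, t1))
    return hits

events = [
    ("active", "person1", 3),
    ("appears", "bag1", 5),
    ("walking", "person1", 9),
]
# recognise_leaving_object(events) yields [("bag1", 5)]
```

The Event Calculus version of such a rule additionally tracks fluents (properties that hold over intervals) rather than scanning flat tuples, but the temporal-constraint structure is the same.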

Exploring Behavior Analysis in Video Surveillance Applications

International Journal of Computer Applications, 2014

Video surveillance is currently one of the most active research topics in computer vision, with a wide spectrum of promising public safety and security applications. As the number of cameras exceeds the capability of human operators to monitor them, traditional passive video surveillance is proving ineffective, and the move to intelligent visual surveillance is inevitable. Intelligent visual surveillance aims to detect, recognize, and track certain objects from image sequences automatically, and more generally to understand and describe object behaviors. Many researchers have contributed to the field of automated video surveillance through detection, classification, and tracking algorithms. Despite recent progress in computer vision and related areas, major technical challenges remain before reliable automated video surveillance can be realized. Recently, the problem of analyzing behavior in videos has been the focus of several researchers' efforts. It aims to analyze and interpret individual behaviors and the interactions between the objects found in a scene, so as to obtain a description of what is happening in a monitored area and then take appropriate action based on that interpretation. In this paper, we survey behavior analysis work in video surveillance and compare the performance of state-of-the-art algorithms on different datasets. Moreover, useful datasets are analyzed in order to help initiate research projects.

Visual surveillance monitoring and watching

Lecture Notes in Computer Science, 1996

This paper describes the development of computational understanding for the surveillance of moving objects and their interactions in real-world situations. Understanding the activity of moving objects starts with tracking objects in an image sequence, but this is just the beginning. The objective of this work is to go further and form conceptual descriptions that capture the dynamic interactions of objects in a meaningful way. The computational approach uses results from the VIEWS project. The issues involved in extending computational vision to address high-level vision are described in the context of a surveillance system. We describe two systems: a passive architecture based on "event reasoning", i.e. the identification, selection, and composition of behavioural primitives; and an active architecture based on "task-level control", i.e. the guidance of the system to comply with a given surveillance task.