Synthetic movies for computer vision applications

A Virtual Vision Simulator for Camera Networks

Virtual Vision advocates developing visually and behaviorally realistic 3D synthetic environments to serve the needs of computer vision research. Virtual vision is especially well-suited to studying large-scale camera networks. A virtual vision simulator capable of generating "realistic" synthetic imagery from real-life scenes, involving pedestrians and other objects, is the sine qua non of carrying out virtual vision research. Here we develop a distributed, customizable virtual vision simulator capable of simulating pedestrian traffic in a variety of 3D environments. Virtual cameras deployed in this synthetic environment generate imagery using state-of-the-art computer graphics techniques, boasting realistic lighting effects, shadows, etc. The synthetic imagery is fed into a visual analysis pipeline that currently supports pedestrian detection and tracking. The results of this analysis can then be used for subsequent processing, such as camera control, coordination, and handoff. It is important to bear in mind that our visual analysis pipeline is designed to handle real-world imagery without any modifications. Consequently, it closely mimics the performance of visual analysis routines that one might deploy on physical cameras. Our virtual vision simulator is realized as a collection of modules that communicate with each other over the network. Consequently, we can deploy our simulator over a network of computers, allowing us to simulate much larger camera networks and much more complex scenes than is otherwise possible.
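The module-based, networked design described above can be illustrated with a minimal sketch. This is our own illustration, not the paper's actual protocol: the names (`analysis_server`, `run_demo`) and the newline-delimited JSON message format are assumptions. A camera module streams per-frame pedestrian detections to an analysis module over TCP, so modules can run on separate machines.

```python
import json
import socket
import threading

def analysis_server(server_sock):
    """Analysis module: receive newline-delimited JSON frame messages from a
    camera module and return the per-frame pedestrian detection counts."""
    conn, _ = server_sock.accept()
    counts = []
    with conn, conn.makefile("r") as stream:
        for line in stream:
            msg = json.loads(line)
            counts.append(len(msg["detections"]))
    return counts

def run_demo():
    """Wire one camera module to one analysis module over local TCP."""
    server_sock = socket.socket()
    server_sock.bind(("127.0.0.1", 0))   # any free port
    server_sock.listen(1)
    port = server_sock.getsockname()[1]

    results = []
    worker = threading.Thread(
        target=lambda: results.extend(analysis_server(server_sock)))
    worker.start()

    # Camera module: one JSON message per rendered frame, listing the
    # (x, y) positions of detected pedestrians.
    with socket.create_connection(("127.0.0.1", port)) as cam:
        for frame_id, dets in enumerate([[(10, 20)], [(10, 22), (50, 60)]]):
            payload = json.dumps({"frame": frame_id, "detections": dets})
            cam.sendall((payload + "\n").encode())
    worker.join()
    server_sock.close()
    return results
```

Because the transport is plain TCP, the same camera module could just as easily connect to an analysis module on a different host, which is what makes simulating larger networks across multiple computers straightforward.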

Real-time rendering system of moving objects

Proceedings IEEE Workshop on Multi-View Modeling and Analysis of Visual Scenes (MVIEW'99), 1999

We have been developing a system named "mutual tele-existence" which allows for face-to-face communication between remote users. Although image-based rendering (IBR) is suitable for rendering human figures with complex geometry, conventional IBR techniques cannot readily be applied to our system: because most IBR techniques involve time-consuming processes, they cannot capture the source images and simultaneously render the destination images. In this paper, we propose a novel method focusing on real-time rendering. Moreover, we introduce the equivalent depth of field to measure the fidelity of the synthesized image; if the object is within this range, accurate rendering is guaranteed. We then introduce a prototype machine that has 12 synchronized cameras on a linear actuator. Finally, we present some experimental results from our prototype machine.
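The paper's rendering method is not reproduced here, but one basic operation in any such camera-array setup can be sketched: picking the source camera closest to the object along the actuator. The function name and the assumption that cameras are indexed by their position along the linear actuator are ours, not the paper's.

```python
def nearest_camera(object_x, camera_xs):
    """Return the index of the camera whose position along the linear
    actuator is closest to the object's x position."""
    return min(range(len(camera_xs)), key=lambda i: abs(camera_xs[i] - object_x))
```

A real-time IBR pipeline would typically blend the two nearest views rather than switch hard between them, but camera selection of this kind is the first step either way.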

Using computer vision to simulate the motion of virtual agents

Computer Animation and Virtual Worlds, 2007

In this paper, we propose a new model to simulate the movement of virtual humans based on trajectories captured automatically from filmed video sequences. These trajectories are grouped into similar classes using an unsupervised clustering algorithm, and an extrapolated velocity field is generated for each class. A physically-based simulator is then used to animate virtual humans, aiming to reproduce the trajectories fed to the algorithm while avoiding collisions with other agents. The proposed approach provides an automatic way to reproduce the motion of real people in a virtual environment, allowing the user to change the number of simulated agents while keeping the same goals observed in the filmed video.
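The velocity-field idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it skips the unsupervised clustering and the physically-based simulator, and the "field" here just averages observed per-step displacements into grid cells instead of performing a proper extrapolation.

```python
def build_velocity_field(trajectories, cell=1.0):
    """Average observed per-step displacements of filmed trajectories
    into a sparse grid of mean velocities, keyed by cell index."""
    field = {}
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            key = (int(x0 // cell), int(y0 // cell))
            vx, vy, n = field.get(key, (0.0, 0.0, 0))
            field[key] = (vx + (x1 - x0), vy + (y1 - y0), n + 1)
    return {k: (vx / n, vy / n) for k, (vx, vy, n) in field.items()}

def step_agent(pos, field, cell=1.0, dt=1.0):
    """Advance a simulated agent one step along the field
    (zero velocity where no trajectory was observed)."""
    key = (int(pos[0] // cell), int(pos[1] // cell))
    vx, vy = field.get(key, (0.0, 0.0))
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```

Because new agents only sample the field, the number of simulated agents can differ from the number of filmed pedestrians while still following the same observed goals, which is the point the abstract makes.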

Creation and animation of computer-generated images combined with a camera and smart graphics card

Microprocessors and Microsystems, 1991

The system described is used in teleoperated robotics to give the human operator a visual aid for perceiving the remote scene when it is viewed indirectly through a video camera. The system enables graphic aids to be superimposed on the video image. These visual enhancements are based on a 3-D reconstruction of the imaged scene. The computer-generated image of moving objects is animated in real time using sensor data fed back from the task site, measuring the displacements of these objects. Matching of the video and computer-generated images is carried out. The hardware structure consists of VME bus modules. It is a multiprocessor device comprising a master CPU, a data acquisition card, and a slave CPU with a smart graphics card allowing analogue mixing of the computer-generated and video images. It is a work-oriented graphics card, configured according to the application's specifications.

Compositing computer graphics and real world video sequences

Computer Networks and ISDN Systems, 1998

This paper describes new techniques for compositing computer-generated images and real-world video sequences obtained using a camcorder. The methods described here can be used to produce animations of computer-generated images against an accurate real background. Such animations can be applied, for example, in environmental assessment and in pre-evaluating the visual impact of large-scale constructions. The main advantage of the proposed methods is that, in order to track the camera position and merge computer graphics with real images, only the frames from the video sequence are analyzed, without the costly sensors usually used to detect camera movement. The problems of the transition from analog images, captured by a camcorder, to the digital pictures used in computer graphics are also discussed. The proposed algorithms may be used directly in computer graphics applications such as multimedia, augmented reality, telepresence, visual simulators, and all Internet/WWW applications requiring those features.
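Once the camera pose has been recovered from the frames, merging the rendered CG pixels with the video frame reduces, in the simplest case, to the standard "over" compositing operator. This is a generic sketch of that operator, not the paper's specific pipeline.

```python
def composite_over(cg_pixel, alpha, video_pixel):
    """Standard 'over' compositing: blend a CG pixel with coverage `alpha`
    onto the corresponding background video pixel (per RGB channel)."""
    return tuple(round(c * alpha + v * (1 - alpha))
                 for c, v in zip(cg_pixel, video_pixel))
```

In a full system this blend runs per pixel over the rendered frame, with `alpha` coming from the CG renderer's coverage/matte channel so the real background shows through wherever no virtual geometry was drawn.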

An image generation sub-system for a realistic driving simulator

1999

This paper presents a description of the image generation sub-system developed to provide realistic visual feedback in interactive visual simulation with large scene databases. The developed image generator applies all the standard state-of-the-art image generation algorithms aimed at real-time interactive simulation; some of these important algorithms are also explained in this document. In addition, some innovative optimization techniques, such as hierarchical back-face rejection of objects, visibility preprocessing, and automatic optimization of levels of detail, are developed and detailed in this paper. These techniques allow better use of any image generation system and significantly improve the visualization of huge scene databases, even on high-end graphics architectures.
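Level-of-detail selection, one of the optimizations mentioned, is conventionally driven by viewer distance: nearby objects get the finest model, distant ones a coarser one. A minimal sketch of that standard scheme follows; the thresholds are arbitrary placeholders, not values from the paper.

```python
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Pick a level-of-detail index from viewer distance.
    0 = finest model; past the last threshold, return the coarsest
    level (where a system might substitute an impostor or cull)."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)
```

The "automatic optimization" the abstract refers to would tune such thresholds per object from geometric error and screen coverage rather than fixing them by hand.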

A Realistic Video Avatar System for Networked Virtual Environments

2002

With the advancements in collaborative virtual reality applications there is a need for representing users with a higher degree of realism for better immersion. Representing users with facial animation in an interactive collaborative virtual environment is a daunting task. This paper proposes an avatar system for a realistic representation of users. In working towards this goal, this paper will present a technique for head model reconstruction in tracked environments, which is rendered by view dependent texture mapping of video. The key feature of the proposed system is that it takes advantage of the tracking information available in a VR system for the entire process.

Production and playback of human figure motion for visual simulation

We describe a system for off-line production and real-time playback of motion for articulated human figures in 3D virtual environments. The key notions are (1) the logical storage of full-body motion in posture graphs, which provides a simple motion access method for playback, and (2) mapping the motions of higher-DOF figures to lower-DOF figures using slaving, to provide human models at several levels of detail, both in geometry and articulation, for later playback. We present our system in the context of a simple problem: animating human figures in a distributed simulation, using DIS protocols for communicating the human state information. We also discuss several related techniques for real-time animation of articulated figures in visual simulation.
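The two key notions, posture-graph playback and slaving high-DOF motion onto lower-DOF figures, can be sketched as follows. The data layout and names are our assumptions for illustration, not the paper's representation: each graph node stores a full-body posture and a single successor, whereas a real posture graph would offer several successors and interpolate between postures.

```python
def playback(graph, start, steps):
    """Walk a posture graph from `start`, emitting each stored
    full-body posture in order. graph: node -> (posture, next_node)."""
    node, out = start, []
    for _ in range(steps):
        posture, node = graph[node]
        out.append(posture)
    return out

def slave_to_lower_dof(posture, keep):
    """Map a high-DOF posture onto a lower-DOF figure by keeping
    only the joints the simpler model articulates."""
    return {joint: posture[joint] for joint in keep}
```

Storing full-body postures once and deriving the reduced-DOF versions by slaving is what lets the same motion drive models at several levels of detail during playback.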

Immersive Simulation for Computer Vision

2007

Synthetic imagery has often been considered unsuitable for demonstrating the performance of vision algorithms and systems. We argue that, despite many remaining difficulties, simulation and computer graphics are at a point today that makes them extremely useful for evaluation and training, even for complex outdoor applications. This is particularly valuable for autonomous and robotic applications, where the lack of suitable training data and ground-truth information is a severe bottleneck. Extensive testing in a simulated environment should become an integral part of the systems development and evaluation process, to reduce the possibility of failure in the real world. We describe ongoing efforts towards the development of an "Immersive Perception Simulator" and discuss some of the specific problems involved.