Innovative Tools for 3D Cinema Production
Related papers
Converting 2D Video to 3D: An Efficient Path to a 3D Experience
IEEE Multimedia, 2000
Wide-scale deployment of 3D video technologies continues to experience rapid growth in such high-visibility areas as cinema, TV, and mobile devices. Of course, the visualization of 3D videos is actually a 4D experience, because three spatial dimensions are perceived as the video changes over the fourth dimension of time. However, because it's common to describe these videos as simply "3D," we shall do the same, and understand that the time dimension is being ignored. So why is 3D suddenly so popular? For many, watching a 3D video allows for a highly realistic and immersive perception of dynamic scenes, with more deeply engaging experiences, as compared to traditional 2D video. This, coupled with great advances in 3D technologies and the appearance of the most successful movie of all time in vivid 3D (Avatar), has apparently put 3D video production on the map for good.
Scalable 3D video of dynamic scenes
The Visual Computer, 2005
In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space-time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.
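The step of turning per-camera depth maps into a view-independent, point-based representation can be sketched with a standard pinhole back-projection. This is a minimal NumPy illustration of the general idea, not the paper's pipeline; the intrinsics (fx, fy, cx, cy) and the 2x2 toy depth map are assumptions of ours:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into camera-space 3D points
    with a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (N, 3) point set and drop invalid (non-positive) depths.
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a 2x2 depth map with every pixel 1 m from the camera.
pts = depth_to_points(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Merging such point sets from several calibrated bricks into one cloud is what enables the view-independent editing and splatting the abstract describes.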
RE@CT: Immersive Production and Delivery of Interactive 3D Content
This paper describes the aims and concepts of the FP7 RE@CT project. Building upon the latest advances in 3D capture and free-viewpoint video, RE@CT aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple-camera studio. The key innovation is the development of methods for analysis and representation of 3D video to allow reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video-quality appearance and motion.
State-of-the-Art Motion Estimation in the Context of 3D TV
2013
Progress in image sensors and computation power has fueled studies to improve acquisition, processing, and analysis of 3D streams along with 3D scene/object reconstruction. The role of motion compensation/motion estimation (MCME) in 3D TV, from the capture end to the end user, is investigated in this chapter. Motion vectors (MVs) are closely related to the concept of disparities, and they can help improve dynamic scene acquisition, content creation, 2D-to-3D conversion, compression coding, decompression/decoding, scene rendering, error concealment, virtual/augmented reality handling, intelligent content retrieval, and displaying. Although there are different 3D shape extraction methods, this chapter focuses mostly on shape-from-motion (SfM) techniques due to their relevance to 3D TV. SfM extraction can restore 3D shape information from single-camera data.
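The motion vectors the chapter builds on are classically estimated by block matching. As a concrete illustration (not the chapter's own algorithm), here is a minimal full-search matcher that minimises the sum of absolute differences (SAD); the toy frames and the small search radius are assumptions of ours:

```python
import numpy as np

def block_match(prev, curr, bx, by, bsize=4, radius=2):
    """Full-search block matching: find the motion vector (dx, dy) that
    minimises the SAD between a block in `curr` and candidates in `prev`."""
    block = curr[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv

# Toy frames: a bright 4x4 patch shifts one pixel to the right between frames.
prev = np.zeros((16, 16), dtype=np.uint8)
prev[4:8, 4:8] = 255
curr = np.zeros((16, 16), dtype=np.uint8)
curr[4:8, 5:9] = 255
mv = block_match(prev, curr, bx=5, by=4)  # MV points back to the match in `prev`
```

In a stereo pair the same SAD search restricted to a horizontal scanline yields a disparity rather than a motion vector, which is exactly the MV/disparity kinship the abstract notes.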
Samera: a scalable and memory-efficient feature extraction algorithm for short 3D video segments
2009
Tele-immersive systems are growing in popularity and sophistication. They generate 3D video content at large scale, which poses challenges for executing data-mining tasks. These tasks include classification of actions, recognising and learning actor movements, and so on. Fundamentally, they require tagging and identifying the features present in tele-immersive 3D videos. We target the problem of 3D feature extraction, a relatively unexplored direction.
Realtime KLT Feature Point Tracking for High Definition Video
2009
1 JOANNEUM RESEARCH, Institute of Information Systems, Steyrergasse 17, 8010 Graz, Austria; 2 Silesian University of Technology, Faculty of Automatic Control and Robotics, Ulica Academicka 2, 44-100 Gliwice, Poland. ABSTRACT: Automatic detection and tracking of feature points is an important part of many computer vision methods. A widely used method is the KLT tracker proposed by Kanade, Lucas and Tomasi. This paper reports work done on porting the KLT tracker to the GPU, using the CUDA technology by NVIDIA. For the feature point detection, we propose to do all steps of the detection process, except the final one (enforcing a minimum distance between feature points), on the GPU. The feature point tracking is done on a multi-resolution image representation to allow tracking of large motion. Each feature point is calculated in parallel on the GPU. We compare the CUDA implementation with the corresponding OpenCV (using SSE and OpenMP) routines in terms of quality and speed…
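At the core of the KLT tracker is a single Lucas-Kanade step: solve a 2x2 normal-equation system built from image gradients for a flow vector (u, v). The sketch below shows that one step in plain NumPy on a synthetic pair (the Gaussian blob and its 1-pixel shift are our own toy inputs, not the paper's data, and the multi-resolution pyramid and GPU parallelism are omitted):

```python
import numpy as np

def lk_flow(prev, curr):
    """One Lucas-Kanade step over a whole window: solve
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] (u, v) = -(sum IxIt, sum IyIt),
    assuming small, uniform motion within the window."""
    ix = np.gradient(prev, axis=1)   # horizontal spatial gradient
    iy = np.gradient(prev, axis=0)   # vertical spatial gradient
    it = curr - prev                 # temporal derivative
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)

# Synthetic pair: a smooth Gaussian blob translated by 1 px in x.
y, x = np.mgrid[0:32, 0:32].astype(float)
prev = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 20.0)
curr = np.exp(-((x - 16) ** 2 + (y - 15) ** 2) / 20.0)
u, v = lk_flow(prev, curr)  # u close to 1, v close to 0
```

The GPU port in the paper evaluates this per-feature system for thousands of windows in parallel, over an image pyramid, which is what makes real-time HD tracking feasible.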
Real-time 2D to 3D video conversion
Journal of Real-Time Image Processing, 2007
We present a real-time implementation of 2D to 3D video conversion using compressed video. In our method, compressed 2D video is analyzed by extracting motion vectors. Using the motion vector maps, depth maps are built for each frame and the frames are segmented to provide object-wise depth ordering. These data are then used to synthesize stereo pairs. 3D video synthesized in this fashion can be viewed using any stereoscopic display. In our implementation, anaglyph projection was selected as the 3D visualization method because it is well suited to standard displays.
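The last stage of such a pipeline, turning a depth proxy into an anaglyph stereo pair, can be sketched as follows. This is a minimal illustration under our own assumptions (depth normalised to [0, 1], nearest-pixel horizontal shifting, a uniform toy frame), not the authors' implementation:

```python
import numpy as np

def anaglyph_from_depth(gray, depth, max_shift=3):
    """Synthesise a red-cyan anaglyph from a grayscale frame and a depth
    proxy in [0, 1] (e.g. normalised motion magnitude). Nearer pixels get
    a larger horizontal disparity between the left and right views."""
    h, w = gray.shape
    shift = np.rint(depth * max_shift).astype(int)
    left = np.zeros_like(gray)
    right = np.zeros_like(gray)
    cols = np.arange(w)
    for row in range(h):
        # Shift each row by its per-pixel disparity (clamped at the borders).
        left[row] = gray[row, np.clip(cols + shift[row], 0, w - 1)]
        right[row] = gray[row, np.clip(cols - shift[row], 0, w - 1)]
    # Red channel from the left view, green/blue (cyan) from the right.
    return np.stack([left, right, right], axis=-1)

# Toy frame: uniform mid-grey, with a "fast-moving" (hence near) left half.
gray = np.full((4, 8), 128, dtype=np.uint8)
depth = np.zeros((4, 8))
depth[:, :4] = 1.0
rgb = anaglyph_from_depth(gray, depth)
```

Viewed through red-cyan glasses, each eye receives one of the shifted views, so the disparity encoded above is perceived as depth on an ordinary 2D display.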
Workshop Report 08w5070 Multi-View and Geometry Processing for 3D Cinematography
By 3D cinematography we refer to techniques to generate 3D models of dynamic scenes from multiple cameras at video frame rates. Recent developments in computer vision and computer graphics, especially in such areas as multiple-view geometry and image-based rendering, have made 3D cinematography possible. Important application areas include production of stereoscopic movies, full 3D animation from multiple videos, special effects for more traditional movies, and broadcasting of multiple-viewpoint television, among others. The aim of this workshop was to bring together scientists and practitioners who have contributed to the mathematical foundations of the field, as well as those who have developed working systems. There were 20 participants from Canada, the United States, Europe and Asia. A total of 20 talks of 30 minutes each were presented during the five-day workshop. A book comprising extended versions of these presentations is currently under production, and will be published by Springer-Verlag in 2010 [3].
Real-time 3D graphics streaming using mpeg-4
In this paper, we consider a real-time MPEG-4 streaming architecture to facilitate remote visualization of large-scale 3D models on thin clients, i.e., hand-held devices with limited computing resources. MPEG-4 serves as a key component to handle the compression, transmission, and visualization of the image sequence rendered by a high-end supercomputer, allowing the synchronization of the data on both the terminal and the server. The MPEG-4 encoding speed is thus the bottleneck of the system; in particular, the motion estimation process takes more than half of the total encoding time. We propose a fast motion estimation algorithm that expedites the MPEG-4 encoding process. Our algorithm utilizes the 3D data available at the server and is able to directly calculate the motion vector on a block basis without having to employ the expensive MPEG motion searching procedure. In addition, our algorithm can be implemented on Graphics Processing Units (GPUs) such that most of the motion estimation process can be done in parallel with the encoding process. Our preliminary results show that the proposed motion estimation is able to significantly speed up the encoding process while maintaining the encoding quality.
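The core idea, replacing the block search with a direct computation when the renderer already knows the 3D geometry, can be illustrated as follows: project the 3D point behind a block in two consecutive frames and take the displacement of its projections as the motion vector. This is our own simplified sketch (pinhole camera, static viewpoint, made-up intrinsics), not the paper's algorithm:

```python
import numpy as np

def project(point, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space 3D point to pixel coordinates."""
    x, y, z = point
    return np.array([f * x / z + cx, f * y / z + cy])

def motion_vector_from_3d(p_prev, p_curr):
    """Motion vector for a block anchored on a known 3D point: simply the
    displacement of its projection between two frames; no block search."""
    return project(p_curr) - project(p_prev)

# A point 2 m in front of the camera moves 0.1 m to the right
# between frames, giving a purely horizontal motion vector.
mv = motion_vector_from_3d(np.array([0.0, 0.0, 2.0]),
                           np.array([0.1, 0.0, 2.0]))
```

Because each block's vector is an independent projection, the whole MV field can be evaluated in parallel on the GPU, which is the source of the speed-up the abstract reports.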