
A novel interpolation method for 3D view synthesis

In this paper, a novel intermediate view synthesis algorithm for 3D video rendering is presented. The proposal is based on representing the pixels to be warped as intervals of finite length. The warping, hole-filling, and merging stages of the view synthesis procedure can be recast in this notation, with significant advantages in algorithm complexity, memory requirements, and image synthesis quality. A novel interpolation rule, based on a foreground and a background weight associated with each warped pixel, is proposed and compared with linear interpolation based on pixel distance. The experimental results show that the designed technique yields a noticeable gain in rendered image quality while halving the execution time with respect to the MPEG view synthesis reference software.
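The abstract does not spell out the weighting rule; the following is a minimal single-channel sketch, under the assumption that the foreground/background weights are derived from the warped depth (nearer samples dominate), contrasted against the usual distance-based linear blend. The names and the exponential weighting are illustrative, not the paper's formula.

```python
import numpy as np

def linear_blend(left, right, alpha):
    """Classic rule: weight each warped view by the normalized distance
    of the virtual camera to the two reference cameras (single channel)."""
    return (1.0 - alpha) * left + alpha * right

def depth_weighted_blend(left, right, depth_l, depth_r, alpha, k=5.0):
    """Hypothetical foreground/background weighting: on top of the
    distance term, favor the contribution whose warped depth is closer
    to the camera, so foreground texture is not washed out by background
    pixels that warp to the same location.  k is an illustrative gain."""
    fg_l = np.exp(-k * depth_l)   # smaller depth -> larger foreground score
    fg_r = np.exp(-k * depth_r)
    w_l = (1.0 - alpha) * fg_l
    w_r = alpha * fg_r
    return (w_l * left + w_r * right) / (w_l + w_r + 1e-12)
```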

View Synthesis for Advanced 3D Video Systems

EURASIP Journal on Image and Video Processing, 2008

Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion-parallax impression. However, the tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions; these are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo and multiscopic displays are presented, which proves the suitability of our approach for advanced 3DV systems.
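For readers unfamiliar with image-based 3D warping, the core step such layered approaches build on can be sketched as follows, assuming rectified cameras and the common 8-bit inverse-depth convention for MVD test material. This is a generic textbook warp with a z-buffer, not the authors' implementation.

```python
import numpy as np

def warp_horizontal(texture, depth, f, baseline, z_near, z_far):
    """Per-pixel 3D warp for rectified cameras: each pixel shifts by its
    disparity d = f * baseline / Z.  texture is single-channel (h, w);
    depth is an 8-bit inverse-depth map.  The sign of the shift depends
    on the direction of the virtual camera offset."""
    h, w = depth.shape
    # Recover metric depth Z from the quantized inverse-depth map.
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(f * baseline / z).astype(np.int64)

    warped = np.zeros_like(texture)
    z_buffer = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]
            if 0 <= xt < w and z[y, x] < z_buffer[y, xt]:
                z_buffer[y, xt] = z[y, x]      # keep the nearest surface
                warped[y, xt] = texture[y, x]
    return warped, np.isinf(z_buffer)          # texture + disocclusion mask
```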

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

ETRI Journal, 2014

Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multiview video plus depth format. In 3D video applications such as multiview and free-viewpoint applications, synthesized views are generated using coded texture video and coded depth video. Such synthesized views can be distorted by quantization noise and by inaccuracy of the 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, two methods are proposed: an edge noise suppression filtering process that preserves the edges of the depth video, and a total variation approach to maximum a posteriori probability estimation that reduces the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without post-processing.
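The exact MAP formulation is not given in the abstract; a textbook smoothed-total-variation denoiser of the kind the texture-side method builds on might look like this (the parameters `lam`, `step`, and `iters` are illustrative):

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, iters=50, eps=1e-6):
    """MAP estimate with a total-variation prior: minimize
    0.5 * ||u - img||^2 + lam * TV(u) by plain gradient descent,
    using the smoothed TV gradient  (u - img) - lam * div(grad u / |grad u|)."""
    u = img.astype(np.float64).copy()
    for _ in range(iters):
        # Forward differences approximate the image gradient.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # Divergence of the normalized gradient (curvature term).
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u
```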

View Interpolation for Medical Images on Autostereoscopic Displays

IEEE Transactions on Circuits and Systems for Video Technology, 2012

We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than in the consumer case. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views enhanced with a depth signal per view. We propose an efficient view interpolation and rendering algorithm at the receiver side based on a texture-plus-depth data representation, which can operate with a limited number of views. We study the main artifacts that occur during rendering, namely occlusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio (PSNR) for rendered texture and depth, as well as the number of disoccluded pixels, as a function of the angle between surrounding cameras.
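The reported metrics reduce to two standard quantities, sketched below; the hole mask is assumed to come from a prior warping step such as the one sketched earlier.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference view and a rendered
    view, the standard definition behind texture/depth quality curves."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def disoccluded_fraction(hole_mask):
    """Fraction of target pixels with no source in either surrounding
    camera, i.e. the disocclusion area as a share of the image."""
    return float(np.count_nonzero(hole_mask)) / hole_mask.size
```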

High Efficiency 3D Video Coding Using New Tools Based on View Synthesis

IEEE Transactions on Image Processing, 2013

We propose a new coding technology for 3D video represented by multiple views and the respective depth maps. The proposed technology is demonstrated as an extension of the recently developed High Efficiency Video Coding (HEVC). One base view is compressed into a standard bitstream (as in HEVC). The remaining views and the depth maps are compressed using new coding tools that mostly rely on view synthesis. In the decoder, those views and the depth maps are derived via synthesis in the 3D space from the decoded base view and from data corresponding to small disoccluded regions. The shapes and locations of those disoccluded regions can be derived by the decoder without any side information being transmitted. In order to achieve high compression efficiency, we propose several new tools, such as Depth-Based Motion Prediction, Joint High Frequency Layer Coding, Consistent Depth Representation, and Nonlinear Depth Representation. The experiments show high compression efficiency of the proposed technology: the bitrate needed for transmission of even two side views with depth maps is mostly below 50% of the bitrate for a single-view video.
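One way to picture the decoder-side derivation of the disoccluded regions, assuming a simple rectified-camera warp of the decoded base-view depth (a hypothetical helper, not the actual 3D-HEVC procedure):

```python
import numpy as np

def disocclusion_mask(base_depth, f, baseline, z_near, z_far):
    """Forward-warp only the *decoded* base-view depth to the side-view
    position and mark every target pixel that receives no sample.
    Because the encoder can run the same procedure on the same decoded
    data, the shapes and locations of the holes need no side information."""
    h, w = base_depth.shape
    z = 1.0 / (base_depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(f * baseline / z).astype(np.int64)
    hit = np.zeros((h, w), dtype=bool)
    for y in range(h):
        xt = np.arange(w) - disparity[y]
        valid = (xt >= 0) & (xt < w)
        hit[y, xt[valid]] = True
    return ~hit   # True where the side view must be coded explicitly
```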

Partial Depth Image Based Re-Rendering for Synthesized View Distortion Computation

IEEE Transactions on Circuits and Systems for Video Technology

3D video systems transmit depth maps in order to render synthesized views (SVs) at a receiver. To anticipate this purpose when processing a depth map, a sender-side depth processing algorithm (DPA), e.g. a depth encoder, can also render the SVs, compute their SV distortion (SVD), and adapt to it. This requires a low-complexity algorithm, as computational resources are usually limited. We propose such an algorithm in this paper. First, we discuss a measure that relates a depth change to an SVD change using rendering. Then, we present an optimized process combining basic rendering steps such as warping, occlusion handling, interpolation, hole filling, and blending. Furthermore, we analyze which parts of an SV are affected by a depth change and modify the process to re-render only those parts. The resulting algorithm is significantly less complex than an unoptimized rendering-based variant and quantifies the SVD more accurately than existing estimation methods. The algorithm is used by the 3D-HEVC reference software encoder as the main method for distortion computation and can also be used by other DPAs.
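The core idea of re-rendering only the affected parts can be sketched for a single image row as follows. Here `render_row` is a hypothetical stand-in for the full rendering chain (warp, interpolate, blend), and for brevity the sketch renders the whole row and compares only the affected interval; a real implementation would re-render just that interval.

```python
import numpy as np

def svd_change(render_row, texture_row, depth_row, depth_row_new,
               ref_row, x0, x1, max_disp):
    """A depth change in [x0, x1) can move samples by at most the maximum
    disparity, so only the slice [x0 - max_disp, x1 + max_disp) of the
    synthesized view can differ.  SVD is measured here as SSD against a
    reference synthesized view."""
    lo = max(0, x0 - max_disp)
    hi = min(len(depth_row), x1 + max_disp)
    old = render_row(texture_row, depth_row)[lo:hi]
    new = render_row(texture_row, depth_row_new)[lo:hi]
    ref = ref_row[lo:hi].astype(np.float64)
    return np.sum((new - ref) ** 2) - np.sum((old - ref) ** 2)
```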

Image-Based Rendering and Synthesis: Technological Advances and Challenges

IEEE Signal Processing Magazine, 2007

Multiview imaging (MVI) has attracted considerable attention recently due to its increasingly wide range of applications and the decreasing cost of digital cameras. This opens up many new and interesting research topics and applications, such as virtual view synthesis for three-dimensional (3-D) television (3DTV) and entertainment, high-performance imaging, video processing and analysis for surveillance, distance learning, industrial inspection, etc.

3DAV exploration of video-based rendering technology in MPEG

IEEE Transactions on Circuits and Systems for Video Technology, 2004

New kinds of media are emerging that extend the functionality of available technology. The growth of immersive recording technologies has led to video-based rendering systems for photographing and reproducing environments in motion. This lends itself to new forms of interactivity for the viewer, including the ability to explore a photographic scene and interact with its features. The three-dimensional (3-D) qualities of objects in the scene can be extracted by analysis techniques and displayed by the use of stereo vision. The data types and image bandwidth needed for this type of media experience may require especially efficient formats for representation, coding, and transmission. An overview is presented here of the MPEG activity exploring the need for standardization in this area to support these new applications, under the name of 3DAV (for 3-D audio-visual). As an example, a detailed solution for omnidirectional video is presented as one of the application scenarios in 3DAV.

Compression and interpolation of 3D stereoscopic and multiview video

Stereoscopic Displays and Virtual Reality Systems IV, 1997

Compression and interpolation each require the ability, given part of an image or part of a collection or stream of images, to predict other parts. Compression is achieved by transmitting part of the imagery along with instructions for predicting the rest of it; of course, the instructions are usually much shorter than the unsent data. Interpolation is just a matter of predicting part of the way between two extreme images; however, whereas in compression the original image is known at the encoder, so the residual can be calculated, compressed, and transmitted, in interpolation the actual intermediate image is not known, so it is not possible to improve the final image quality by adding back the residual image. Practical 3D-video compression methods typically use a system with four modules: (1) coding one of the streams (the main stream) using a conventional method (e.g., MPEG), (2) calculating the disparity map(s) between corresponding points in the main stream and the auxiliary stream(s), (3) coding the disparity maps, and (4) coding the residuals. It is natural and usually advantageous to integrate motion compensation with the disparity calculation and coding. The efficient coding and transmission of the residuals is usually the only practical way to handle occlusions, and the ultimate performance of end-to-end systems is usually dominated by the cost of this coding. In this paper we summarize the background principles, explain the innovative features of our implementation, and provide quantitative measures of component and system performance.
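Module (2) and the residual of module (4) can be illustrated in a few lines, assuming a dense per-pixel horizontal disparity map (a simplification of the paper's system):

```python
import numpy as np

def predict_aux(main, disparity):
    """Predict the auxiliary view by sampling the main view at
    disparity-shifted positions; disparity is an integer (h, w) map.
    Out-of-range samples fall back to the nearest column."""
    h, w = main.shape
    xs = np.clip(np.arange(w)[None, :] + disparity, 0, w - 1)
    return np.take_along_axis(main, xs, axis=1)

# Encoder side: residual = aux - predict_aux(main, disparity)   (coded, sent)
# Decoder side: aux_hat  = predict_aux(main, disparity) + residual_decoded
```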

The effects of multiview depth video compression on multiview rendering

Signal Processing: Image Communication, 2009

This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches to depth coding are compared: H.264/MVC, which uses temporal and inter-view reference images for efficient prediction, and a novel platelet-based coding algorithm adapted to the special characteristics of depth images. Since depth images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance; second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264 due to improved preservation of sharp edges. Therefore, depth coding needs to be evaluated with respect to geometry distortions.
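The mesh-variation measurement reduces to a symmetric Hausdorff distance between surfaces back-projected from the original and decoded depth maps; a sketch using SciPy's k-d tree, with pinhole parameters `fx, fy, cx, cy` assumed known:

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, fx, fy, cx, cy):
    """Lift a metric depth map to 3D points with a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 3D point sets: the larger
    of the two directed max-of-nearest-neighbor distances."""
    d_ab = cKDTree(points_b).query(points_a)[0].max()
    d_ba = cKDTree(points_a).query(points_b)[0].max()
    return max(d_ab, d_ba)
```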