Scalable Video Coding Research Papers
In video coding, it is commonly accepted that encoding parameters such as the quantization step size influence the perceived quality. It is also sometimes assumed that, for given encoding parameters, the perceived quality does not change significantly with the encoded source content. In this paper, we present the outcomes of two subjective video quality assessment experiments in the context of Scalable Video Coding. We encoded a large set of video sequences under a group of constant-quality scenarios based on two spatially scalable layers. The first experiment explores the relation between a wide range of quantization parameters for each layer and the perceived quality, while the second experiment applies a subset of the encoding scenarios to a large number of video sequences. The two experiments are aligned on a common scale using a set of shared processed video sequences, resulting in a database containing the subjective scores for 60 different sources combined with 20 SVC scenarios. We provide a detailed analysis of the results of the two experiments, giving clear insight into the relation between the combination of encoding parameters of the scalable layers and the perceived quality, and shedding light on the quality differences that depend on the encoded source content. In an endeavour to analyse these differences, we propose a classification of the sources with regard to their behaviour relative to the average of the other source contents. We use this classification to identify potential factors explaining the differences between source contents.
Video compression plays a vital part in many digital video processing applications, such as digital video transmission, and in services like YouTube and Netflix that would otherwise require large storage space. Video compression technologies reduce and remove redundant video data so that a digital video file can be sent efficiently over a network or stored on computer disks at a reduced data size. This paper proposes video compression using a motion-compensation technique that reduces video data based on motion estimation from one frame to another. Diamond Search (DS) motion estimation is the algorithmic technique employed for encoding the video data. Motion vectors describe a frame in terms of the transformation of a reference frame with respect to the current frame; the reference frame may lie in the past or even in the future. The proposed method reduces the search effort of the compression stage by using the DS algorithm to exploit temporal redundancy in the video sequences. Motion-compensated blocks are further compressed using scalable video compression methods, namely the Adaptive Dual Tree Complex Wavelet Transform (ADT-CWT) and SPIHT. The performance of the proposed methodology is evaluated in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
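To make the block-matching idea concrete, the following is a minimal sketch of the Diamond Search motion-estimation loop in Python. The frame arrays, block size, and SAD cost are generic assumptions for illustration only; the paper's actual encoder integrates this search with ADT-CWT and SPIHT coding of the residual.

```python
import numpy as np

# Large and Small Diamond Search Patterns (offsets around the current centre).
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def sad(ref, cur, y, x, by, bx, bsize):
    """Sum of absolute differences between the current block and a candidate block."""
    h, w = ref.shape
    if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
        return np.inf  # candidate falls outside the reference frame
    block_cur = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    block_ref = ref[y:y + bsize, x:x + bsize].astype(np.int32)
    return np.abs(block_cur - block_ref).sum()

def diamond_search(ref, cur, by, bx, bsize=16):
    """Return the motion vector (dy, dx) for the block at (by, bx) of the current frame."""
    cy, cx = by, bx
    while True:
        costs = [sad(ref, cur, cy + dy, cx + dx, by, bx, bsize) for dy, dx in LDSP]
        best = int(np.argmin(costs))
        if best == 0:          # minimum at the centre: refine with the small pattern
            break
        cy += LDSP[best][0]    # otherwise re-centre the large pattern and repeat
        cx += LDSP[best][1]
    costs = [sad(ref, cur, cy + dy, cx + dx, by, bx, bsize) for dy, dx in SDSP]
    best = int(np.argmin(costs))
    return cy + SDSP[best][0] - by, cx + SDSP[best][1] - bx
```

A full encoder would then form the prediction from the reference block indicated by (dy, dx) and code only the residual.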
We provide an overview of the architecture of today's Internet streaming media delivery networks and describe various problems that such systems pose with regard to video coding. We demonstrate that, based on the distribution model (live or on-demand), the type of network delivery mechanism (unicast versus multicast), and the optimization criteria associated with particular segments of the network (e.g., minimization of distortion for a given connection rate, minimization of traffic in the dedicated delivery network, etc.), it is possible to identify several models of communication that may require different treatment from both source and channel coding perspectives. We explain how some of these problems can be addressed within the conventional framework of temporal motion-compensated, transform-based video compression, supported by appropriate channel-adaptation mechanisms in the client and server components of a streaming media system. Most of these techniques have already been implemented in RealNetworks(R) RealSystem(R) 8 and its RealVideo(R) 8 codec, which we use throughout the paper to illustrate our results.
Degradation of network performance during video transmission may lead to disturbing visual artifacts. Some packets might be lost, corrupted, or delayed, making it impossible to properly decode the video data on time at the receiver. The quality of the error-concealment technique, as well as the spatial and temporal position of the artifacts, has a large impact on the perceived quality after decoding. In this paper, we use the spatial scalability feature of Scalable Video Coding (SVC) for error concealment. This enables the transmission of a lower-resolution video with higher robustness, for example using unequal error protection. Under the assumption that only the higher-resolution video is affected, we evaluated the visual impact of packet losses in a large-scale subjective video quality experiment using the Absolute Category Rating method. The number of impairments, their duration, and the interval between impairments, as well as the quality of the encoded lower-resolution video, are varied systematically. This allows the influence of each factor to be analyzed both independently and jointly.
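As a rough illustration of the concealment idea (not the exact SVC upsampling filter), the sketch below falls back to an interpolated version of the robustly transmitted low-resolution layer whenever the high-resolution frame is lost. The frame arrays and loss flag are hypothetical, and the enhancement resolution is assumed to be an integer multiple of the base resolution.

```python
import numpy as np

def conceal_frame(hi_res_frame, lo_res_frame, lost):
    """Return the frame to display: the received high-resolution frame, or the
    base layer upsampled to full resolution when the enhancement layer was lost
    (plain nearest-neighbour upsampling, for illustration only)."""
    if not lost:
        return hi_res_frame
    fy = hi_res_frame.shape[0] // lo_res_frame.shape[0]
    fx = hi_res_frame.shape[1] // lo_res_frame.shape[1]
    return np.repeat(np.repeat(lo_res_frame, fy, axis=0), fx, axis=1)
```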
In this paper, we describe an FPGA H.264/AVC encoder architecture that operates in real time. To reduce the critical path length and increase throughput, the encoder uses a parallel, pipelined architecture, and all modules have been optimized with respect to area cost. Our design is described in VHDL and synthesized for an Altera Stratix III FPGA. The throughput of the FPGA architecture exceeds 177 million pixels per second at 130 MHz, permitting its use for H.264/AVC encoding of HDTV content.
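As a rough sanity check (assuming a luma-only pixel rate and ignoring chroma and implementation overheads), 177 Mpixel/s comfortably covers common HDTV formats:

```python
# Approximate luma pixel rates for common HDTV formats (pixels per second).
rate_720p60  = 1280 * 720  * 60   #  ~55.3 Mpixel/s
rate_1080p30 = 1920 * 1080 * 30   #  ~62.2 Mpixel/s
rate_1080p60 = 1920 * 1080 * 60   # ~124.4 Mpixel/s
print(all(r < 177e6 for r in (rate_720p60, rate_1080p30, rate_1080p60)))  # True
```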
Data encryption is one of the key information security technologies used for safeguarding multimedia content from unauthorised access and manipulation in end-to-end delivery and access chains. This technology, combined with appropriate cryptographic methods, effectively protects the content against malicious attacks, preserving its authenticity as well as its integrity. While encryption-based security ensures the authorised consumption of ...
H.264 Scalable Video Coding (SVC) is an extension of the H.264 Advanced Video Coding (AVC) standard which provides efficient scalability functionalities on top of the high coding efficiency of H.264/AVC. SVC allows for temporal, spatial, and quality scalability of the output video stream, encoding the video information into an H.264/AVC base layer and a series of enhancement layers which incrementally improve quality, increase spatial resolution, and/or increase frame rate. SVC is particularly suited to mobile TV reception, since the received video quality can adapt to variable reception conditions and heterogeneous receiver capabilities. However, mobile TV Digital Video Broadcasting (DVB) standards such as DVB-H and DVB-SH were designed prior to the introduction of SVC, and therefore the underlying transmission protocols are not optimized for scalable video delivery. In this overview paper, we review recently proposed solutions for adapting SVC streams to the underlying DVB-H/SH protocols, and point out novel technical solutions that are currently under consideration for the next-generation mobile broadcasting standard DVB-NGH.
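The adaptation described above boils down to discarding enhancement-layer NAL units that lie above a target operating point. A minimal sketch of that extraction step is shown below; the NalUnit container is a hypothetical stand-in for a parsed SVC NAL unit (dependency_id, quality_id, and temporal_id are fields of the SVC NAL unit header extension, but the bitstream parsing itself is omitted).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NalUnit:
    dependency_id: int   # spatial layer (D)
    quality_id: int      # quality/SNR layer (Q)
    temporal_id: int     # temporal layer (T)
    payload: bytes

def extract_operating_point(nal_units: List[NalUnit],
                            max_d: int, max_q: int, max_t: int) -> List[NalUnit]:
    """Keep only the NAL units needed for the target (D, Q, T) operating point."""
    return [n for n in nal_units
            if n.dependency_id <= max_d
            and n.quality_id <= max_q
            and n.temporal_id <= max_t]
```

A receiver under poor conditions would keep only a lower operating point, for example extract_operating_point(units, 0, 0, 2).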
We describe a system for multipoint videoconferencing that offers extremely low end-to-end delay, low cost and complexity, and high scalability, alongside standard features associated with high-end solutions such as rate matching and personal video layout. ...
Distributed Video Coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian–Wolf (SW) and Wyner–Ziv (WZ). In this work, a performance analysis of image and video coding schemes based on DVC is presented, addressing temporal, quality and spatial scalability. More specifically, conventional coding is used to obtain a base layer while WZ coding generates the enhancement layers. At the decoder, the base layer is used to construct Side Information (SI) for the DVC decoding ...
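For the temporal case, a common baseline (used here purely as an illustration, not the specific interpolation of the referenced scheme) builds the side information from the two neighbouring conventionally decoded key frames:

```python
import numpy as np

def side_information(prev_key, next_key):
    """Crude side information for a Wyner-Ziv frame: the average of the two
    surrounding key frames. Practical DVC codecs use motion-compensated
    temporal interpolation instead of this plain average."""
    return ((prev_key.astype(np.uint16) + next_key.astype(np.uint16)) // 2).astype(np.uint8)
```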
As the quantization matrix has become an important feature of recent video codecs, an optimized quantization matrix is being considered in the High Efficiency Video Coding (HEVC) standard. This paper describes an entropy encoding scheme that incorporates an optimized quantization matrix, so that a higher compression rate can be achieved with the improved entropy encoding. Experiments are conducted on eight benchmark video sequences, and PSNR is explored for varying data transmission rates. A comparative analysis between the improved entropy encoding (WE-Encoding) and standard entropy encoding is made on the basis of these performance measurements. The simulation results show that the proposed method (WE-OQM) preserves the fidelity of the decoded video sequence far better even though the compression rate is increased. The overall analysis indicates that the proposed method is 35.29% better than the standard encoding and 62.5% better than the WE-Encoding.
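To illustrate the role of a quantization matrix (a simplified model, not the exact HEVC quantization pipeline or the paper's WE-OQM scheme), each transform coefficient is divided by a position-dependent step derived from the matrix, so that perceptually less important frequencies are quantized more coarsely:

```python
import numpy as np

def quantize(coeffs, qmatrix, qstep):
    """Quantize a block of transform coefficients with a per-position
    weighting matrix (16 = neutral weight, larger = coarser quantization)."""
    return np.round(coeffs * 16.0 / (qstep * qmatrix)).astype(np.int32)

def dequantize(levels, qmatrix, qstep):
    """Approximate reconstruction of the coefficients at the decoder."""
    return levels * qstep * qmatrix / 16.0
```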
The fast growth of multimedia applications and the enhanced capacity and computing capabilities of devices require the network infrastructure to manage a number of users with different channel qualities, application requirements, and service constraints. In such a scenario, there is an evident need for a resource scheduling procedure able to guarantee a good level of performance not only on the network side but also on the user side. To this end, this paper introduces a novel approach for multicast resource allocation based on the idea of exploiting a multi-criteria decision method (namely, TOPSIS) designed to simultaneously take into account both provider and user benefits during the spectrum allocation process. In particular, we compare a promising multicast radio resource strategy, subgrouping, under different cost functions: (i) local throughput, (ii) local fairness, and (iii) the subgroup minimum dissatisfaction index. The results, obtained for the delivery of scalable multicast video flows in a Long Term Evolution (LTE) macrocell, demonstrate the effectiveness of the TOPSIS-based radio resource management scheme, which outperforms existing approaches from the literature. Indeed, it succeeds in providing a higher data rate and improved user satisfaction when multicast users experience different levels of channel and service quality.
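TOPSIS itself is a standard multi-criteria decision method: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. A compact sketch follows; the criteria values (throughput, fairness, dissatisfaction) and weights are hypothetical placeholders rather than the paper's actual figures.

```python
import numpy as np

def topsis_rank(decision, weights, benefit):
    """Rank alternatives (rows of `decision`) over criteria (columns).
    `benefit[j]` is True for criteria to maximise, False for criteria to minimise."""
    norm = decision / np.linalg.norm(decision, axis=0)         # vector normalisation
    weighted = norm * weights                                   # apply criteria weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti  = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)            # distance to ideal
    d_neg = np.linalg.norm(weighted - anti, axis=1)             # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)                               # best alternative first

# Hypothetical example: three subgroup configurations scored on
# (throughput [maximise], fairness [maximise], dissatisfaction [minimise]).
decision = np.array([[30.0, 0.7, 0.2],
                     [25.0, 0.9, 0.1],
                     [35.0, 0.5, 0.4]])
print(topsis_rank(decision, np.array([0.4, 0.3, 0.3]), np.array([True, True, False])))
```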
The high bitrates of High-Definition or 3D services require a huge share of the valuable terrestrial spectrum, especially when targeting wide coverage areas. This paper describes how to provide future services with the state-of-the-art digital terrestrial TV technology DVB-T2 in a flexible and cost-efficient way. The combination of layered media, such as the scalable and 3D extensions of H.264/AVC or the emerging H.265/HEVC format, with the physical layer pipes (PLP) feature of DVB-T2 enables flexible broadcast of services with differentiated protection of the quality layers. This opens up new ways of service provisioning, such as graceful degradation for mobile or fixed reception. This paper shows how existing DVB-T2 and MPEG-2 Transport Stream mechanisms need to be configured to offer such services over DVB-T2, and gives a detailed description of the setup of such services and the involved components.
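By way of a minimal illustration (the parameter values are hypothetical and not taken from the paper), the base layer would be mapped to a PLP with robust modulation and coding, while enhancement layers ride on higher-throughput, less protected PLPs:

```python
# Hypothetical mapping of quality layers to DVB-T2 physical layer pipes.
plp_config = {
    "plp0_base":        {"layer": "base (mobile/SD)",  "modulation": "QPSK",  "code_rate": "1/2"},
    "plp1_enhancement": {"layer": "enhancement (HD)",  "modulation": "64QAM", "code_rate": "3/4"},
}
```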
In this paper we address the problem of efficient layered video streaming over peer-to-peer networks and propose a new receiver-driven streaming mechanism. The main design goal of our new layered video requesting policy is to optimize the overall distribution of video streams in terms of reliability and overhead. Since the layered peer-to-peer streaming problem is NP-hard, we show that the classic approaches widely used in layered P2P streaming systems have limitations, and we propose an optimization technique based on harmony search which aims at increasing the rate of successful data transmissions for the most important video layers, while reducing the protocol overhead and ensuring load balancing among the participating peers. Analytical results demonstrate that our new requesting policy enhances the streaming of layered video over mesh-based peer-to-peer networks and outperforms classic approaches.
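To make the optimization step concrete, the following is a minimal harmony-search sketch for assigning layer requests to neighbouring peers; the fitness function, peer count, and layer count are hypothetical placeholders, not the paper's actual formulation of the reliability/overhead objective.

```python
import random

def harmony_search(n_layers, n_peers, fitness, iters=500, hm_size=10, hmcr=0.9, par=0.3):
    """Search for an assignment of video layers to supplier peers that maximises `fitness`.
    A solution is a list: solution[layer] = index of the peer asked for that layer."""
    memory = [[random.randrange(n_peers) for _ in range(n_layers)] for _ in range(hm_size)]
    scores = [fitness(s) for s in memory]
    for _ in range(iters):
        new = []
        for layer in range(n_layers):
            if random.random() < hmcr:                      # memory consideration
                value = random.choice(memory)[layer]
                if random.random() < par:                   # pitch adjustment
                    value = (value + random.choice([-1, 1])) % n_peers
            else:                                           # random selection
                value = random.randrange(n_peers)
            new.append(value)
        score = fitness(new)
        worst = min(range(hm_size), key=lambda i: scores[i])
        if score > scores[worst]:                           # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = max(range(hm_size), key=lambda i: scores[i])
    return memory[best], scores[best]

# Hypothetical fitness: reward spreading requests across distinct peers, standing in
# for the paper's reliability, overhead, and load-balancing terms.
def example_fitness(solution):
    return len(set(solution))

print(harmony_search(n_layers=4, n_peers=6, fitness=example_fitness))
```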