Eric Dubois | University of Ottawa | Université d'Ottawa
Papers by Eric Dubois
IEEE Signal Processing Letters, 2005
This letter presents a new and simplified derivation of the frequency-domain representation of color images sampled with the Bayer color filter array. Two new demosaicking algorithms based on the frequency-domain representation are described and shown to give excellent results.
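The frequency-domain view of Bayer sampling can be illustrated numerically: the CFA signal decomposes exactly into a baseband luma component plus two chroma components modulated at the high spatial frequencies of the sampling lattice. The sketch below assumes one particular Bayer phase (green on the quincunx lattice, red on odd rows, blue on odd columns) and simply verifies the decomposition with NumPy; it illustrates the representation and does not reproduce the paper's demosaicking algorithms.

```python
import numpy as np

# Hypothetical RGB test image; the Bayer phase below is an assumption.
rng = np.random.default_rng(0)
H, W = 8, 8
R, G, B = rng.random((3, H, W))

n1, n2 = np.indices((H, W))
mG = ((n1 + n2) % 2 == 0)               # green on the quincunx lattice
mR = (n1 % 2 == 1) & (n2 % 2 == 0)      # red on odd rows, even columns
mB = (n1 % 2 == 0) & (n2 % 2 == 1)      # blue on even rows, odd columns

f_cfa = R * mR + G * mG + B * mB        # the single-channel Bayer mosaic

# Luma at baseband; chromas modulated by (-1)^(n1+n2), (-1)^n1 and (-1)^n2,
# i.e. at the corner and edge frequencies of the sampling lattice.
f_L  = (R + 2 * G + B) / 4
f_C1 = (-R + 2 * G - B) / 4
f_C2 = (R - B) / 4

reconstructed = (f_L
                 + f_C1 * (-1.0) ** (n1 + n2)
                 + f_C2 * ((-1.0) ** n2 - (-1.0) ** n1))
assert np.allclose(f_cfa, reconstructed)   # the decomposition is exact
```

Demosaicking methods built on this representation recover f_L, f_C1 and f_C2 by filtering around the corresponding frequency bands and then invert the 3×3 luma/chroma transform to obtain RGB.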
The goal of this work is to investigate a high-quality, low-bit-rate image sequence coding scheme which takes full advantage of the temporal redundancies. Such schemes will be important for high-compression video coding, such as that studied in MPEG-4. In conventional image sequence coding, a motion-compensated DPCM configuration is used to remove the temporal redundancies. The performance of DPCM for such highly correlated sources at rates below one bps degenerates substantially. We have recently shown that for a highly correlated Gauss-Markov source, practical near rate-distortion function performance is possible, using the entropy-constrained code-excited linear predictive (EC-CELP) quantization scheme. Hence, for low-rate image sequence coding, an EC-CELP configuration can replace the DPCM configuration to quantize the image intensities along motion trajectories and improve the rate-distortion performance. To apply EC-CELP quantization, we first investigate methods of motion trajectory estimation and propose a temporal block motion compensation configuration. Then, the performance advantage of the EC-CELP configuration over conventional DPCM structures is shown.
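The conventional configuration that the abstract contrasts against can be sketched as scalar DPCM along a motion trajectory: each pixel intensity is predicted from the motion-compensated value in the previous frame and only the quantized prediction error is coded. The toy 1-D sketch below, with known integer motion and a uniform quantizer, is an illustrative assumption on all counts; EC-CELP itself (the paper's contribution) is not shown.

```python
import numpy as np

def quantize(x, step):
    """Uniform scalar quantizer (a stand-in for the entropy-coded quantizer)."""
    return step * np.round(x / step)

def mc_dpcm_encode(frames, motion, step=4.0):
    """Motion-compensated DPCM along trajectories (toy version, 1-D rows, integer motion).

    frames : list of 1-D arrays (one scanline per frame, for simplicity)
    motion : list of integer displacements aligning frame t with frame t-1
    Returns the quantized prediction errors and the decoder's reconstruction.
    """
    recon = [quantize(frames[0], step)]          # intra-coded first frame
    errors = [recon[0].copy()]
    for t in range(1, len(frames)):
        pred = np.roll(recon[t - 1], motion[t])  # motion-compensated prediction
        err_q = quantize(frames[t] - pred, step) # code only the residual
        recon.append(pred + err_q)
        errors.append(err_q)
    return errors, recon

# Toy usage: a ramp translating by 2 samples per frame.
base = np.arange(64, dtype=float)
frames = [np.roll(base, 2 * t) for t in range(5)]
errors, recon = mc_dpcm_encode(frames, motion=[0, 2, 2, 2, 2])
print(np.abs(frames[-1] - recon[-1]).max())      # bounded by half the quantizer step
```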
IBM Systems Journal, 1997
In September 1993, the Canadian Institute for Telecommunications Research, in collaboration with the IBM Toronto Laboratory Centre for Advanced Studies, initiated a major project on Broadband Services. The goal of this major project is to provide the software technologies required for the development of distributed multimedia applications. Of particular interest are "presentational" applications where multimedia documents, stored in database servers, are retrieved by remote users over a broadband network. Emphasis is placed on efficiency and service flexibility. By efficiency, we mean the ability to support many users and many multimedia documents. By service flexibility, we mean that the application is able to support a wide range of quality-of-service requirements from the users, adapt to changing network conditions, and support multiple document types. The research program consists of six constituent projects: multimedia data management, continuous-media file server, quality-of-service negotiation and adaptation, scalable video encoding, synchronization of multimedia data, and project integration. These projects are investigated by a multi-disciplinary team from eight institutions across Canada. Multimedia news has been selected as a target application for development, and the results from the various projects have been integrated into a multimedia news prototype. In this paper, the system architecture, research results, and the prototyping effort are presented.
Multidimensional Systems and Signal Processing, 1992
This paper presents the theory of motion-compensated spatiotemporal filtering of time-varying imagery. The properties of motion trajectories and their relation to displacement fields and velocity fields are presented. The constraints that image motion places on the time-varying image in both the spatiotemporal domain and the frequency domain are described, along with the implications of these results on motion-compensated filtering and on sampling. An iterative method for estimating motion which generalizes many pixel-oriented and block-oriented methods is presented. Motion-compensated filtering is then applied to the problems of prediction, interpolation, and smoothing.
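One practical consequence of this theory is that temporal filtering should be applied along motion trajectories rather than at fixed pixel positions. The minimal sketch below assumes known, global integer per-frame displacements (the paper estimates motion iteratively, which is not shown) and averages frames along the trajectory, smoothing noise without blurring moving content.

```python
import numpy as np

def mc_temporal_average(frames, displacements):
    """Average frames along motion trajectories (toy version, global integer motion).

    frames        : list of 2-D arrays
    displacements : list of (dy, dx) shifts that align each frame with the
                    reference (last) frame; (0, 0) for the reference itself.
    """
    aligned = []
    for f, (dy, dx) in zip(frames, displacements):
        # Shift each frame so that corresponding points line up with the reference.
        aligned.append(np.roll(f, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Toy usage: a noisy square translating one pixel per frame to the right.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:16, 8:16] = 1.0
frames, disp = [], []
for t in range(5):
    frames.append(np.roll(clean, shift=(0, t), axis=(0, 1))
                  + 0.2 * rng.standard_normal((32, 32)))
    disp.append((0, 4 - t))   # shift needed to align frame t with the last frame
filtered = mc_temporal_average(frames, disp)   # noise variance reduced ~5x
```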
IEEE Transactions on Image Processing, 2005
This paper presents a new formulation of the regularized image up-sampling problem that incorporates models of the image acquisition and display processes. We give a new analytic perspective that justifies the use of total-variation regularization from a signal-processing point of view, based on an analysis that specifies the requirements of edge-directed filtering. This approach leads to a new data fidelity term that is coupled with a total-variation regularizer to yield our objective function. The objective function is minimized using the level-set method, with two types of motion that interact simultaneously; a suitable choice of these motions leads to a stable solution scheme with a unique minimum. One aspect of the human visual system, perceptual uniformity, is treated in accordance with the linear nature of the data fidelity term. The method was implemented and verified to provide improved results, yielding crisp edges without introducing ringing or other artifacts.
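A generic statement of this kind of objective may help fix ideas; it is written from the abstract alone, so the exact form of the data term and the notation are assumptions rather than the paper's own:

```latex
\min_{u}\; E(u) \;=\; \sum_{i}\bigl((S\,H\,u)_i - g_i\bigr)^{2}
\;+\; \lambda \int_{\Omega} \lvert \nabla u \rvert \, dx
```

Here u is the high-resolution image being estimated, H models the acquisition/display blur, S samples onto the low-resolution grid, g is the observed image, and λ weights the total-variation regularizer against the data fidelity term.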
IEEE Transactions on Computers, 1978
Necessary and sufficient conditions for a direct sum of local rings to support a generalized discrete Fourier transform are derived. In particular, these conditions can be applied to any finite ring. The function O(N) defined by Agarwal and Burrus for transforms over Z_N is extended to any finite ring R as O(R), and it is shown that R supports a length-m discrete Fourier transform if and only if m is a divisor of O(R). This result is applied to the homomorphic images of rings of algebraic integers.
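For the special case R = Z_N, the Agarwal-Burrus bound reduces to O(N) = gcd(p_1 - 1, ..., p_k - 1) over the distinct prime factors of N, so the admissible transform lengths are exactly the divisors of that gcd. The sketch below computes this bound; it covers only Z_N, not the general direct sums of local rings treated in the paper.

```python
from math import gcd
from functools import reduce

def prime_factors(n):
    """Distinct prime factors of n (trial division; fine for illustration)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return sorted(factors)

def O(n):
    """Agarwal-Burrus O(N) for Z_N: gcd of (p - 1) over the distinct primes p | N."""
    return reduce(gcd, (p - 1 for p in prime_factors(n)))

def supports_dft_length(n, m):
    """Z_N supports a length-m DFT iff m divides O(N)."""
    return O(n) % m == 0

# Example: the Fermat-number modulus 257 gives O = 256, so long power-of-two DFTs exist.
print(O(257), supports_dft_length(257, 64))   # 256 True
print(O(105), supports_dft_length(105, 8))    # gcd(2, 4, 6) = 2 -> False
```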
Visual communications over wireless networks require the efficient and robust coding of video signals for transmission over wireless links having time-varying channel capacity. We compare several schemes for encoding video data into two priority streams, thereby enabling the transmission of video data over wireless links to be switched between two bit rates. An H.261 (p × 64) algorithm is modified to implement each candidate scheme. The algorithms are evaluated for a microcellular wireless environment and a clear-channel bit rate of 65 kb/s. Our results show that by combining layering with automatic-repeat-request wireless-link control, almost-wireline visual quality can be achieved.
IEEE Transactions on Circuits and Systems for Video Technology, 1996
We consider the transmission of QCIF resolution (176×144 pixels) video signals over wireless channels at transmission rates of 64 kb/s and below. The bursty nature of the errors on the wireless channel requires careful control of transmission performance without unduly increasing the overhead for error protection. A dual-rate source coder is presented that adaptively selects a coding rate according to the current channel conditions. An automatic repeat request (ARQ) error control technique is employed to retransmit erroneous data frames. The source coding rate is selected based on the occupancy level of the ARQ transmission buffer. Error detection followed by retransmission results in less overhead than forward error correction for the same quality. Simulation results are provided for the statistics of the frame-error bursts of the proposed system over code division multiple access (CDMA) channels with average bit error rates of 10^-3 to 10^-4.
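The rate-control idea described here, picking the lower source-coding rate when the ARQ retransmission buffer fills up, can be sketched as a simple hysteresis rule. The thresholds and rates below are illustrative assumptions, not values from the paper.

```python
def select_coding_rate(buffer_occupancy, current_rate,
                       high_rate=64_000, low_rate=32_000,
                       upper=0.75, lower=0.25):
    """Dual-rate selection driven by ARQ transmission-buffer occupancy.

    buffer_occupancy : fraction of the ARQ buffer in use (0.0 .. 1.0).
    Hysteresis between `lower` and `upper` avoids switching on every frame.
    All numeric values here are illustrative, not taken from the paper.
    """
    if buffer_occupancy >= upper:
        return low_rate          # channel is struggling; back off
    if buffer_occupancy <= lower:
        return high_rate         # buffer draining; resume full quality
    return current_rate          # in between: keep the current rate

# Toy usage
rate = 64_000
for occ in (0.1, 0.5, 0.8, 0.6, 0.2):
    rate = select_coding_rate(occ, rate)
    print(occ, rate)
```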
In this paper, we propose two adaptive interlaced-to-progressive conversion techniques in which the adequacy of the estimated motion vector is evaluated. If the motion vector is unlikely to give a good temporal motion-compensated interpolation result, spatial interpolation is favored or selected to avoid temporal artifacts. In the first proposed technique, called spatio-temporal weighted adaptive interlaced-to-progressive conversion, the interpolated value is a weighted sum of four interpolation filter results: the result of spatial vertical interpolation, the result of spatial directional interpolation using steerable filters, the result of temporal interpolation without motion compensation, and the result of temporal motion-compensated interpolation. The most favored of these four interpolation results receives the highest weight. In the second proposed technique, called similarity adaptive interlaced-to-progressive conversion, spatial directional interpolation using steerable filters is selected if it is likely to yield a better interpolated result than temporal motion-compensated interpolation; the selection is done based on a similarity test. Our subjective viewing showed that these two interlaced-to-progressive conversion techniques correctly identify badly estimated motion vectors and occlusion areas. The use of spatial directional interpolation with steerable filters avoids the loss of image resolution encountered when shift-invariant spatial interpolation is used.
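The combination step of the first technique, a weighted sum of four candidate interpolations with the most trusted candidate weighted highest, can be sketched per missing pixel as below. The per-candidate cost and the soft-max weighting are illustrative assumptions; the paper's actual weight computation, steerable-filter directional interpolation and similarity test are not reproduced.

```python
import numpy as np

def weighted_deinterlace_pixel(candidates, errors, beta=1.0):
    """Combine candidate interpolations for one missing pixel.

    candidates : values from the four interpolators (vertical, directional,
                 non-motion-compensated temporal, motion-compensated temporal)
    errors     : a per-candidate reliability cost (lower = more trusted);
                 how this cost is measured is an assumption here.
    """
    candidates = np.asarray(candidates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    weights = np.exp(-beta * errors)          # favour low-cost candidates
    weights /= weights.sum()
    return float(np.dot(weights, candidates))

# Toy usage: the motion-compensated candidate disagrees badly (e.g. occlusion),
# so its large cost pushes the result towards the spatial candidates.
value = weighted_deinterlace_pixel(
    candidates=[120.0, 122.0, 118.0, 200.0],
    errors=[1.0, 0.5, 2.0, 10.0])
print(value)
```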
The purpose of this paper is to introduce a fast automated white-noise estimation method which gives reliable estimates in images with smooth and textured areas. This method is a block-based method that takes image structure into account and uses a measure other than the variance to determine if a block is homogeneous. It uses no thresholds and automates the way that block-based methods stop the averaging of block variances. The proposed method selects intensity-homogeneous blocks in an image by rejecting blocks with structure using a new structure analyzer. The analyzer is based on high-pass operators and special masks for corners to allow implicit detection of structure and to stabilize the homogeneity estimation. For typical image quality (PSNR of 20-40 dB), the proposed method outperforms other methods significantly, and the worst-case estimation error is 3 dB, which is suitable for real applications such as video surveillance or broadcasts. The method performs well even in images with few smooth areas and in highly noisy images.
IEEE Transactions on Circuits and Systems for Video Technology, 2005
Noise can significantly impact the effectiveness of video processing algorithms. This paper proposes a fast white-noise variance estimation method that is reliable even in images with large textured areas. The method first finds intensity-homogeneous blocks and then estimates the noise variance in these blocks, taking image structure into account. This paper proposes a new measure to determine homogeneous blocks and a new structure analyzer for rejecting blocks with structure. The analyzer is based on high-pass operators and special masks for corners to stabilize the homogeneity estimation. For typical video quality (PSNR of 20-40 dB), the proposed method outperforms other methods significantly, and the worst-case estimation error is 3 dB, which is suitable for real applications such as video broadcasts. The method performs well in both highly noisy and good-quality images. It also works well in images containing few uniform blocks.
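The overall procedure described in these two abstracts, rating blocks by a structure measure, keeping the most homogeneous ones and averaging their variances, can be sketched as follows. The Laplacian-based structure score, block size and selection fraction below are assumptions standing in for the papers' high-pass operators, corner masks and automatic stopping rule.

```python
import numpy as np
from scipy.ndimage import laplace

def estimate_noise_variance(image, block=8, keep_fraction=0.1):
    """Block-based white-noise variance estimate (illustrative sketch).

    Blocks are scored by the energy of a high-pass (Laplacian) response,
    which reacts to edges and texture; the lowest-scoring (most homogeneous)
    blocks are kept and their sample variances are averaged.
    """
    img = np.asarray(image, dtype=float)
    highpass = laplace(img)
    h, w = img.shape
    scores, variances = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y + block, x:x + block]
            hp = highpass[y:y + block, x:x + block]
            scores.append(np.mean(hp ** 2))     # structure measure (assumption)
            variances.append(np.var(blk))
    order = np.argsort(scores)
    n_keep = max(1, int(keep_fraction * len(order)))
    return float(np.mean(np.asarray(variances)[order[:n_keep]]))

# Toy usage: known noise added to a flat-plus-edge image.
rng = np.random.default_rng(2)
clean = np.zeros((128, 128)); clean[:, 64:] = 100.0
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
print(estimate_noise_variance(noisy))    # close to 25 (sigma = 5)
```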
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992
The estimation of 2D motion from spatio-temporally sampled image sequences is discussed, concentrating on the optimization aspect of the problem, formulated through a Bayesian framework based on Markov random field (MRF) models. First, the Maximum A Posteriori Probability (MAP) formulation for motion estimation over discrete and continuous state spaces is reviewed, along with the solution method using simulated annealing (SA). Then, instantaneous 'freezing' is applied to the stochastic algorithms, resulting in well-known deterministic methods. The stochastic algorithms are compared with their deterministic approximations over image sequences with natural data and with synthetic as well as natural motion.
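The "instantaneous freezing" relation between the stochastic and deterministic algorithms can be made concrete through the acceptance rule: simulated annealing accepts an energy-increasing candidate with probability exp(-ΔE/T), and as T goes to zero this collapses to the greedy rule of deterministic relaxation (accept only energy-decreasing moves). The generic sketch below shows just that rule; the MRF energy is left as an abstract callable and the toy usage is not the paper's motion model.

```python
import math
import random

def accept(delta_e, temperature):
    """Metropolis acceptance rule used in simulated annealing.

    With temperature > 0, uphill moves (delta_e > 0) are accepted with
    probability exp(-delta_e / T); with temperature == 0 ("instantaneous
    freezing") only downhill moves survive, i.e. the greedy/ICM rule.
    """
    if delta_e <= 0:
        return True
    if temperature <= 0.0:
        return False
    return random.random() < math.exp(-delta_e / temperature)

def optimize_labels(labels, candidates, energy_delta, temperature, sweeps=10):
    """Generic site-wise relaxation over a label field (sketch, not the paper's model)."""
    for _ in range(sweeps):
        for site in range(len(labels)):
            new_label = random.choice(candidates)
            if accept(energy_delta(labels, site, new_label), temperature):
                labels[site] = new_label
        temperature *= 0.9    # annealing schedule; irrelevant when T == 0
    return labels

# Toy usage: smooth a 1-D label field towards its noisy observations (T = 0, greedy).
obs = [0, 0, 5, 0, 0, 7, 7, 7]

def energy_delta(labels, s, new):
    def local(v):
        e = (v - obs[s]) ** 2                      # data term
        if s > 0:
            e += (v - labels[s - 1]) ** 2          # smoothness with left neighbour
        if s < len(labels) - 1:
            e += (v - labels[s + 1]) ** 2          # smoothness with right neighbour
        return e
    return local(new) - local(labels[s])

result = optimize_labels(list(obs), candidates=list(range(8)),
                         energy_delta=energy_delta, temperature=0.0)
print(result)
```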
IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1995
This paper describes a new approach to the design of multidimensional (M-D) finite-wordlength digital filters with specifications in the frequency and spatial domains. The approach is based on stochastic optimization and extends previous work on finite impulse response (FIR) filters in two ways: by inclusion of spatial constraints and by application to the case of infinite impulse response (IIR) filters. The formulation proposed is based on a multiple-term objective function that, in addition to magnitude constraints, also includes step response, group delay and stability constraints. Our attention to these characteristics stems from the application of such filters to video processing that we are actively pursuing. Since filter coefficients are of finite precision and since the objective function is multivariable, non-differentiable and likely to have multiple minima, we use simulated annealing for optimization. We show numerous examples of the design of practical filters such as channel and luminance/chrominance separation filters used in the NTSC system. We demonstrate the impact of coefficient precision as well as of group delay and step response constraints on filter parameters.
Image and Vision Computing, 1991
The estimation of 2D motion from spatio-temporally sampled image sequences is discussed, concentrating on the optimization aspect of the problem, formulated through a Bayesian framework based on Markov random field (MRF) models. First, the Maximum A Posteriori Probability (MAP) formulation for motion estimation over discrete and continuous state spaces is reviewed, along with the solution method using simulated annealing (SA). Then, instantaneous 'freezing' is applied to the stochastic algorithms, resulting in well-known deterministic methods. The stochastic algorithms are compared with their deterministic approximations over image sequences with natural data and with synthetic as well as natural motion.
IEEE Transactions on Image Processing, 2000
Stereoscopic visualization systems based on liquid crystal shutter (LCS) eyewear and cathode-ray tube (CRT) displays today provide the best overall quality of three-dimensional (3-D) images and therefore have a dominant position in commercial as well as professional markets. Due to the CRT and LCS characteristics, however, such systems suffer from perceptual crosstalk ("shadows") at object boundaries that can reduce, and at times inhibit, the ability to perceive depth. In this paper, we propose a method to reduce such crosstalk. We present a simple model for the intensity leak, we assess the model parameters for a time-sequential LCS/CRT system, and we propose a computationally efficient algorithm to eliminate the crosstalk. Since full crosstalk elimination implies an unacceptable image degradation (reduction of contrast), we study the tradeoff between crosstalk elimination and image contrast. We describe experiments on synthetic and natural stereoscopic images and discuss informal subjective viewing of processed images. Overall, the viewer response has been very positive; 3-D perception of many objects became either much easier or even effortless. Since the proposed algorithm can be easily implemented in real time (only linear scaling and table look-up are needed), we believe that it can be successfully used today in various stereoscopic applications suffering from image crosstalk. This is particularly true for PC-based 3-D viewing, where the algorithm can be executed by the CPU or by an advanced graphics board.
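The basic compensation idea can be sketched as follows: model what each eye perceives as the intended image plus a leakage fraction of the other view, then pre-distort the displayed pair (subtract the predicted leak and lift the black level so values stay displayable), which is exactly why full elimination costs contrast. The symmetric linear leak model and the leak fraction below are illustrative assumptions; the paper's measured per-display parameters and look-up-table implementation are not reproduced.

```python
import numpy as np

def compensate_crosstalk(left, right, leak=0.1):
    """Pre-distort a stereo pair so that leakage from the other eye is cancelled.

    Simple symmetric linear-intensity leak model (an assumption):
        perceived_left = displayed_left + leak * displayed_right
    Scaling by 1/(1+leak) and lifting the black level by leak/(1+leak) keeps
    the displayed values in [0, 1]; the perceived image then becomes
    (1 - leak) * intended + leak, i.e. crosstalk-free but with reduced contrast.
    """
    left = np.asarray(left, dtype=float)     # linear intensities in [0, 1]
    right = np.asarray(right, dtype=float)
    scale = 1.0 / (1.0 + leak)
    offset = leak / (1.0 + leak)
    disp_left = scale * (left - leak * right) + offset
    disp_right = scale * (right - leak * left) + offset
    return disp_left, disp_right

# Quick numerical check of the model: perceived = displayed + leak * other.
L = np.linspace(0, 1, 5)[:, None] * np.ones((1, 5))
R = L.T
dL, dR = compensate_crosstalk(L, R, leak=0.1)
print(np.max(np.abs((dL + 0.1 * dR) - (0.9 * L + 0.1))))   # ~0
```

In practice the per-pixel mapping depends only on the two input intensities, so it can be precomputed as a look-up table, which is consistent with the real-time claim in the abstract.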
IEEE Transactions on Circuits and Systems for Video Technology, 1991
A study of block-oriented motion estimation algorithms is presented, and their application to motion-compensated temporal interpolation is described. In the proposed approach, the motion field within each block is described by a function of a few parameters that can represent typical local motion vector fields. A probabilistic formulation is then used to develop maximum-likelihood (ML) and maximum a posteriori probability (MAP) estimators.