Philippe Bekaert | Universiteit Hasselt
Papers by Philippe Bekaert
Proceedings of the 9th International Conference on Computer Vision Theory and Applications, 2014
Proceedings of the International Conference on Computer Vision Theory and Applications, 2011
We present a fully functional prototype that convincingly restores eye contact between two video chat participants with a minimal set of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if a virtual camera had captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the computational resources of graphics hardware to achieve real-time performance of up to 30 frames per second at 800×600 resolution. Furthermore, we present an optimal set of fine-tuned parameters that maximizes the end-to-end performance of the application while maintaining high subjective visual quality.
De Decker, B., Hasselt Univ, Expertise Ctr Digital Media, Wetenschapspark 2, Diepenbeek, Belgium.
One of the main problems in the radiosity method is how to discretise the surfaces of a scene into mesh elements that allow us to accurately represent illumination. In this paper we present a robust information-theoretic refinement criterion (oracle) based on kernel smoothness for hierarchical radiosity. This oracle improves on previous ones in that, at equal cost, it yields a better discretisation, approaching the optimal one from an information-theoretic point of view, and it also needs fewer visibility computations for similar image quality.
Accelerating Path Tracing by Re-Using Paths. Philippe Bekaert, Max-Planck-Institut für Informatik, Saarbrücken, Germany (Philippe.Bekaert@mpi-sb.mpg.de); Mateu Sbert, Institut d'Informàtica i Aplicacions, Universitat de Girona, Spain (mateu@ima.udg.es).
In this paper, we provide examples of how to optimize signal processing and visual computing algorithms written for SIMT-based GPU architectures. The implementations demonstrate optimizations for CUDA and the related programming models OpenCL and DirectCompute. We discuss the effects and optimization principles of memory coalescing, bandwidth reduction, processor occupancy, bank-conflict reduction, local memory elimination, and instruction optimization. The effects of the optimization steps are illustrated by state-of-the-art examples, and a comparison between optimized and unoptimized algorithms is provided. A first example discusses the construction of joint histograms using shared memory, where the optimizations lead to a significant speedup over the original implementation. A second example presents convolution and the results obtained.
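What the joint-histogram example computes can be sketched with a CPU reference in NumPy. This is a minimal sketch of the quantity only: the function name and the index-fusing scheme are illustrative, not the paper's actual CUDA kernel, which accumulates per-block histograms in shared memory to reduce global atomic contention.

```python
import numpy as np

def joint_histogram(a, b, bins=256):
    """Joint histogram of two equally-shaped integer images.

    On a GPU this is the step that benefits from shared-memory
    privatization: each thread block fills a private histogram
    in fast shared memory and merges it into the global result
    once, instead of issuing one global atomic per pixel.
    """
    a = np.asarray(a).ravel()
    b = np.asarray(b).ravel()
    assert a.shape == b.shape
    # Fuse the two bin indices into one flat index, then count.
    flat = a.astype(np.int64) * bins + b.astype(np.int64)
    counts = np.bincount(flat, minlength=bins * bins)
    return counts.reshape(bins, bins)
```

Joint histograms computed this way are the core ingredient of mutual-information measures used, for example, in image registration.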
Procedings of the British Machine Vision Conference 2008, 2008
Applied Sciences, 2020
Light field 3D displays require precise alignment between the display source and the micromirror-array screen for error-free 3D visualization. Hence, calibrating the system using an external camera becomes necessary before displaying any 3D content. The inter-dependency of the intrinsic and extrinsic parameters of the display source, calibration camera, and micromirror-array screen makes the calibration process complex and error-prone. Thus, several assumptions are usually made about the display setup in order to simplify the calibration. A fully automatic calibration method based on several such assumptions was reported by us earlier. In this paper, we report a method that uses no such assumptions yet yields a better calibration. The proposed method adopts an optical solution in which the micromirror-array screen is fabricated as a computer-generated hologram with a tiny diffuser engraved at one corner of each elemental micromirror in the array. The calibration algorith...
Proceedings of the 10th International Conference on Signal Processing and Multimedia Applications and 10th International Conference on Wireless Information Networks and Systems, 2013
Proceedings of Identification of dark matter 2008 — PoS(idm2008), 2009
Electronic Workshops in Computing, 2008
Optics letters, 2018
Concave micro-mirror arrays fabricated as holographic optical elements are used in projector-based light field displays due to their see-through characteristics. The optical axes of the micro-mirrors in the array are usually made parallel to each other, which simplifies fabrication, integral image rendering, and calibration. However, this demands that the beam from the projector be collimated and made parallel to the optical axis of each elemental micro-mirror. The additional collimation optics this requires puts serious limitations on the size of the display. In this Letter, we propose a solution to this issue by introducing a new method to fabricate holographic concave micro-mirror array sheets, and we explain in detail how they work. 3D light field reconstructions 20 cm×10 cm in size and 6 cm in depth are achieved using a conventional projector without any collimation optics.
2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012
ACM SIGGRAPH 2016 Posters on - SIGGRAPH '16, 2016
In recent years there has been growing interest in the generation of virtual views from a limited set of input cameras. This is especially useful for applications such as free viewpoint navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while recording with that many cameras is often not feasible. View interpolation algorithms typically traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to an excessive amount of unnecessary computation in regions where no objects are located. It also results in more mismatches and, thus, inaccuracies in the generated views. These problems also occur when too large a depth range is selected; hence, a depth range that tightly encloses the scene is typically selected manually to mitigate these errors. A depth distribution that organizes the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the number of computations by reducing the number of depths to search through. [Goorts et al. 2013] determine a nonuniform global depth distribution by reusing the depth information generated at the previous time stamp, which makes the algorithm dependent on previous results.
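The idea of concentrating depth layers around scene content can be sketched as follows. This is a hedged illustration only: `depth_layers_from_samples` and its quantile heuristic are our own simplification under the assumption that rough depth samples are available, not the poster's actual method.

```python
import numpy as np

def depth_layers_from_samples(depth_samples, n_layers):
    """Place plane-sweep depth layers where scene content is.

    depth_samples: rough per-pixel depth estimates (e.g. from a
    coarse uniform sweep). Layers are chosen as quantiles of the
    empirical depth distribution, so densely populated depth
    regions receive many layers and empty space receives few,
    unlike a uniform subdivision of [d_min, d_max].
    """
    d = np.asarray(depth_samples, dtype=float).ravel()
    # One layer per equal-mass slice of the depth distribution.
    q = (np.arange(n_layers) + 0.5) / n_layers
    return np.quantile(d, q)
```

A view interpolation sweep would then test correspondences only at these layers, cutting the per-pixel matching cost roughly in proportion to the number of layers saved.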
The inversion of a gravitational lens system is, as is well known, plagued by the so-called mass-sheet degeneracy: one can always rescale the density distribution of the lens and add a constant-density mass sheet such that the (also properly rescaled) source plane is projected onto the same observed images. For strong lensing systems, it is often claimed that this degeneracy is broken as soon as two or more sources at different redshifts are available. This is definitely true in the strict sense that it is then impossible to add a constant-density mass sheet to the rescaled density of the lens without affecting the resulting images. However, one can often easily construct a more general mass distribution, instead of a constant-density sheet of mass, which gives rise to the same effect: a uniform scaling of the sources involved without affecting the observed images. We show that this can be achieved by adding one or more circularly symmetric mass distributions, each with its own center of symmetry, to the rescaled mass distribution of the original lens. Because it uses circularly symmetric distributions, this procedure can introduce ring-shaped features in the mass distribution of the lens. In this paper, we show explicitly how degenerate inversions for a given strong lensing system can be constructed. It then becomes clear that many constraints are needed to effectively break this degeneracy.
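For a single source plane, the classical mass-sheet transformation the abstract refers to can be written explicitly in the usual strong-lensing notation (convergence $\kappa$, source position $\vec\beta$, scaling parameter $\lambda$):

```latex
% Rescale the convergence and add a uniform sheet of density (1-\lambda);
% the lens equation then maps the same image positions \vec\theta to
% uniformly rescaled source positions, leaving the images unchanged.
\kappa_\lambda(\vec\theta) = \lambda\,\kappa(\vec\theta) + (1-\lambda),
\qquad
\vec\beta \;\longrightarrow\; \lambda\,\vec\beta .
```

With sources at several redshifts the sheet term acquires a redshift-dependent weight and this simple form no longer works, which is why the degeneracy is commonly said to be broken; the paper's point is that more general (circularly symmetric) additions can restore an equivalent scaling.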