Volkmar Wieser - Academia.edu
Papers by Volkmar Wieser
Proceedings of SPIE, Jun 28, 2011
In this paper the problem of high-performance software engineering is addressed in the context of image processing, with regard to productivity and optimized exploitation of hardware resources. To this end, we introduce the functional array processing language Single Assignment C (SaC), which relies on a hardware virtualization concept for automated, parallel machine code generation. An illustrative benchmarking example demonstrates both the utility and the adequacy of SaC for image processing.
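SaC expresses image operations as shape-generic array comprehensions that the compiler parallelizes automatically. As a language-neutral illustration of that data-parallel style (not SaC syntax itself, and not an example from the paper), the following NumPy sketch writes a 3x3 box blur as whole-array expressions rather than explicit pixel loops:

```python
import numpy as np

def box_blur_3x3(img: np.ndarray) -> np.ndarray:
    """Whole-array 3x3 box blur: each output pixel is the mean of its
    3x3 neighborhood. No explicit pixel loops, mirroring the style
    that languages like SaC compile to parallel machine code."""
    padded = np.pad(img, 1, mode="edge")
    # Sum the nine shifted views of the padded image.
    acc = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3))
    return acc / 9.0

# Usage: blurred = box_blur_3x3(np.random.rand(480, 640))
```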
Communications in computer and information science, 2022
This industrial spotlight paper outlines a Riemannian geometry inspired approach to measuring geometric quantities in the plane of focus of a Scheimpflug camera in the presence of nonlinear distortions caused by the Scheimpflug model and nonlinear lens distortion.
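The abstract gives no formulas; a standard Riemannian construction of the kind it alludes to measures lengths in distorted image coordinates by pulling the Euclidean metric of the plane of focus back through the distortion map. A minimal sketch, assuming a smooth distortion map \(\phi\) from image coordinates to the plane of focus (the symbols are illustrative, not from the paper):

```latex
% Length of a curve gamma(t) in image coordinates, measured in the plane
% of focus via the pullback metric g = J_phi^T J_phi, where J_phi is the
% Jacobian of the distortion map phi (assumption: phi smooth).
L(\gamma) = \int_a^b \sqrt{\dot\gamma(t)^{\top}\, J_\phi(\gamma(t))^{\top} J_\phi(\gamma(t))\, \dot\gamma(t)}\; dt
```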
Procedia Computer Science, 2021
Federated machine learning frameworks, which take into account the confidentiality of distributed data sources, are of increasing interest in smart manufacturing. However, the scope of applicability of most such frameworks is restricted in industrial settings due to limitations in the assumptions on the data sources involved. In this work, we first shed light on the nature of this gap between current federated learning and the requirements of industrial settings. Our discussion aims at clarifying related notions in emerging, partially overlapping sub-disciplines of machine learning. Second, we envision a new confidentiality-preserving approach for smart manufacturing applications based on the more general setting of transfer learning, and sketch its implementation in a module-based platform.
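The paper proposes a transfer-learning-based alternative; for context, the federated baseline it departs from, federated averaging, can be sketched in a few lines. Everything below (function names, the plain-NumPy linear model) is illustrative, not taken from the paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a linear
    least-squares model (illustrative stand-in for a real model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """FedAvg: each round, clients train locally on private data and the
    server aggregates only the resulting weights, never the raw data."""
    for _ in range(rounds):
        sizes = [len(y) for _, y in clients]
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        # Weight each client's model by its share of the total data.
        global_w = sum(n * w for n, w in zip(sizes, local_ws)) / sum(sizes)
    return global_w
```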
This paper addresses the gap between envisioned hardware-virtualized techniques for GPU programming and a conventional approach, from the point of view of an application engineer, taking into account software engineering aspects such as maintainability, understandability, and productivity, as well as the resulting gain in performance and scalability. This gap is discussed on the basis of use cases from the field of image processing, and illustrated by means of performance benchmarks as well as evaluations of software engineering productivity.
Software: Practice and Experience, 2020
Source code comments contain key information about the underlying software system. Many redocumentation approaches, however, cannot exploit this valuable source of information. This is mainly due to the fact that not all comments have the same goals and target audience, and can therefore only be used selectively for redocumentation. Performing the required classification manually, for example in the form of heuristics, is usually time-consuming and error-prone, and strongly dependent on the programming languages and guidelines of concrete software systems. By leveraging machine learning (ML), it should be possible to classify comments and thus transfer valuable information from the source code into documentation with less effort but the same quality. We applied classical ML techniques as well as deep learning (DL) approaches to legacy systems by transferring source code comments into meaningful representations using, for example, word embeddings, but also novel approaches using quick response...
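The abstract names text representations plus classical ML; a minimal sketch of such a comment classifier, using a TF-IDF representation and logistic regression from scikit-learn (the label set and pipeline are illustrative assumptions, not the paper's setup):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data: comments labeled by redocumentation value.
comments = [
    "Returns the checksum of the given buffer.",   # API documentation
    "TODO: remove this workaround after v2.1",     # maintenance note
    "Computes prices according to tax rule 47b.",  # domain knowledge
    "hack, don't touch",                           # maintenance note
]
labels = ["api", "todo", "domain", "todo"]

# TF-IDF features + logistic regression as a classical ML baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

print(clf.predict(["FIXME: handle empty input"]))  # likely 'todo'
```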
SPIE Proceedings, 2007
Thin-film sensors for use in automotive or aeronautic applications must conform to very high quality standards. Due to defects that cannot be detected by conventional electronic measurements, an accurate optical inspection is imperative to ensure the long-term quality of the produced thin-film sensor. In this particular case, resolutions of 1 µm per pixel are necessary to meet the required high quality standards. Furthermore, it has to be guaranteed that defects are detected robustly and with high reliability. In this paper, a new method is proposed that solves the problem of handling local deformations due to production variability without having to use computationally intensive local image registration operations. The main idea of this method is a combination of efficient morphological preprocessing and a multi-step comparison strategy based on logical implication. The main advantage of this approach is that the neighborhood operations that provide the robustness of the image comparison can be computed in advance and stored in a modified reference image. By virtue of this approach, no further neighborhood operations have to be carried out on the acquired test image during inspection time. A systematic experimental study shows that this method is superior to existing approaches concerning reliability, robustness, and computational efficiency. As a result, the requirements of high-resolution inspection and high-performance throughput while accounting for local deformations are met very well by the implemented inspection system. The work is substantiated with theoretical arguments and a comprehensive analysis of the obtained performance and practical usability in the above-mentioned, challenging industrial environment.
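The abstract describes precomputing the tolerance-providing neighborhood operation into a modified reference image. A minimal sketch of that idea for binary masks, using morphological dilation from SciPy (the choice of operator and the tolerance radius are assumptions, not the paper's exact preprocessing):

```python
import numpy as np
from scipy import ndimage

def build_tolerant_reference(ref_mask: np.ndarray, radius: int = 2):
    """Offline step: dilate the binary reference so each foreground pixel
    tolerates local deformations of up to `radius` pixels
    (radius is an illustrative assumption)."""
    structure = ndimage.generate_binary_structure(2, 2)
    return ndimage.binary_dilation(ref_mask, structure, iterations=radius)

def inspect(test_mask: np.ndarray, tolerant_ref: np.ndarray) -> np.ndarray:
    """Online step, comparison by logical implication: a test pixel is
    defective iff it is set but NOT allowed by the dilated reference,
    i.e. the implication (test => tolerant_ref) fails. No neighborhood
    operations touch the test image at inspection time."""
    return test_mask & ~tolerant_ref

# Usage:
# tolerant = build_tolerant_reference(reference > 0)
# defects = inspect(test_image > 0, tolerant)
```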
The invention describes a method for the quality inspection of surfaces (5) with a light source (1) and an optical image acquisition device (2), where a relative movement (24) takes place between the image acquisition device (2) and a surface (5) to be inspected. A divergent beam bundle (3) is directed from the light source (1) onto a section (4) of the surface (5) to be inspected, and at least two images of the illuminated surface (8) are captured in temporal succession by the image acquisition device (2). The captured images are compared by an evaluation device, and a deviation is determined from them. The invention further concerns a divergent light source (1) comprising an optical radiation source (9) and a reflector (10), where the reflector (10) has a lateral surface (14) that is reflective at least in sections. The lateral surface (14) is convex and reflects a beam incident from a first direction (29)...
Since digital photographs are now a mandatory component of digital passports, it has become necessary to consider different guidelines and standards to represent passport photographs in a uniform way. These criteria are determined in the ISO standard [6] "ISO/IEC CD 19794-5" and are intended to be trend-setting for the process of face-based biometric personal identification. By observing these regulations, many sources of errors can be avoided in advance, because identification is done on so-called "canonical face images". This diploma thesis discusses the automatic calculation of canonical face images, which essentially consists of three steps: face detection, eye detection, and calculation of the canonical face image. Care has been taken to consider and use state-of-the-art methods and also to develop novel methods to obtain optimum results for this task. Accordingly, it is described to what degree the aim of face and eye detection by transf...
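The abstract names the pipeline stages but not concrete detectors; a minimal sketch of the first two stages using OpenCV's stock Haar cascades (the cascade files and parameters are illustrative stand-ins, not the methods the thesis actually evaluates):

```python
import cv2

# Stock OpenCV Haar cascades as illustrative stand-ins for the thesis'
# face and eye detectors (the abstract does not name concrete methods).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(gray):
    """Stages 1 and 2 of the pipeline: detect a face, then eyes within it.
    Returns (face_rect, eye_rects) or None if no face is found."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    return (x, y, w, h), eyes

# Stage 3 would rotate, scale, and crop the image so the detected eye
# positions land on the coordinates prescribed by ISO/IEC 19794-5.
```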
Image Processing: Machine Vision Applications VI, 2013
This paper proposes a novel approach to determine the texture periodicity, the texture element size, and further characteristics, such as the area of the basin of attraction, when computing the similarity of a test image patch with a reference. The presented method utilizes the properties of a novel metric, the so-called discrepancy norm. Due to its Lipschitz and monotonicity properties, the discrepancy norm distinguishes itself from other metrics by well-formed and stable convergence regions. The periodicity and the convergence regions are closely related, and both have an immediate impact on the performance of a subsequent template matching and evaluation step. The general form of the proposed approach relies on the generation of discrepancy norm induced similarity maps at random positions in the image. By applying standard image processing operations such as watershed and blob analysis to the similarity maps, a robust estimate of the characteristic periodicity can be computed. From the general approach, a tailored version for orthogonally aligned textures is derived, which is robust against noise-disturbed images and is suitable for estimation on near-regular textures. In an experimental set-up, the estimation performance is tested on samples from standardized image databases and compared with state-of-the-art methods. Results show that the proposed method is applicable to a wide range of nearly regular textures, with estimation results that keep up with current methods. When a hypothesis generation/selection mechanism is added, it even outperforms the current state of the art.
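The abstract does not restate the definition: the discrepancy norm of a finite sequence is the maximal absolute sum over contiguous index ranges, which reduces to a difference of extremal prefix sums and is therefore computable in linear time. A minimal sketch of the 1D case (the 2D version used for similarity maps maximizes over rectangular index ranges):

```python
import numpy as np

def discrepancy_norm_1d(f: np.ndarray) -> float:
    """Discrepancy norm ||f||_D = max over intervals [a, b] of |sum f[a:b+1]|.
    With prefix sums S_0 = 0, S_k = f[0] + ... + f[k-1], this equals
    max_k S_k - min_k S_k, computable in O(n)."""
    s = np.concatenate(([0.0], np.cumsum(f)))
    return float(s.max() - s.min())

# Example: sign changes 'cancel' less than under the L1 norm would suggest.
print(discrepancy_norm_1d(np.array([1.0, 1.0, -1.0])))  # 2.0, from f[0:2]
```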
This paper introduces the ADVANCE approach, a new component-based approach to engineering concurrent systems. A cost-directed tool-chain maps concurrent programs onto emerging hardware architectures, where costs are expressed in terms of programmer annotations for the throughput, latency, and jitter of components. These are then synthesized using advanced statistical analysis techniques to give overall cost information about the concurrent system, which the hardware virtualisation layer can exploit to drive mapping and scheduling decisions. Initial performance results are presented, showing that the ADVANCE technologies provide a promising approach to dealing with near- and future-term complexities of programming heterogeneous multi-core systems.
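The abstract leaves the cost model abstract. As a hedged illustration of how per-component annotations might compose (the class and the composition rules below are generic pipeline reasoning, not taken from ADVANCE): in a linear pipeline, latency accumulates while throughput is bottlenecked by the slowest stage.

```python
from dataclasses import dataclass

@dataclass
class Cost:
    throughput: float  # items/s the component can sustain
    latency: float     # seconds per item
    jitter: float      # latency spread proxy, seconds

def pipeline_cost(stages: list[Cost]) -> Cost:
    """Compose annotated component costs for a linear pipeline:
    throughput is limited by the slowest stage, latency accumulates,
    and (assuming independent stages) jitter adds in quadrature."""
    return Cost(
        throughput=min(s.throughput for s in stages),
        latency=sum(s.latency for s in stages),
        jitter=sum(s.jitter ** 2 for s in stages) ** 0.5,
    )

print(pipeline_cost([Cost(100.0, 0.01, 0.001), Cost(40.0, 0.02, 0.002)]))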
Hagenberg Research, 2010
Wolfgang Schreiner, Karoly Bosa, Andreas Langegger, Thomas Leitner, Bernhard Moser, Szilard Pall, Volkmar Wieser, and Wolfram Wöß. "Parallel, Distributed, and Grid Computing." Chapter VII in Hagenberg Research, pp. 333-378. Springer, Berlin, 2009. ISBN 978-3-642-02126-8.
Journal of Electronic Imaging, 2012
Combining high productivity and high performance in image processing using Single Assignment C on multi-core CPUs and many-core GPUs.
Journal of Electronic Imaging, 2012
Time-of-flight (TOF) full-field range cameras use a correlative imaging technique to generate three-dimensional measurements of the environment. Though reliable and cheap, they suffer from high measurement noise and errors that limit the practical use of these cameras in industrial applications. We show how some of these limitations can be overcome with standard image processing techniques specially adapted to TOF camera data. Additional information in the multimodal images recorded in this setting, not available in standard image processing settings, can be used to improve the reduction of measurement noise. Three extensions of standard techniques, wavelet thresholding, adaptive smoothing based on a clustering-based image segmentation, and an extended anisotropic diffusion filter, make use of this information and are compared on synthetic data and on data acquired from two different off-the-shelf TOF cameras. Of these methods, the adapted anisotropic diffusion technique gives the best results and can be implemented to run in real time on current graphics processing unit (GPU) hardware. Like traditional anisotropic diffusion, it requires some parameter adaptation to the scene characteristics, but it allows for low visualization delay and improved visualization of moving objects by avoiding the long averaging periods of traditional TOF image denoising.
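For reference, the classical Perona-Malik scheme that the paper's extended anisotropic diffusion builds on can be sketched as follows; the TOF-specific adaptation (e.g., steering the conductivity by the additional amplitude channel) is not reproduced here, and the parameters are illustrative:

```python
import numpy as np

def perona_malik(img: np.ndarray, n_iter=20, kappa=0.1, lam=0.2):
    """Classical Perona-Malik anisotropic diffusion.
    kappa: edge threshold; lam: step size (stable for lam <= 0.25)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbors
        # (np.roll gives periodic borders, an implementation shortcut).
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u, 1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        # Edge-stopping conductivity: flat regions diffuse, edges do not.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```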