Jürgen Brehm - Academia.edu

Papers by Jürgen Brehm

Task Execution in Distributed Smart Systems

Lecture Notes in Computer Science, 2015

This paper presents a holistic approach to execute tasks in distributed smart systems. This is shown by the example of monitoring tasks in smart camera networks. The proposed approach is general and thus not limited to a specific scenario. A job-resource model is introduced to describe the smart system and the tasks, with as much order as necessary and as few rules as possible. Based on that model, a local algorithm is presented, which is developed to achieve optimization transparency. This means that the optimization on system-wide criteria will not be visible to the participants. To a task, the system-wide optimization is a virtual local single-step optimization. The algorithm is based on proactive quotation broadcasting to the local neighborhood. Additionally, it allows the parallel execution of tasks on resources and includes the optimization of multiple-task-to-resource assignments.

Sequentielle Lösung von großen, linearen Gleichungssystemen mit dünn besetzter Koeffizientenmatrix

In many engineering applications, such as circuit simulation, network theory, and the numerical solution of systems of partial differential equations (finite element or finite difference methods), large systems of linear equations arise. For most real-world problems [Bun76], [Duf86], the coefficient matrix A is sparse, i.e., the number of its nonzero elements is small compared to the total number of matrix elements. Solving such systems efficiently on a computer requires special techniques: special numerical algorithms that exploit the underlying structure of the matrix (sparsity), and special storage schemes that minimize memory requirements.
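The storage schemes mentioned above can be illustrated with compressed sparse row (CSR) format, a common choice (though the abstract does not name a specific scheme): only nonzero entries are stored, so memory scales with the nonzero count rather than with the full matrix size. A minimal sketch:

```python
def to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays:
    nonzero values, their column indices, and per-row offsets."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, a in enumerate(row):
            if a != 0:
                values.append(a)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

A = [[4, 0, 0],
     [0, 0, 2],
     [1, 0, 3]]
vals, cols, ptr = to_csr(A)
print(csr_matvec(vals, cols, ptr, [1.0, 1.0, 1.0]))  # [4.0, 2.0, 4.0]
```

For a matrix with a handful of nonzeros per row, the three CSR arrays replace an n-by-n dense store, which is exactly the memory saving the abstract refers to.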

Sparse matrix algorithms for SUPRENUM

Lecture Notes in Computer Science, 1990

In this talk we present the SUPRENUM multiprocessor system and some implementations of parallelized sparse matrix algorithms. The SUPRENUM multiprocessor system was first delivered late in 1989. It is the result of a research project in which German research institutes, universities, and industrial companies worked together to build a 256-processor distributed-memory machine. In parallel with the construction of SUPRENUM, a great deal of time and manpower was invested in the software support of the project. As an important application in scientific computation, we parallelized the solution of systems of linear equations Ax=b. For realistic problems the large coefficient matrix A is usually sparse, i.e., a large number of its entries are zero. We show how direct algorithms based on Gaussian elimination and semi-iterative algorithms (conjugate gradient methods) can be implemented on SUPRENUM. The conjugate gradient methods in particular, which are very well suited for parallelization and vectorization, proved to be very efficient on multiprocessor architectures.
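The conjugate gradient method mentioned above can be sketched in a few lines; this is the textbook sequential algorithm, not the SUPRENUM implementation, and the dense matrix-vector product would in practice be a sparse one:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite A (dense lists here;
    a real solver would use a sparse matvec for A @ p)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual r = b - A x, with initial guess x = 0
    p = r[:]            # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # x converges to [1/11, 7/11]
```

The matvec, dot products, and vector updates are all data-parallel, which is why the method maps so well onto distributed-memory machines like SUPRENUM: each processor owns a block of rows and only the dot products need global communication.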

Parallel conjugate gradient algorithms for solving the Neutron Diffusion Equation on SUPRENUM

In this paper we present an implementation of a parallelized sparse matrix algorithm for solving the Neutron Diffusion Equation on the SUPRENUM multiprocessor system. The solution of the steady-state and transient Neutron Diffusion Equation is one of the major tasks in reactor physics. We used standard and preconditioned conjugate gradient methods, which are well suited for parallelization and vectorization on multiprocessor architectures. All presented algorithms were implemented on the two-cluster SUPRENUM at the University of Erlangen-Nuremberg.

Parallele Rechnerarchitekturen

Deutscher Universitätsverlag eBooks, 1992

Bewertung der Ergebnisse

Computer hardware has developed at a rapid pace over the last decades, a pace with which software could not keep up. Initially, all programs were optimized for sequential computers with relatively small main memory, and methods were chosen accordingly, favoring algorithms that permitted this kind of optimization. With rising integration density, memory prices fell, and users of computing installations could run larger applications without difficulty. Many of the programs known today as "dusty deck Fortran" date from this period: they do their job on a sequential computer, but are often so unstructured that they are difficult to vectorize and parallelize. Automatic vectorizers subsequently led to a better acceptance of supercomputers (vector machines). Before MIMD machines reach a similar acceptance among users, further progress is needed in parallel methods, in the communication performance of multiprocessors, and above all in the programming environment offered to the user.

Eine parallele, architekturunabhängige Programmierumgebung

The consequences described in this chapter result on the one hand from the weaknesses of parallel computer architectures and on the other hand highlight the catching-up needed in the field of parallel algorithms. Important problems of parallel processing (communication performance, parallelization of algorithms, programmability) were discussed and summarized using concrete examples. This book set out to describe the interplay between algorithms and architecture, and the proposal of a portable parallel programming environment provides a synthesis of these two fields. After a discussion of existing approaches to parallel programming, a new, architecture-independent concept is presented. Starting from the observation that there are essentially two different coupling schemes for multiprocessors (message coupling and memory coupling), it should not be too difficult to develop a portable and genuinely parallel programming language. Genuinely parallel means, in this context, that one does not simply take a conventional sequential language and extend it with a few constructs for communication and process creation, but instead starts from a parallel programming model. Most published implementations of parallel algorithms are methods optimized for specific multiprocessor architectures. This means that a parallel algorithm X is usually very efficient on its target machine Y, but has to be reprogrammed for every other architecture. The problem to be solved is to find a suitable compromise between hardware-oriented, architecture-dependent, efficient programming and automatic, generally inefficient, parallel programming.

Activity Recognition using Optical Sensors on Mobile Phones

GI Jahrestagung, 2009

Each mobile phone with a built-in CMOS sensor can inherently be seen as a sophisticated optical sensor, able to analyze its environment in terms of visual events and its own mobility. Due to mass production, the price of these sensors decreases steadily while their processing capacity increases. Mobile phones are usually attached to people, who are driven by mobility. We define activities arising from this mobility as internal activities, in contrast to external activities, which are caused by visual events. Both kinds of activity can be recognized by measuring the sensor's optical flow. We present a method to identify internal activities based on optical flow measurements and probabilistic reasoning. We implement a lifelogging application, running on a Linux-based mobile phone, that can detect internal activities such as moving left-hand, moving right-hand, or walking with a recognition rate of 80%. While the phone is standing still, external activities are recognized using object detection.
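The core idea of inferring internal activity from optical flow can be sketched with simple thresholding on the mean flow vector. The labels, thresholds, and the sign convention (camera moving left makes image content flow right) are illustrative assumptions here; the paper itself uses probabilistic reasoning rather than fixed thresholds:

```python
import math

def classify_internal_activity(flow_vectors, still_thresh=0.5):
    """Classify a coarse internal activity from per-frame optical flow
    (dx, dy) vectors. A hypothetical rule-based stand-in for the paper's
    probabilistic model: labels and thresholds are illustrative only."""
    n = len(flow_vectors)
    mean_dx = sum(dx for dx, _ in flow_vectors) / n
    mean_dy = sum(dy for _, dy in flow_vectors) / n
    magnitude = math.hypot(mean_dx, mean_dy)
    if magnitude < still_thresh:
        # Negligible ego-motion: hand over to object detection
        # for external activities, as the abstract describes.
        return "standing-still"
    if abs(mean_dx) > abs(mean_dy):
        # Assumed sign convention: image flows right => camera moved left.
        return "moving-left" if mean_dx > 0 else "moving-right"
    return "walking"  # dominant vertical flow, e.g. bobbing while walking
```

A real pipeline would first estimate the (dx, dy) vectors per frame pair with an optical flow algorithm and smooth the decision over a window of frames.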

Rollback Recovery in Multiprocessor Ring Configurations

Informatik-Fachberichte, 1987

This paper describes a technique for distributed recovery in multiprocessor ring configurations, which has been developed and implemented for the multiprocessor system DIRMU 25, a 25-processor system operational at the University of Erlangen-Nuremberg. First, a short overview of the DIRMU hardware architecture and the distributed operating system DIRMOS is given. The steps of distributed recovery using distributed system checkpoints are described. By measuring the runtime overhead of a realistic application (2D Poisson multigrid), its efficiency is discussed in comparison to recovery techniques using central system checkpoints.
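The checkpoint/rollback scheme described above can be sketched abstractly: every worker snapshots its state at a coordinated checkpoint, and on any failure the whole set rolls back to the last consistent snapshot. The class and function names are illustrative, not the DIRMOS interfaces, and in-flight messages are ignored in this toy version:

```python
import copy

class CheckpointedWorker:
    """Toy sketch of rollback recovery: snapshot on checkpoint,
    restore the snapshot on rollback."""
    def __init__(self, state):
        self.state = state
        self.checkpoint_state = copy.deepcopy(state)

    def checkpoint(self):
        self.checkpoint_state = copy.deepcopy(self.state)

    def rollback(self):
        self.state = copy.deepcopy(self.checkpoint_state)

def coordinated_checkpoint(workers):
    # All workers snapshot together, so the snapshots form one
    # consistent system-wide checkpoint.
    for w in workers:
        w.checkpoint()

def recover(workers):
    # After a failure, every worker returns to the common checkpoint,
    # and computation resumes from there.
    for w in workers:
        w.rollback()

workers = [CheckpointedWorker({"iteration": 0}) for _ in range(3)]
for w in workers:
    w.state["iteration"] = 5       # some progress
coordinated_checkpoint(workers)    # consistent cut at iteration 5
for w in workers:
    w.state["iteration"] = 9       # more progress, then a failure occurs
recover(workers)                   # all workers back at iteration 5
```

The paper's contribution concerns doing this in a distributed fashion on a ring, avoiding the bottleneck of collecting all checkpoints centrally; the sketch only shows the checkpoint/rollback mechanics themselves.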

Multigrid solution of diffusion equations on distributed memory multiprocessor systems / Multigrid-Lösung der Diffusionsgleichungen auf Multiprozessorsystemen mit verteiltem Speicher

A Model based Approach for Computing Speedup on Parallel Machines using Static Code Analysis

Towards Utilizing Open Data for Interactive Knowledge Transfer

Increasing heterogeneous Open Data is an ongoing trend in the current Social Semantic Web (s2w). Generic concepts and how-tos for higher-layered reuse of the arbitrary information overload for interactive knowledge transfer and learning, including the Internet of Services (IoS), are not covered very well yet. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, and machine- and human-interpretable representation are as essential as lightweight user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry), a service-oriented, generic and hybrid approach that combines knowledge-related offers for convenient reuse and enriches them with interaction for improved access and a rich user experience. qKAI aims at closing some residual gaps between the "sophisticated" Semantic Web and the "hands-on" Web 2.0, enabling loosely coupled knowledge and information services focused on knowledge life cycles and learning aspects.

Performance prediction for complex parallel applications

The U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

An InfoSpace Paradigm for Local and ad hoc Peer-to-Peer Communication

Lecture Notes in Computer Science, 2003

Towards Enhanced User Interaction to Qualify Web Resources for Higher-Layered Applications

The Web offers autonomous and frequently useful resources in ever-growing numbers. User Generated Content (UGC) such as wikis, weblogs, or web feeds often has no single responsible author or declared experts who check the created content for, e.g., accuracy, availability, objectivity, or reputation. The user cannot easily control the quality of the content he receives. If we want to use the distributed information flood as a linked knowledge base for higher-layered applications, e.g. for knowledge transfer and learning, information quality (IQ) is a very important and complex aspect for analyzing, personalizing, and annotating resources. In general, low information quality is one of the main discriminators of data sources on the Web. Assessing information quality in measurable terms can offer a personalized and smart view on a broad, global knowledge base. We developed the qKAI application framework to utilize available, distributed data sets in a practical manner. In the following we present our adaptation of information quality aspects to qualify Web resources, based on a three-level assessment model. We deploy knowledge-related IQ criteria as a tool to implement IQ mechanisms stepwise into the qKAI framework. Here, we exemplify selected criteria of information quality in qKAI, such as relevance or accuracy. We derived assessment methods for certain IQ criteria, enabling rich, game-based user interaction and semantic resource annotation. Open Content is embedded into knowledge games to increase the users' access and learning motivation. As a side effect, the resources' quality is enhanced stepwise by ongoing user interaction.
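Turning per-criterion assessments into one measurable quality value can be sketched as a weighted aggregation. The criteria names and weights below are illustrative assumptions, not qKAI's actual assessment model:

```python
def iq_score(criteria, weights):
    """Aggregate per-criterion scores in [0, 1] into a single information
    quality value via a weighted mean. Hypothetical criteria/weights;
    qKAI's three-level model is richer than this."""
    total_weight = sum(weights[name] for name in criteria)
    weighted = sum(weights[name] * criteria[name] for name in criteria)
    return weighted / total_weight

resource = {"relevance": 0.8, "accuracy": 0.6}
weights = {"relevance": 3.0, "accuracy": 1.0}
print(iq_score(resource, weights))  # 0.75
```

Such a scalar score makes resources comparable and rankable, which is the point of assessing quality "in measurable terms"; the per-criterion scores themselves would come from the game-based user interactions the abstract describes.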

A Parallel Adaptive Swarm Search Framework for Solving Black-Box Optimization Problems

Lecture Notes in Computer Science, 2019

This paper presents a framework to support parallel swarm search algorithms for solving black-box optimization problems. In swarm-based optimization, it is important to find a well-fitted set of parameters to increase the rate of convergence toward the optimum. This fitting is problem-dependent and time-consuming; the presented framework automates it. After finding parameters for the best algorithm, a good mapping of algorithmic properties onto the parallel hardware is crucial for the overall efficiency of a parallel implementation. Swarm-based algorithms are population-based: the best number of individuals per swarm and, in the parallel case, the best number of swarms in terms of efficiency and/or performance have to be found. Data dependencies result in communication patterns that have to be cheaper, in terms of execution time, than the computation between communications. Taking all this into account, the presented framework enables the programmer to implement efficient and adaptive parallel swarm search algorithms. The approach is evaluated on benchmarks and real-world problems.
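A minimal particle swarm optimizer illustrates the kind of algorithm and the parameters involved. The inertia weight `w`, the cognitive/social factors `c1`/`c2`, and the swarm size are exactly the settings such a framework would tune automatically; the values below are common textbook defaults, not the paper's:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-swarm PSO for a black-box objective f to minimize.
    w, c1, c2 and n_particles are the problem-dependent parameters the
    paper's framework fits automatically."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))
```

In a parallel multi-swarm setting, each swarm runs such a loop independently, and the communication pattern the abstract mentions is the periodic exchange of `gbest` between swarms, which must stay cheap relative to the objective evaluations in between.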

Object tracking as job-scheduling problem

This paper establishes a connection between object tracking, seen from a systems point of view, and the job-scheduling or job-shop problem. Often, surveillance areas cannot be fully monitored by a set of smart cameras at any given point in time, so decisions have to be made about which objects are to be tracked. The computer vision aspects of object tracking have made substantial strides, which permits elaborate planning of the monitoring jobs. In this paper, object tracking is handled as a job-scheduling problem: tracked objects are considered scheduling jobs that rely on smart cameras as resources, which follow corresponding tracking policies. The presented job-scheduling approach is based on proactive quotations advertised by the jobs. The main advantages of this algorithm are the avoidance of negotiation chains and the acceptance of locally non-optimal solutions to benefit the overall performance.
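The quotation idea can be sketched as a greedy assignment: each tracked object (job) advertises a quotation to nearby cameras (resources), and each resource locally accepts the best quotation it sees, without any chained negotiation. The utility function and the one-job-per-camera greedy rule are illustrative simplifications, not the paper's algorithm:

```python
def assign_by_quotations(jobs, resources, utility):
    """Greedy sketch of quotation-based scheduling. Each resource, in turn,
    accepts the still-unassigned job whose quotation has the highest local
    utility - a possibly non-optimal local choice that avoids negotiation
    chains. Hypothetical interface, not the paper's."""
    assignment = {}
    for res in resources:
        open_jobs = [j for j in jobs if j not in assignment.values()]
        if not open_jobs:
            break
        # Local single-step decision: no coordination with other resources.
        best = max(open_jobs, key=lambda j: utility(j, res))
        assignment[res] = best
    return assignment

# Toy utility table: e.g. how well each camera can track each object.
quote = {("a", "cam1"): 3, ("b", "cam1"): 1,
         ("a", "cam2"): 2, ("b", "cam2"): 5}
print(assign_by_quotations(["a", "b"], ["cam1", "cam2"],
                           lambda j, r: quote[(j, r)]))
```

Here cam1 locally prefers object "a", leaving "b" for cam2; a locally worse choice by one camera is accepted if it keeps the scheme free of negotiation chains, matching the trade-off the abstract describes.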

Utilizing Open Content for Higher-Layered Rich Client Applications

Abstract: Combining user interoperation with standardized web techniques is a promising mixture for building a next generation of web applications in the currently arising Social Semantic Web. Increasing heterogeneous Open Content is an ongoing trend. Generic concepts for higher-layered reuse of the arbitrary information overload, including the Internet of Services, are not covered very well yet. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, and machine- and human-interpretable representation are as essential as lightweight user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry) [1], a service-oriented, generic and hybrid approach that combines knowledge-related offers for convenient reuse and enriches them with interaction for improved access.

ORNL/TM-13254

Research paper thumbnail of Towards Utilizing Open Data for Interactive Knowledge Transfer

Increasing heterogeneous Open Data is an ongoing trend in the current Social Semantic Web (s2w). ... more Increasing heterogeneous Open Data is an ongoing trend in the current Social Semantic Web (s2w). Generic concepts and how-tos for higher-layered reuse of the arbitrary information overload for interactive knowledge transfer and learning - mentioning the Internet of Services (IoS) - are not covered very well yet. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, machine- and human-interpretable representation are as essential as lightweight user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry) - a service-oriented, generic and hybrid approach combining knowledge related offers for convenient reuse and tweaking them with interaction for improved access with rich user experience. qKAI aims at closing some residual gaps between the "sophisticated" Semantic Web and "hands-on" Web 2.0 enabling loose-coupled knowled...

Research paper thumbnail of Task Execution in Distributed Smart Systems

Lecture Notes in Computer Science, 2015

This paper presents a holistic approach to execute tasks in distributed smart systems. This is sh... more This paper presents a holistic approach to execute tasks in distributed smart systems. This is shown by the example of monitoring tasks in smart camera networks. The proposed approach is general and thus not limited to a specific scenario. A job-resource model is introduced to describe the smart system and the tasks, with as much order as necessary and as few rules as possible. Based on that model, a local algorithm is presented, which is developed to achieve optimization transparency. This means that the optimization on system-wide criteria will not be visible to the participants. To a task, the system-wide optimization is a virtual local single-step optimization. The algorithm is based on proactive quotation broadcasting to the local neighborhood. Additionally, it allows the parallel execution of tasks on resources and includes the optimization of multiple-task-to-resource assignments.

Research paper thumbnail of Sequentielle Lösung von großen, linearen Gleichungssystemen mit dünn besetzter Koeffizientenmatrix

In vielen ingenieurwissenschaftlichen Anwendungen, wie z.B. der Schaltkreissimulation, Netzwerkth... more In vielen ingenieurwissenschaftlichen Anwendungen, wie z.B. der Schaltkreissimulation, Netzwerktheorie, der numerischen Losung partieller Differentialgleichungssysteme (Finite Elemente Methoden oder Finite Differenzen) entstehen grose lineare Gleichungssysteme (LGSe). Fur die meisten realen Probleme [Bun76], [Duf86] ist die Koeffizientenmatrix A nur dunn besetzt, d.h. die Anzahl ihrer Nichmullelemente ist klein im Vergleich zur Gesamtzahl der Elemente der Matrix. Die effiziente Losung solcher Systeme auf dem Rechner erfordert spezielle Techniken: Spezielle numerische Algorithmen, die die zugrundeliegende Struktur der Matrix (schwache Besetzung) berucksichtigen. Spezielle Speichertechniken, die die Speicheranforderungen minimieren.

Research paper thumbnail of Sparse matrix algorithms for SUPRENUM

Lecture Notes in Computer Science, 1990

In this talk we witl present the SUPRENUM multiprocessor system and some implementations of paral... more In this talk we witl present the SUPRENUM multiprocessor system and some implementations of paralletized sparse matrix algorithms, The SUPRENUM multiprocessor system was delivered late in 1989for the first time. It is the result of a research project where German research institutes, universities and industrial companies worked together to built a 256 processor distributed memory machine. In parallel with the construction of the SUPRENUM a lot of time and man power was invested for the software support of the project. As an important application in scientific computation we parallelized the solution of systems of linear equations Ax=b. For realistic problems the large coefficient matrix A is sparse most of the time, i.e, a large number of its entries are zero. We show how direct algorithms based on Gauss Elimination and semi-iterative algorithms (Conjugate Gradient Methods) can be implemented on SUPRENUM. Especially the Conjugate Gradient Methods which are very well suited for parallelization and vectorization proved to be very efficient on multiprocessor architectures.

Research paper thumbnail of Parallel conjugate gradient algorithms for solving the Neutron Diffusion Equation on SUPERNUM

In this paper we present an implementation of a parallelized sparse matrix algorithm for solving ... more In this paper we present an implementation of a parallelized sparse matrix algorithm for solving the Neutron Diffusion Equation on the SUPRENUM multiprocessor system. The solution of the steady-state and transient Neutron Diffusion Equation is one of the major task in reactor physics. We used standard and preconditioned Conjugate Gradient Methods well suited for parallelization and vectorization on multiprocessor architectures. All presented algorithms were implemented on the 2 ChIster SUPRENUM at the University of Erlangen-Nuremberg.

Research paper thumbnail of Parallele Rechnerarchitekturen

Deutscher Universitätsverlag eBooks, 1992

Research paper thumbnail of Bewertung der Ergebnisse

Die Rechner haben in den letzten Jahrzehnten bezuglich der Hardware eine rasante Entwicklung geno... more Die Rechner haben in den letzten Jahrzehnten bezuglich der Hardware eine rasante Entwicklung genommen, bei der die Software nicht Schritt halten konnte. Am Anfang wurden samtliche Programme fur sequentielle Rechner mit relativ kleinem Hauptspeicher optimiert. Dementsprechend entschied man sich bei der Auswahl seiner Verfahren fur Algorithmen, die diese Moglichkeit der Optimierung erlaubten. Mit der steigenden Integrationsdichte sanken die Speicherpreise und die Benutzer der Rechenanlagen konnten ohne Probleme grosere Anwendungen berechnen lassen. In dieser Zeit entstanden viele der heute mit “dusty deck Fortran” bezeichneten Programme, die zwar auf dem sequentiellen Rechner ihre Aufgabe erfullen. aber oftmals so unstrukturiert programmiert sind, das man sie nur schwierig vektorisieren und parallelisieren kann. Die automatischen Vektorisierer haben dann zu einer besseren Akzeptanz der Superrechner (Vektormaschinen) gefuhrt. Bevor die MIMD-Rechner bei den Anwendern zu einer ahnlichen Akzeptanz kommen, mussen noch einige Fortschritte bei den parallelen Verfahren, bei der Kommunikationsleistung der Multiprozessoren und vor allem bei der Programmierumgebung fur den Benutzer gemacht werden.

Research paper thumbnail of Eine parallele, architekturunabhängige Programmierumgebung

Die in diesem Kapitel beschriebenen Konsequenzen resultieren einerseits aus den Schwachen der par... more Die in diesem Kapitel beschriebenen Konsequenzen resultieren einerseits aus den Schwachen der parallelen Rechnerarchitekturen und zeigen andererseits den Nachholbedarf im Bereich der parallelen Algorithmen auf. Wichtige Probleme der Parallelverarbeitung (Kommunikationsleistung, Parallelisierung von Algorithmen, Programmierbarkeit) wurden anhand konkreter Beispiele diskutiert und zusammengefast. Dieses Buch wurde mit dem Anspruch angetreten, die Wechselwirkungen von Algorithmen und Architektur zu beschreiben. Der Vorschlag einer portablen, parallelen Programmierumgebung stellt eine Synthese dieser beiden Arbeitsgebiete her. Nach der Diskussion der existierenden Ansatze zur parallelen Programmierung wird ein neues, architekturunabhangiges Konzept vorgestellt. Ausgehend von der Beobachtung, das es im wesentlichen zwei unterschiedliche Kopplungsmoglichkeiten fur Multiprozessoren (Nachrichten-und Speicherkopplung) gibt, sollte es nicht zu schwierig sein, eine portable und echt parallele Programmiersprache zu entwickeln. Echt parallel bedeutet in diesem Zusammenhang, das man nicht einfach eine herkommliche sequentielle Sprache nimmt und um einige Konstrukte zur Kommunikation und Prozeserzeugung erweitert, sondern das man von einem parallelen Programmiermodell ausgeht. Die meisten publizierten Implementierungen von parallelen Algorithmen stellen fur bestimmte Multiprozessorarchitekturen optimierte Verfahren dar. Das bedeutet, das der parallele Algorithmus X auf seiner Zielmaschine Y in der Regel sehr effizient ist, fur jede andere Architektur aber neu programmiert werden mus. Das zu losende Problem ist es, einen geeigneten Kompromis zwischen hardwarenaher, architekturabhangiger, effizienter Programmierung und automatischer, im allgemeinen ineffizienter, paralleler Programmierung zu finden.

Research paper thumbnail of Activity Recognition using Optical Sensors on Mobile Phones

GI Jahrestagung, 2009

Each mobile phone with a built-in CMOS sensor can inherently be seen as sophisticated optical sen... more Each mobile phone with a built-in CMOS sensor can inherently be seen as sophisticated optical sensor being able to analyze its environment in terms of visual events and its own mobility. Due to mass production their price decreases steadily, although their processing capacity increases. Mobile phones are usually attached to people, who are driven by mobility. We define activities arising from this mobility as internal activities in contrast to external activities, that are caused by visual events. Both activities can be recognized by measuring the sensor's optical flow. We present a method to identify internal activities based on optical flow measurements and probabilistic reasoning. We implement a lifelogging application, running on a Linux-based mobile phone, that can detect internal activities such as moving left-hand, right-hand or walking with a recognition rate of 80%. While standing still external activities are recognized using object detection.

Research paper thumbnail of Rollback Recovery in Multiprocessor Ring Configurations

Informatik-Fachberichte, 1987

This paper describes a technique for distributed recovery in multiprocessor ring configurations, ... more This paper describes a technique for distributed recovery in multiprocessor ring configurations, which has been developed and implemented for the multiprocessor system DIRMU 25 — a 25 processor system which is operational at the University of Erlangen-Nuremberg. First a short overview of the DIRMU hardware architecture and the distributed operating system DIRMOS is given. The steps of distributed recovery using distributed system checkpoints are described. By measurement of the runtime overhead of a realistic application (2D-Poisson-multigrid) its efficiency is discussed in comparasion to recovery techniques using central system checkpoints.

Research paper thumbnail of Multigrid solution of diffusion equations on distributed memory multiprocessor systems / Multigrid-Lösung der Diffusionsgleichungen auf Multiprozessorsystemen mit verteiltem Speicher

Research paper thumbnail of A Model based Approach for Computing Speedup on Parallel Machines using Static Code Analysis

Research paper thumbnail of Towards Utilizing Open Data for Interactive Knowledge Transfer

ABSTRACT Increasing heterogeneous Open Data is an ongoing trend in the current Social Semantic We... more ABSTRACT Increasing heterogeneous Open Data is an ongoing trend in the current Social Semantic Web (s2w). Generic concepts and how-tos for higher-layered reuse of the arbitrary information overload for interactive knowledge transfer and learning - mentioning the Internet of Services (IoS) - are not covered very well yet. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, machine- and human-interpretable representation are as essential as lightweight user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry) - a service-oriented, generic and hybrid approach combining knowledge related offers for convenient reuse and tweaking them with interaction for improved access with rich user experience. qKAI aims at closing some residual gaps between the "sophisticated" Semantic Web and "hands-on" Web 2.0 enabling loose-coupled knowledge and information services focussed on knowledge life cycles, learning aspects

Research paper thumbnail of Performance prediction for complex parallel applications

Government retains a nonexciusive. royalty-free license to publish or reproduce the published for... more Government retains a nonexciusive. royalty-free license to publish or reproduce the published form of this contribution. or allow othtn to do so, for US. Government purposes.

Research paper thumbnail of An InfoSpace Paradigm for Local and ad hoc Peer-to-Peer Communication

Lecture Notes in Computer Science, 2003

Research paper thumbnail of Towards Enhanced User Interaction to Qualify Web Resources for Higher-Layered Applications

The Web offers autonomous and frequently useful resources in growing manner. User Generated Conte... more The Web offers autonomous and frequently useful resources in growing manner. User Generated Content (UGC) like Wikis, Weblogs or Webfeeds often do not have one responsible authorship or declared experts who checked the created content for e.g. accuracy, availability, objectivity or reputation. The user is not able easily, to control the quality of the content he receives. If we want to utilize the distributed information flood as a linked knowledge base for higher-layered applications – e.g. for knowledge transfer and learning – information quality (iq) is a very important and complex aspect to analyze, personalize and annotate resources. In general, low information quality is one of the main discriminators of data sources on the Web. Assessing information quality with measurable terms can offer a personalized and smart view on a broad, global knowledge base. We developed the qKAI application framework to utilize available, distributed data sets in a practically manner. In the following we present our adaption of information quality aspects to qualify Web resources based on a three-level assessment model. We deploy knowledge-related iq-criteria as tool to implement iq-mechanisms stepwise into the qKAI framework. Here, we exemplify selected criteria of information quality in qKAI like relevance or accuracy. We derived assessment methods for certain iq-criteria enabling rich, game-based user interaction and semantic resource annotation. Open Content is embedded into knowledge games to increase the users’ access and learning motivation. As side effect the resources’ quality is enhanced stepwise by ongoing user interaction.

Research paper thumbnail of A Parallel Adaptive Swarm Search Framework for Solving Black-Box Optimization Problems

Lecture Notes in Computer Science, 2019

This paper presents a framework to support parallel swarm search algorithms for solving black-box optimization problems. In swarm-based optimization, it is important to find a well-fitted set of parameters to increase the convergence rate toward the optimum. This fitting is problem-dependent and time-consuming; the presented framework automates it. After parameters for the best algorithm are found, a good mapping of algorithmic properties onto parallel hardware is crucial for the overall efficiency of a parallel implementation. Swarm-based algorithms are population-based, so the best number of individuals per swarm and, in the parallel case, the best number of swarms in terms of efficiency and/or performance have to be found. Data dependencies result in communication patterns that have to be cheaper, in terms of execution time, than the computation between communications. Taking all this into account, the presented framework enables the programmer to implement efficient and adaptive parallel swarm search algorithms. The approach is evaluated through benchmarks and real-world problems.
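The parameters the abstract refers to can be made concrete with a minimal particle swarm optimizer: swarm size, inertia weight and the two acceleration coefficients are exactly the kind of problem-dependent settings such a framework would tune. This is a generic sketch, not the paper's implementation, and all names are illustrative:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing a black-box function f.

    n_particles, w, c1, c2 are the tunable parameters whose fitting is
    problem-dependent; a framework would search over them automatically.
    """
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a multi-swarm parallel setting, each swarm would run a loop like this independently and exchange only `gbest` periodically, which is the communication pattern whose cost must stay below the computation between exchanges.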

Research paper thumbnail of Object tracking as job-scheduling problem

This paper establishes a connection between object tracking, from a systems point of view, and the job-scheduling or job-shop problem. Often, surveillance areas cannot be fully monitored by a set of smart cameras at any given point in time, so decisions have to be made about which objects to track. The computer vision aspects of object tracking have made substantial strides, which permits elaborate planning of the monitoring jobs. In this paper, object tracking is handled as a job-scheduling problem: tracked objects are treated as scheduling jobs that rely on smart cameras as resources following corresponding tracking policies. The presented job-scheduling approach is based on proactive quotations advertised by the jobs. The main advantages of this algorithm are the avoidance of negotiation chains and the acceptance of locally non-optimal solutions to benefit the overall performance.
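The quotation-based assignment idea can be sketched as a greedy one-shot scheduler: jobs (tracked objects) advertise quotations for each camera, and the scheduler accepts the best quotations subject to camera capacity, tolerating locally non-optimal matches instead of renegotiating. This is a simplified illustration, not the paper's algorithm, and all names are hypothetical:

```python
def schedule(quotations, capacities):
    """Greedy quotation-based job-to-camera assignment.

    quotations: dict (job, camera) -> advertised value of tracking that
                job on that camera
    capacities: dict camera -> number of jobs it can still run
    Returns:    dict job -> (camera, accepted quotation value)
    """
    assignment = {}
    # Accept quotations best-first; no negotiation chains: once a
    # camera is full, lower quotations for it are simply skipped.
    for (job, cam), value in sorted(quotations.items(), key=lambda kv: -kv[1]):
        if job not in assignment and capacities.get(cam, 0) > 0:
            assignment[job] = (cam, value)
            capacities[cam] -= 1
    return assignment


# Two objects, two single-slot cameras: obj2 accepts its second-best
# camera rather than renegotiating for the full camA.
result = schedule(
    {("obj1", "camA"): 0.9, ("obj1", "camB"): 0.5,
     ("obj2", "camA"): 0.8, ("obj2", "camB"): 0.7},
    {"camA": 1, "camB": 1},
)
```

The acceptance of obj2's lower camB quotation is the locally non-optimal choice the abstract mentions; it keeps both cameras utilized without a negotiation chain.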

Research paper thumbnail of Utilizing Open Content for Higher-Layered Rich Client Applications

Combining user interoperation and standardized web techniques is a promising mixture for building a next generation of web applications in the currently arising Social Semantic Web. Increasingly heterogeneous Open Content is an ongoing trend, yet generic concepts for higher-layered reuse of the arbitrary information overload (mentioning the Internet of Services) are not covered very well. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, and machine- and human-interpretable representation are as essential as lightweight, user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry) [1], a service-oriented, generic and hybrid approach that combines knowledge-related offers for convenient reuse and tweaks them with interaction for improved access.

Research paper thumbnail of ORNL/TM-13254

Research paper thumbnail of Towards Utilizing Open Data for Interactive Knowledge Transfer

Increasingly heterogeneous Open Data is an ongoing trend in the current Social Semantic Web (s2w). Generic concepts and how-tos for higher-layered reuse of the arbitrary information overload for interactive knowledge transfer and learning (mentioning the Internet of Services, IoS) are not covered very well yet. For further, directed use of distributed services and sources, inquiry, interlinking, analysis, and machine- and human-interpretable representation are as essential as lightweight, user-oriented interoperation and competency in handling. In the following we introduce the qKAI application framework (qualifying Knowledge Acquisition and Inquiry), a service-oriented, generic and hybrid approach combining knowledge-related offers for convenient reuse and tweaking them with interaction for improved access with rich user experience. qKAI aims at closing some residual gaps between the "sophisticated" Semantic Web and "hands-on" Web 2.0, enabling loose-coupled knowled...