Karol Matiaško | University of Zilina

Papers by Karol Matiaško

Klasifikácia založená na metóde stabilných rozhodovacích stromov (Classification based on the method of stable decision trees)

Decision tree induction is one of the useful approaches for extracting classification knowledge from sets of instances. A considerable part of these instances is obtained from formal analysis and modeling of human activities, which have a fuzzy nature. It is often the case that real-world tasks which can be handled easily by humans are too difficult to be handled by machines. Fuzzy logic allows us to describe this problem, and the fuzzy decision tree is a very popular method for fuzzy classification. We introduced the term of cumulative information estimations based on an information-theoretic approach and used these cumulative estimations for the synthesis of different criteria of decision tree induction. Using these criteria allows us to produce a new type of tree. In this paper we introduce the Stable Ordered Fuzzy Decision Tree (FDT). The tree is oriented to parallel and stable processing of input attributes with differing costs. Using this FDT allows us to realize a sub-optimal classification. Such classification ...
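
As an illustration of the kind of splitting criterion such trees rely on, the sketch below computes a membership-weighted fuzzy entropy for a candidate attribute, in the style of fuzzy ID3; the paper's cumulative information estimations and the stability ordering of attributes are not reproduced here, and the function names and toy data are invented for the example.

```python
# Illustrative sketch of a fuzzy-entropy splitting criterion (generic fuzzy ID3 style);
# it does NOT reproduce the paper's cumulative information estimations.
import math

def fuzzy_entropy(memberships, labels):
    """Entropy of a fuzzy set of examples, weighted by membership degrees."""
    total = sum(memberships)
    if total == 0:
        return 0.0
    ent = 0.0
    for cls in set(labels):
        p = sum(m for m, y in zip(memberships, labels) if y == cls) / total
        if p > 0:
            ent -= p * math.log2(p)
    return ent

def attribute_score(fuzzy_values, labels):
    """Membership-weighted entropy of splitting on one fuzzy attribute.

    fuzzy_values: one dict {linguistic_term: membership} per example.
    """
    terms = fuzzy_values[0].keys()
    grand_total = sum(sum(v.values()) for v in fuzzy_values)
    score = 0.0
    for t in terms:
        memberships = [v[t] for v in fuzzy_values]
        weight = sum(memberships) / grand_total
        score += weight * fuzzy_entropy(memberships, labels)
    return score  # lower is better

# Toy example: two examples described by the fuzzy attribute "temperature".
values = [{"low": 0.8, "high": 0.2}, {"low": 0.1, "high": 0.9}]
labels = ["no", "yes"]
print(attribute_score(values, labels))
```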

Integrácia dátových zdrojov (Integration of data sources)

Integrating heterogeneous data sources is a major task for most companies, organizations, universities, ... Thanks to it, every organization can deliver the most important and relevant information to every user (employee, manager, student, professor, ...) without great effort. Our paper describes what can be integrated into a portal and how easily it can be done. The biggest problem is choosing the most effective and functional technique for solving these problems. The paper documents the possibilities of integrating data sources when building large information systems that draw data from various, and at the same time heterogeneous, sources. The article demonstrates selected integration approaches so that they can be presented in portlet solutions.

Learning fuzzy rules from fuzzy decision trees

Classification rules are an important tool for discovering knowledge from databases. Integrating fuzzy logic algorithms into databases allows us to reduce the uncertainty connected with data in databases and to increase the accuracy of the discovered knowledge. In this paper, we analyze some possible variants of building classification rules from a given fuzzy decision tree based on cumulative information. We compare their classification accuracy with the accuracy reached by statistical methods and other fuzzy classification rules.
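
For illustration only, the sketch below derives IF-THEN rules by collecting root-to-leaf paths of a small decision tree; the nested-dictionary tree representation and the toy attributes are assumptions made for the example, not the rule-construction variants analyzed in the paper.

```python
# Minimal sketch: one IF-THEN rule per root-to-leaf path of a (fuzzy) decision tree.
def extract_rules(node, conditions=()):
    """Recursively collect one rule per leaf as (conditions, class label)."""
    if "label" in node:                      # leaf
        return [(list(conditions), node["label"])]
    rules = []
    for term, child in node["children"].items():
        rules += extract_rules(child, conditions + ((node["attribute"], term),))
    return rules

tree = {
    "attribute": "temperature",
    "children": {
        "low":  {"label": "heating_on"},
        "high": {"attribute": "humidity",
                 "children": {"dry": {"label": "heating_off"},
                              "wet": {"label": "ventilate"}}},
    },
}

for conds, label in extract_rules(tree):
    premise = " AND ".join(f"{a} IS {t}" for a, t in conds)
    print(f"IF {premise} THEN {label}")
```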

Algorithm for Dynamic Analysis of Multi-State System by Structure Function

EUROCON 2005 - The International Conference on "Computer as a Tool", 2005

The reliability of the multi-state system is investigated in this paper. A new class of reliability indices is proposed: dynamic reliability indices. These indices estimate the influence of the state of a system component upon the reliability of the multi-state system. Multiple-valued logic mathematical tools are used for the calculation of the dynamic reliability indices.
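
A rough illustration of working with a structure function is sketched below: a toy two-component multi-state system and a simple sensitivity measure that counts how often degrading one component degrades the system state. This only loosely mirrors the idea behind dynamic reliability indices; the exact definitions from the paper are not reproduced, and the series-like structure function is an assumption.

```python
# Toy multi-state system described by a structure function, plus a simple
# degradation-sensitivity measure per component (illustrative only).
from itertools import product

M = 3  # both components and the system take states 0, 1, 2

def phi(x1, x2):
    """Structure function of a toy 2-component multi-state system (series-like)."""
    return min(x1, x2)

def degradation_sensitivity(component):
    """Fraction of state vectors where dropping `component` by one state drops the system state."""
    affected = total = 0
    for x in product(range(M), repeat=2):
        if x[component] == 0:
            continue
        total += 1
        y = list(x)
        y[component] -= 1
        if phi(*y) < phi(*x):
            affected += 1
    return affected / total

print(degradation_sensitivity(0), degradation_sensitivity(1))
```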

Various Approaches Proposed for Eliminating Duplicate Data in a System

Communications - Scientific letters of the University of Zilina

The growth of the big data processing market has led to an increase in the overload of computation data centers, a change of the methods used in storing the data, in the communication between the computing units, and in the computational time needed to process or edit the data. Methods of distributed or parallel data processing brought new problems related to computations with data which need to be examined. Unlike conventional cloud services, a tight connection between the data and the computations is one of the main characteristics of big data services. The computational tasks can be done only if the relevant data are available. Three factors which influence the speed and efficiency of data processing are data duplicity, data integrity, and data security. We are motivated to study the problems related to the growing time needed for data processing by optimizing these three factors in geographically distributed data centers.
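
As a minimal illustration of the data-duplicity factor, the sketch below removes duplicate records by content hashing; this is one common technique, not necessarily one of the approaches compared in the paper, and the record fields are invented.

```python
# Content-hash based deduplication of records (illustrative sketch).
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Stable hash of a record's content (key order normalized)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        fp = record_fingerprint(rec)
        if fp not in seen:          # keep the first occurrence only
            seen.add(fp)
            unique.append(rec)
    return unique

data = [{"id": 1, "city": "Zilina"}, {"city": "Zilina", "id": 1}, {"id": 2, "city": "Kosice"}]
print(deduplicate(data))            # the second record is detected as a duplicate
```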

Efficiency of the relational database tuple access

2019 IEEE 15th International Scientific Conference on Informatics

Relational databases form the core part of data management in current information systems. The amount of data is still rising and the structure is more and more complicated and evolving. If the data are bordered by the time spectrum, the problem is even deeper. This paper deals with the relational database system architecture and proposes our own techniques for optimizing data location and access to the tuples so that relevant data can be obtained effectively in a proper time. Our proposed solution is based on limiting the impact of the necessity to scan the whole table. Thanks to that, the performance is significantly improved, since in our solution an index approach can always be used, as each data row is delimited by the primary key definition.
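
The sketch below (using SQLite as a stand-in relational engine, with an invented table) illustrates the contrast the paper builds on: a predicate on a non-key column forces a full table scan, while a lookup through the primary key is resolved by index access.

```python
# Full table scan vs. primary-key access, shown via SQLite's query plans.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sensor_data (id INTEGER PRIMARY KEY, measured_at TEXT, value REAL)")
con.executemany("INSERT INTO sensor_data VALUES (?, ?, ?)",
                [(i, f"2019-01-{i % 28 + 1:02d}", i * 0.1) for i in range(1, 10001)])

# Predicate on a non-indexed column: the whole table must be scanned.
print(con.execute("EXPLAIN QUERY PLAN SELECT * FROM sensor_data WHERE value = 42.0").fetchall())

# Lookup through the primary key: resolved by index/rowid access instead.
print(con.execute("EXPLAIN QUERY PLAN SELECT * FROM sensor_data WHERE id = 420").fetchall())
```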

Grid Computing for Complex Problems GCCP 2008 BOOK OF ABSTRACTS

The European Grid Initiative (EGI) represents an effort to realize a sustainable grid infrastructure in Europe and beyond. Based on the requirements of the user communities and by combining the strengths and views of the National Grid Initiatives (NGI), EGI is expected to deliver the next step towards a permanent and common grid infrastructure. The effort is currently driven by the EGI Design Study, an FP7-funded project defining the structure and functionality of the future EGI and providing support to the NGIs in their evolution. The goal is the setup of an organizational model, with the EGI Organization (EGI.org) as the glue between the national efforts, which provides seamless access to grid resources for all application domains.

Grid Computing for Complex Problems GCCP 2009 BOOK OF ABSTRACTS

The ultimate goal of quantum chemistry is a priori prediction. Despite indubitable achievements, precise predictions of molecular energies and properties still represent a challenge to method developers. This lecture is an attempt to introduce the world of highly accurate ab initio calculations of small molecules from the perspective of the author's contribution to it and from the perspective of the programmer. A link to astrophysical applications is outlined via an example of the full nine-dimensional potential energy surface of the H2O-H2 interaction, which resulted from a combination of about 1000 intensive calculations on supercomputers with almost half a million less intensive calculations spread over a grid environment.

Léo GERVILLE-RÉACHE, Mikhail NIKULIN: On Statistical Modelling in Accelerated Life Testing

Cost-effective maintenance with preventive replacement of oldest components

Reducing Data Access Time using Table Partitioning Techniques

2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA), 2020

Data are used in a large number of different fields. Large databases with huge amounts of data come to the fore. They are a very important part of many information systems, from commercial systems, through technical and technological systems, the web, and mobile applications, to the management of scientific data in various fields. Fast access to data is therefore becoming increasingly important today, and great emphasis is placed on improving it. Initially, the data access time may be only slightly increased when working with smaller tasks, but with a larger number of larger tasks, the data access time becomes significantly higher and needs to be reduced. From the point of view of efficiency, it is not appropriate or necessary to access all data, and therefore it is necessary to divide the data into smaller parts and thus create partitions, which facilitate the execution of certain operations and bring efficiency, whether in terms of time or performance. This paper discusses partitioning, its various techniques and methods, and the benefits it brings. It compares access times to tables with and without partitions, regarding various numbers of table parts that are accessed.
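
A minimal sketch of the idea follows, assuming a hand-rolled range partitioning by year (SQLite has no declarative partitioning, so one physical table per year stands in for a partition); the table names and the yearly split are invented for the example.

```python
# Emulated range partitioning: route rows to per-year tables and query only the needed one.
import sqlite3

con = sqlite3.connect(":memory:")
years = (2018, 2019, 2020)
for y in years:
    con.execute(f"CREATE TABLE orders_{y} (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)")

def insert_order(order_id, order_date, amount):
    """Route each row to the partition matching its year (pruning on write)."""
    year = int(order_date[:4])
    con.execute(f"INSERT INTO orders_{year} VALUES (?, ?, ?)", (order_id, order_date, amount))

insert_order(1, "2019-03-14", 120.0)
insert_order(2, "2020-07-01", 75.5)

def total_for_year(year):
    """A query restricted to one year touches only that partition, not all rows."""
    return con.execute(f"SELECT COALESCE(SUM(amount), 0) FROM orders_{year}").fetchone()[0]

print(total_for_year(2020))
```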

Hospital Information System

Encyclopedia of Public Health

Visual support of workflow composition involving collaboration

The European Grid Initiative (EGI) represents an effort to realize a sustainable grid infrastructure in Europe and beyond. Based on the requirements of the user communities and by combining the strengths and views of the National Grid Initiatives (NGI), EGI is expected to deliver the next step towards a permanent and common grid infrastructure. The effort is currently driven by the EGI Design Study, an FP7-funded project defining the structure and functionality of the future EGI and providing support to the NGIs in their evolution. The goal is the setup of an organizational model, with the EGI Organization (EGI.org) as the glue between the national efforts, which provides seamless access to grid resources for all application domains.

Unified data access framework for integrated systems

In this paper we introduce the concept of a unified data access framework. The main aim of our work is to allow unified data access at the international level for educational, commercial, and security purposes. The idea of unified access is important nowadays mostly for national and international security, international labor policy, market restrictions, and disease prevention.

Building the Unified Data Access Framework

In this paper we introduce the concept of a unified data access framework. The main aim of our work is to allow unified data access at the international level for educational, commercial, and security purposes. The idea of unified access is important nowadays mostly for national and international security, international labor policy, market restrictions, and disease prevention.

Placement fragments of the distributed database

The article deals with the distribution of database fragments on the basis of a mathematical model and a criterion function covering the influence of transaction processing and of the parallelism of the database system. It also addresses variants (according to the definition of the constraints) with respect to the replication of database fragments. This solution makes it possible to revise the distribution on the basis of the actual values contained in the database.
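
As an illustration of placing fragments by minimizing a criterion function, the sketch below assigns each fragment to the node with the lowest remote-access cost; the cost model (access frequency times inter-node transfer cost) and all names are invented stand-ins for the mathematical model and criterion function used in the article.

```python
# Toy fragment placement: pick, for each fragment, the node minimizing a simple cost criterion.

# access[fragment][node]: how often transactions running at `node` access `fragment`
access = {"F1": {"A": 90, "B": 10}, "F2": {"A": 20, "B": 70}}
# transfer[from_node][to_node]: relative cost of shipping data between nodes
transfer = {"A": {"A": 0, "B": 5}, "B": {"A": 5, "B": 0}}

def placement_cost(fragment, node):
    """Total remote-access cost if `fragment` is stored (without replication) at `node`."""
    return sum(freq * transfer[origin][node] for origin, freq in access[fragment].items())

placement = {f: min(transfer, key=lambda n: placement_cost(f, n)) for f in access}
print(placement)   # e.g. {'F1': 'A', 'F2': 'B'}
```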
