Dante Gallegos | Universidad del Valle de Mexico

Papers by Dante Gallegos

Research paper thumbnail of A containerized distributed processing platform for autonomous surface vehicles: preliminary results for marine litter detection

2023 31st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)

Research paper thumbnail of Blockchain-based schemes for continuous verifiability and traceability of IoT data

2023 31st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)

Research paper thumbnail of On the building of efficient self-adaptable health data science services by using dynamic patterns

Future Generation Computer Systems

Research paper thumbnail of SecMesh: An efficient information security method for stream processing in edge-fog-cloud

2022 7th International Conference on Cloud Computing and Internet of Things

Research paper thumbnail of CD/CV: Blockchain-based schemes for continuous verifiability and traceability of IoT data for edge–fog–cloud

Information Processing & Management

Research paper thumbnail of On the Efficient Delivery and Storage of IoT Data in Edge–Fog–Cloud Environments

Sensors

Cloud storage has become a keystone for organizations to manage large volumes of data produced by sensors at the edge as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on any of the edge, the fog, or the cloud leads to delays that are observed by end-users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge–fog–cloud environments. In our proposal, entities called data containers are coupled, in a logical manner, with nano/microservices deployed on any of the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system including storage levels such as in-memory, file system, and cloud services for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machin...
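The hierarchical cache described above can be sketched as a tiered store that checks the fastest level first and promotes data on read. This is an illustrative sketch, not the paper's implementation: the `TieredStore` class and its method names are invented for the example, and the cloud level is modeled as a plain dict.

```python
import os
import tempfile

class TieredStore:
    """Three-level store: in-memory dict, then local files, then a
    stand-in 'cloud' backend (a plain dict in this sketch)."""

    def __init__(self, cache_dir, cloud):
        self.mem = {}               # level 1: in-memory
        self.cache_dir = cache_dir  # level 2: file system
        self.cloud = cloud          # level 3: cloud backend (stub)

    def _path(self, key):
        return os.path.join(self.cache_dir, key)

    def put(self, key, data: bytes):
        # Write through every level so reads can hit the fastest one.
        self.mem[key] = data
        with open(self._path(key), "wb") as f:
            f.write(data)
        self.cloud[key] = data

    def get(self, key) -> bytes:
        if key in self.mem:                  # fastest hit
            return self.mem[key]
        if os.path.exists(self._path(key)):  # local file hit
            with open(self._path(key), "rb") as f:
                data = f.read()
        else:                                # fall back to the cloud
            data = self.cloud[key]
        self.mem[key] = data                 # promote on read
        return data
```

A read that misses the in-memory level is served from the file system (or the cloud) and promoted back to memory, which is the behavior that makes the I/O transparent to the services using the container.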

Research paper thumbnail of On the building of self-adaptable systems to efficiently manage medical data

2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)

Research paper thumbnail of PuzzleMesh: A Puzzle Model to Build Mesh of Agnostic Services for Edge-Fog-Cloud

IEEE Transactions on Services Computing

Research paper thumbnail of SeRSS: a storage mesh architecture to build serverless reliable storage services

2022 30th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP)

Cloud storage has been the solution for organizations to manage the exponential growth of data observed over the past few years. However, end-users still suffer from the side-effects of cloud service outages, which particularly affect edge-fog-cloud environments. This paper presents SeRSS, a storage mesh architecture to create and operate reliable, configurable, and flexible serverless storage services for heterogeneous infrastructures. A case study was conducted based on the on-the-fly building of storage services to manage medical imagery. The experimental evaluation revealed the efficiency of SeRSS to manage and store data in a reliable manner on heterogeneous infrastructures.
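The reliability goal can be illustrated with a minimal replication sketch: write every object to several storage backends and read from the first one that still holds it, so a single outage does not block access. The `ReplicatedStore` name and the dict-based backends are assumptions for the example, not SeRSS's actual API.

```python
class ReplicatedStore:
    """Write each object to every backend; read from the first backend
    that still holds it, so one backend outage does not lose access."""

    def __init__(self, backends):
        self.backends = list(backends)  # e.g. edge, fog, and cloud stores

    def put(self, key, data):
        for b in self.backends:
            b[key] = data

    def get(self, key):
        for b in self.backends:
            if key in b:
                return b[key]
        raise KeyError(key)
```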

Research paper thumbnail of Internet of Things orchestration using DagOn* workflow engine

2019 IEEE 5th World Forum on Internet of Things (WF-IoT)

The increasing number of connected tiny, low-power, embedded devices, grouped under the generic definition of the "Internet of Things" (IoT), has remarkably raised the amount of in-situ collected data. However, at the time of writing, IoT devices have limited storage and computation resources compared with a cloud computing or on-premises infrastructure. IoT devices often suffer from reduced connectivity due to the place of deployment or other technical, environmental, or economic reasons. In this work, we present the DagOn* workflow engine as part of an IoT orchestration scenario oriented to operational environmental prediction. Our novel approach is devoted to joining the two worlds of workflows, in which each task runs on a dynamically allocated computational infrastructure, with tiny jobs targeted to embedded devices hosting sensors and actuators. We show our preliminary results applied to a demonstration use case. We are confident that further development of the proposed technology will positively affect production applications for massive and geographically distributed data collection.

Research paper thumbnail of FedFlow: a federated platform to build secure sharing and synchronization services for health dataflows

Research paper thumbnail of A Data Preparation Approach for Cloud Storage Based on Containerized Parallel Patterns

Internet and Distributed Computing Systems, 2019

In this paper, we present the design, implementation, and evaluation of an efficient data preparation and retrieval approach for cloud storage. The approach includes a deduplication subsystem that indexes the hash of each content to identify duplicated data. As a consequence, avoiding duplicated content reduces reprocessing time during uploads and other costs related to outsourced data management tasks. Our proposed data preparation scheme enables organizations to add properties such as security, reliability, and cost-efficiency to their contents before sending them to the cloud. It also creates recovery schemes for organizations to share preprocessed contents with partners and end-users. The approach also includes an engine that encapsulates preprocessing applications into virtual containers (VCs) to create parallel patterns that improve the efficiency of the data preparation and retrieval process. In a case study, real repositories of satellite images and organizational files were prepared to be migrated to the cloud by using processes such as compression, encryption, encoding for fault tolerance, and access control. The experimental evaluation revealed the feasibility of using a data preparation approach for organizations to mitigate risks that could still arise in the cloud. It also revealed the efficiency of the deduplication process to reduce data preparation tasks and the efficacy of parallel patterns to improve the end-user service experience.
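The deduplication idea (index the hash of each content so identical payloads are prepared and uploaded only once) can be sketched as follows. The `DedupIndex` class is hypothetical and keeps everything in memory for illustration; a real system would persist the index and the blobs in a durable backend.

```python
import hashlib

class DedupIndex:
    """Index contents by their SHA-256 digest; identical payloads are
    stored (and later prepared/uploaded) only once."""

    def __init__(self):
        self.blobs = {}   # digest -> content
        self.names = {}   # logical name -> digest

    def add(self, name, data: bytes) -> bool:
        digest = hashlib.sha256(data).hexdigest()
        self.names[name] = digest
        if digest in self.blobs:
            return False  # duplicate: skip re-preparation and upload
        self.blobs[digest] = data
        return True       # new content: must be prepared

    def get(self, name) -> bytes:
        return self.blobs[self.names[name]]
```

The boolean returned by `add` is what lets the pipeline skip compression, encryption, and upload for content it has already seen.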

Research paper thumbnail of A containerized service for clustering and categorization of weather records in the cloud

2018 8th International Conference on Computer Science and Information Technology (CSIT), 2018

This paper presents a containerized service for clustering and categorization of weather records in the cloud. This service considers a scheme of microservices and containers for organizations and end-users to manage/process weather records from acquisition, passing through the preprocessing and processing stages, to the exhibition of results. In this service, a specialized crawler acquires records that are delivered to a microservice for distributed categorization of weather records, which performs clustering of the acquired data (temperature and precipitation) by spatiotemporal parameters. The clusters found are exhibited on a map by a geoportal, where a statistics microservice also produces regression graphs on-the-fly. To evaluate the feasibility of this service, a case study based on 33 years of daily records captured by the Mexican weather station network (EMAS-CONAGUA) was conducted. Lessons learned in this study about the performance of record acquisition, clustering processing, and mapping exhibition are described in this paper. Examples of the utilization of this service revealed that end-users can analyze weather parameters in an efficient, flexible, and automatic manner.
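Clustering by spatiotemporal parameters can be approximated by binning each record into a latitude/longitude grid cell and a month. This is a hedged sketch: the function name, the field names, and the one-degree cell size are assumptions, not the service's actual interface.

```python
from collections import defaultdict

def cluster_records(records, cell_deg=1.0):
    """Group weather records into clusters keyed by a lat/lon grid
    cell and the month ('YYYY-MM') of the observation."""
    clusters = defaultdict(list)
    for rec in records:
        key = (int(rec["lat"] // cell_deg),
               int(rec["lon"] // cell_deg),
               rec["date"][:7])  # 'YYYY-MM'
        clusters[key].append(rec)
    return dict(clusters)
```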

Research paper thumbnail of An Interoperable Cloud Based Geoportal for Discovery and Managment of Earth Observation Products

Computer Science & Information Technology, 2018

This paper presents the design and development of an interoperable geoportal service for the discovery and management of earth observation products (EOPs). In this service, the geoportal components are encapsulated into virtual containers that are launched into the cloud by using a microservice scheme to solve issues such as interoperability (with other systems) and implementation (over different platforms). A search microservice that analyses the preferences of end-users (settings of spatiotemporal and polygon shapes) and builds clusters of users sharing preferences was included in the geoportal for recommending/delivering, in advance, products matching end-user preferences. The geoportal service also enables end-users to organize EOPs on-the-fly by using spatiotemporal parameters. A prototype of this service was implemented in a private cloud and connected to a satellite imagery repository of an antenna (ERIS) managed by the Mexican Space Agency in a proof of concept. Lessons learned and performance assessments are described through an experimental evaluation with real users' participation.

Research paper thumbnail of A Microservice-Based Building Block Approach for Scientific Workflow Engines: Processing Large Data Volumes with DagOnStar

2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 2019

The impact of machine learning algorithms on everyday life is overwhelming, up to the novel concept of datacracy as a new social paradigm. In the field of computational environmental science and, in particular, in large-scale data science applications to natural resources management, this kind of approach could make the difference between species surviving potential extinction and compromised ecological niches. In this scenario, the use of high-throughput workflow engines enabling the management of complex data flows in production is rock solid, as demonstrated by the rise of recent tools such as Parsl and DagOnStar. Nevertheless, the availability of dedicated computational resources, although mitigated by the use of cloud computing technologies, could be a remarkable limitation. In this paper, we present a novel and improved version of DagOnStar enabling the execution of lightweight but recurring computational tasks on a microservice architecture. We present our preliminary results motivating our choices, supported by some evaluations and a real-world use case.

Research paper thumbnail of An efficient pattern-based approach for workflow supporting large-scale science: The DagOnStar experience

Future Generation Computer Systems, 2021

Workflow engines are commonly used to orchestrate large-scale scientific computations such as, but not limited to, weather, climate, natural disasters, food safety, and territorial management. However, implementing, managing, and executing real-world scientific applications in the form of workflows on multiple infrastructures (servers, clusters, cloud) remains a challenge. In this paper, we present DagOnStar (Directed Acyclic Graph On Anything), a lightweight Python library implementing a workflow paradigm based on parallel patterns that can be executed on any combination of local machines, on-premise high performance computing clusters, containers, and cloud-based virtual infrastructures. DagOnStar is designed to minimize data movement to reduce the application storage footprint. A case study based on a real-world application is explored to illustrate the use of this novel workflow engine: a containerized weather data collection application deployed on multiple infrastructures. An experimental comparison with other state-of-the-art workflow engines shows that DagOnStar can run workflows on multiple types of infrastructure with an improvement of 50.19% in run time when using a parallel pattern with eight task-level workers.
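The parallel pattern with task-level workers can be illustrated by a level-by-level DAG executor: tasks whose dependencies have all completed run concurrently on a worker pool. This is a generic sketch of the pattern, not DagOnStar's actual API; task names and the `run_dag` helper are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps, workers=8):
    """Execute a DAG level by level: every task whose dependencies are
    all done runs concurrently on a pool of task-level workers.

    tasks: dict of name -> zero-argument callable
    deps:  dict of name -> set of names it depends on
    """
    done, results = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            ready = [t for t in tasks
                     if t not in done and deps.get(t, set()) <= done]
            if not ready:
                raise ValueError("cycle or unsatisfiable dependency")
            # Run the whole ready level in parallel.
            for t, r in zip(ready, pool.map(lambda t: tasks[t](), ready)):
                results[t] = r
            done.update(ready)
    return results
```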

Research paper thumbnail of On the Continuous Processing of Health Data in Edge-Fog-Cloud Computing by Using Micro/Nanoservice Composition

IEEE Access, 2020

The edge, the fog, the cloud, and even end-users' devices play a key role in the management of the sensitive health content/data lifecycle. However, the creation and management of solutions including multiple applications executed by multiple users in multiple environments (the edge, the fog, and the cloud) to process multiple health repositories while, at the same time, fulfilling non-functional requirements (NFRs) represents a complex challenge for health care organizations. This paper presents the design, development, and implementation of an architectural model to create, on-demand, edge-fog-cloud processing structures to continuously handle big health data and, at the same time, to execute services for fulfilling NFRs. In this model, constructive and modular blocks, implemented as microservices and nanoservices, are recursively interconnected to create edge-fog-cloud processing structures as infrastructure-agnostic services. Continuity schemes create dataflows through the blocks of edge-fog-cloud structures and enforce, in an implicit manner, the fulfillment of NFRs for data arriving and departing to/from each block of each edge-fog-cloud structure. To show the feasibility of this model, a prototype was built and evaluated in a case study based on the processing of health data for supporting critical decision-making procedures in remote patient monitoring. This study considered scenarios where end-users and medical staff received insights discovered when processing electrocardiograms (ECGs) produced by sensors in wireless IoT devices, as well as scenarios where physicians received patient records (spirometry studies, ECGs, and tomography images) and warnings raised when online analyzing and identifying anomalies in the analyzed ECG data.
A scenario where organizations manage multiple simultaneous edge-fog-cloud structures for the processing of health data and contents delivered to internal and external staff was also studied. The evaluation of these scenarios showed the feasibility of applying this model to the building of solutions interconnecting multiple services/applications managing big health data through different environments.
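The implicit enforcement of NFRs around each block can be sketched as function composition: every block's input and output passes through the same NFR services. All names here are hypothetical; the real model composes micro/nanoservices across infrastructures, not Python functions in one process.

```python
def make_structure(stages, nfrs=()):
    """Chain processing blocks into one structure; each NFR service
    (e.g. a hypothetical auditing or encryption wrapper) runs
    implicitly on data arriving at and departing from every block."""
    def run(data):
        for stage in stages:
            for nfr in nfrs:      # enforced on arrival
                data = nfr(data)
            data = stage(data)
            for nfr in nfrs:      # enforced on departure
                data = nfr(data)
        return data
    return run
```

The key point of the pattern is that the stages themselves never call the NFR services explicitly; the structure builder injects them around every block.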

Research paper thumbnail of A Federated Content Distribution System to Build Health Data Synchronization Services

2021 29th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), 2021

In organizational environments, such as hospitals, data have to be processed, preserved, and shared with other organizations in a cost-efficient manner. Moreover, organizations have to accomplish different mandatory non-functional requirements imposed by the laws, protocols, and norms of each country. In this context, this paper presents a Federated Content Distribution System to build infrastructure-agnostic health data synchronization services. In this federation, each hospital manages local and federated services based on a pub/sub model. The local services manage users and contents (i.e., medical imagery) inside the hospital, whereas federated services allow the cooperation of different hospitals sharing resources and data. Data preparation schemes were implemented to add non-functional requirements to data. Moreover, data published in the content distribution system are automatically synchronized to all users subscribed to the catalog where the content was published.
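The pub/sub synchronization can be sketched with a minimal catalog: publishing an item delivers it to the inbox of every user subscribed to that catalog. The `Catalog` class and its fields are illustrative assumptions, not the federation's actual components.

```python
from collections import defaultdict

class Catalog:
    """Minimal pub/sub catalog: publishing a content item pushes it to
    the inbox of every subscriber of that catalog."""

    def __init__(self):
        self.subscribers = defaultdict(set)  # catalog -> set of users
        self.inboxes = defaultdict(list)     # user -> delivered items

    def subscribe(self, user, catalog):
        self.subscribers[catalog].add(user)

    def publish(self, catalog, item):
        for user in self.subscribers[catalog]:
            self.inboxes[user].append(item)
```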

Research paper thumbnail of From the edge to the cloud: A continuous delivery and preparation model for processing big IoT data

Simulation Modelling Practice and Theory, 2020

This paper presents a processing model for big IoT data. The model includes a continuous delivery scheme based on building blocks for constructing software pipelines from the edge to the cloud. It also includes a data preparation scheme based on parallel patterns for establishing, in an efficient manner, controls over the production and consumption of IoT data. This scheme adds data properties such as cost-efficient storage, security, and reliability, which are useful to avoid alterations in data and repudiation situations as well as to mitigate risks that still arise in the cloud, such as confidentiality violations and service outages. An overlay structure, including planes such as pub/sub, control, and preservation, integrates the proposed schemes into software pipelines. The proposed model was developed in both a prototype and a simulator of software pipelines. Case studies were conducted based on pipeline services deployed from the edge, passing through the fog to the cloud, for processing and managing real climate data repositories produced by three different sensor data sources: ground stations deployed in Mexico and Spain, as well as small distributed IoT devices. Information sharing patterns for end-users to retrieve raw and/or processed IoT data were also studied. The experimental evaluation revealed the feasibility of using the continuous delivery scheme to create dataflows from the edge to the cloud, the efficacy of the overlay structure to create information sharing patterns, and the efficiency of data preparation schemes and parallel patterns to improve the end-user service experience in comparison with traditional state-of-the-art solutions.

Research paper thumbnail of A containerized distributed processing platform for autonomous surface vehicles: preliminary results for marine litter detection

2023 31st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)

Research paper thumbnail of Blockchain-based schemes for continuous verifiability and traceability of IoT data

2023 31st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)

Research paper thumbnail of On the building of efficient self-adaptable health data science services by using dynamic patterns

Future Generation Computer Systems

Research paper thumbnail of SecMesh: An efficient information security method for stream processing in edge-fog-cloud

2022 7th International Conference on Cloud Computing and Internet of Things

Research paper thumbnail of CD/CV: Blockchain-based schemes for continuous verifiability and traceability of IoT data for edge–fog–cloud

Information Processing & Management

Research paper thumbnail of On the Efficient Delivery and Storage of IoT Data in Edge–Fog–Cloud Environments

Sensors

Cloud storage has become a keystone for organizations to manage large volumes of data produced by... more Cloud storage has become a keystone for organizations to manage large volumes of data produced by sensors at the edge as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographic distributed systems deployed on any of the edge, the fog, or the cloud, leads to delays that are observed by end-users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Thing (IoT) data in edge–fog–cloud environments. In our proposal, entities called data containers are coupled, in a logical manner, with nano/microservices deployed on any of the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system including storage levels such as in-memory, file system, and cloud services for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machin...

Research paper thumbnail of On the building of self-adaptable systems to efficiently manage medical data

2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)

Research paper thumbnail of PuzzleMesh: A Puzzle Model to Build Mesh of Agnostic Services for Edge-Fog-Cloud

IEEE Transactions on Services Computing

Research paper thumbnail of SeRSS: a storage mesh architecture to build serverless reliable storage services

2022 30th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP)

Cloud storage has been the solution for organizations to manage the exponential growth of data ob... more Cloud storage has been the solution for organizations to manage the exponential growth of data observed over the past few years. However, end-users still suffer from side-effects of cloud service outages, which particularly affect edge-fog-cloud environments. This paper presents SeRSS, a storage mesh architecture to create and operate reliable, configurable, and flexible serverless storage services for heterogeneous infrastructures. A case study was conducted based on-the-fly building of storage services to manage medical imagery. The experimental evaluation revealed the efficiency of SeRSS to manage and store data in a reliable manner in heterogeneous infrastructures.

Research paper thumbnail of Internet of Things orchestration using DagOn* workflow engine

2019 IEEE 5th World Forum on Internet of Things (WF-IoT)

The increasing of connected tiny, low-power, embedded devices, grouped into the generic definitio... more The increasing of connected tiny, low-power, embedded devices, grouped into the generic definition of "Internet of Things" (IoT), raised remarkably the amount of in-situ collected data. However, at the time of writing, IoT devices have limited storage and computation resources if compared with a cloud computing or on-premises infrastructure. IoT devices often suffer for reduced connectivity due to the place of the deployment or other technical, environmental or economic reasons. In this work, we present the DagOn* workflow engine as a part of an IoT orchestration scenario oriented to operational environmental prediction. Our novel approach is devoted to join the two worlds of workflows, in which each task runs on a dynamically allocated computational infrastructure, with tiny jobs targeted to embedded devices hosting sensors and actuators. We show our preliminary results applied to a demonstration use case. We are confident that further development of the proposed technology will affect positively production applications for massive and geographically distributed data collection.

Research paper thumbnail of FedFlow: a federated platform to build secure sharing and synchronization services for health dataflows

Research paper thumbnail of A Data Preparation Approach for Cloud Storage Based on Containerized Parallel Patterns

Internet and Distributed Computing Systems, 2019

In this paper, we present the design, implementation, and evaluation of an efficient data prepara... more In this paper, we present the design, implementation, and evaluation of an efficient data preparation and retrieval approach for cloud storage. The approach includes a deduplication subsystem that indexes the hash of each content to identify duplicated data. As a consequence, avoiding duplicated content reduces reprocessing time during uploads and other costs related to outsource data management tasks. Our proposed data preparation scheme enables organizations to add properties such as security, reliability, and cost-efficiency to their contents before sending them to the cloud. It also creates recovery schemes for organizations to share preprocessed contents with partners and endusers. The approach also includes an engine that encapsulates preprocessing applications into virtual containers (VCs) to create parallel patterns that improve the efficiency of data preparation retrieval process. In a study case, real repositories of satellite images, and organizational files were prepared to be migrated to the cloud by using processes such as compression, encryption, encoding for fault tolerance, and access control. The experimental evaluation revealed the feasibility of using a data preparation approach for organizations to mitigate risks that still could arise in the cloud. It also revealed the efficiency of the deduplication process to reduce data preparation tasks and the efficacy of parallel patterns to improve the end-user service experience.

Research paper thumbnail of A containerized service for clustering and categorization of weather records in the cloud

2018 8th International Conference on Computer Science and Information Technology (CSIT), 2018

This paper presents a containerized service for clustering and categorization of weather records ... more This paper presents a containerized service for clustering and categorization of weather records in the cloud. This service considers a scheme of microservices and containers for organizations and end-users to manage/process weather records from the acquisition, passing through the prepossessing and processing stages, to the exhibition of results. In this service, a specialized crawler acquires records that are delivered to a microservice of distributed categorization of weather records, which performs clustering of acquired data (the temperature and precipitation) by spatiotemporal parameters. The clusters found are exhibited in a map by a geoportal where statistic microservice also produce results regression graphs on-the-fly. To evaluate the feasibility of this service, a case study based on 33 years of daily records captured by the Mexican weather station network (EMAS-CONAGUA) has been conducted. Lessons learned in this study about the performance of record acquisition, clustering processing, and mapping exhibition are described in this paper. Examples of utilization of this service revealed that end-users can analyze weather parameters in an efficient, flexible and automatic manner.

Research paper thumbnail of An Interoperable Cloud Based Geoportal for Discovery and Managment of Earth Observation Products

Computer Science & Information Technology, 2018

This paper presents the design and development of an interoperable geoportal service for discover... more This paper presents the design and development of an interoperable geoportal service for discovery and management of earth observation products (EOPs). In this service, the geoportal components are encapsulated into virtual containers that are launched into the cloud by using a microservice scheme to solve issues such as interoperability (with other systems) and implementation (over different platforms). A search microservice that analyses the preferences of end-users (settings of spatiotemporal and polygon shapes) and builds clusters of users sharing preferences was included into the geoportal for recommending/delivering, in advance, products matching with end-user preferences. The geoportal service also enables end-users to organize EOPs on-the-fly by using spatiotemporal parameters. A prototype of this service was implemented in a private cloud and connected to a satellite imagery repository of an antenna (ERIS) managed by Mexican Space Agency in a proof of concept. Learned lessons and performance assessments are described through an experimental evaluation with real users' participation.

Research paper thumbnail of A Microservice-Based Building Block Approach for Scientific Workflow Engines: Processing Large Data Volumes with DagOnStar

2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 2019

The impact of machine learning algorithms on everyday life is overwhelming until the novel concep... more The impact of machine learning algorithms on everyday life is overwhelming until the novel concept of datacracy as a new social paradigm. In the field of computational environmental science and, in particular, of applications of large data science proof of concept on the natural resources management this kind of approaches could make the difference between species surviving to potential extinction and compromised ecological niches. In this scenario, the use of high throughput workflow engines, enabling the management of complex data flows in production is rock solid, as demonstrated by the rise of recent tools as Parsl and DagOnStar. Nevertheless, the availability of dedicated computational resources, although mitigated by the use of cloud computing technologies, could be a remarkable limitation. In this paper, we present a novel and improved version of DagOnStar, enabling the execution of lightweight but recurring computational tasks on the microservice architecture. We present our preliminary results motivating our choices supported by some evaluations and a real-world use case.

Research paper thumbnail of An efficient pattern-based approach for workflow supporting large-scale science: The DagOnStar experience

Future Generation Computer Systems, 2021

Abstract Workflow engines are commonly used to orchestrate large-scale scientific computations su... more Abstract Workflow engines are commonly used to orchestrate large-scale scientific computations such as, but not limited to weather, climate, natural disasters, food safety, and territorial management. However, to implement, manage, and execute real-world scientific applications in the form of workflows on multiple infrastructures (servers, clusters, cloud) remains a challenge. In this paper, we present DagOnStar ( D irected A cyclic G raph On Anything ), a lightweight Python library implementing a workflow paradigm based on parallel patterns that can be executed on any combination of local machines, on-premise high performance computing clusters, containers, and cloud-based virtual infrastructures. DagOnStar is designed to minimize data movement to reduce the application storage footprint. A case study based on a real-world application is explored to illustrate the use of this novel workflow engine: a containerized weather data collection application deployed on multiple infrastructures. An experimental comparison with other state-of-the-art workflow engines shows that DagOnStar can run workflows on multiple types of infrastructure with an improvement of 50.19% in run time when using a parallel pattern with eight task-level workers.

Research paper thumbnail of On the Continuous Processing of Health Data in Edge-Fog-Cloud Computing by Using Micro/Nanoservice Composition

IEEE Access, 2020

The edge, the fog, the cloud, and even the end-user's devices play a key role in the management of the lifecycle of sensitive health content/data. However, the creation and management of solutions including multiple applications executed by multiple users in multiple environments (the edge, the fog, and the cloud) to process multiple health repositories while, at the same time, fulfilling non-functional requirements (NFRs) represents a complex challenge for health care organizations. This paper presents the design, development, and implementation of an architectural model to create, on demand, edge-fog-cloud processing structures to continuously handle big health data and, at the same time, to execute services fulfilling NFRs. In this model, constructive and modular blocks, implemented as microservices and nanoservices, are recursively interconnected to create edge-fog-cloud processing structures as infrastructure-agnostic services. Continuity schemes create dataflows through the blocks of edge-fog-cloud structures and enforce, in an implicit manner, the fulfillment of NFRs for data arriving at and departing from each block of each edge-fog-cloud structure. To show the feasibility of this model, a prototype was built and evaluated in a case study based on the processing of health data to support critical decision-making procedures in remote patient monitoring. This study considered scenarios where end-users and medical staff received insights discovered when processing electrocardiograms (ECGs) produced by sensors in wireless IoT devices, as well as scenarios where physicians received patient records (spirometry studies, ECGs, and tomography images) and warnings raised when analyzing ECG data online and identifying anomalies.
A scenario where organizations manage multiple simultaneous edge-fog-cloud structures for processing health data and content delivered to internal and external staff was also studied. The evaluation of these scenarios showed the feasibility of applying this model to the building of solutions interconnecting multiple services/applications managing big health data through different environments. INDEX TERMS Big health data, edge-fog-cloud, health-IoT processing, the Internet of Things, microservice architecture.
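The idea of interconnecting modular blocks into a processing structure while implicitly enforcing NFRs can be sketched as follows. This is a simplified illustration under assumed names (`with_integrity`, `compose`), not the paper's actual implementation: each block's output is wrapped with an integrity digest, one example of an implicitly enforced non-functional requirement.

```python
import hashlib
import json

def with_integrity(stage):
    """Wrap a processing block so every outgoing message carries a digest,
    implicitly enforcing one non-functional requirement (integrity)."""
    def wrapped(message):
        payload = stage(message["payload"])
        body = json.dumps(payload, sort_keys=True).encode()
        return {"payload": payload, "digest": hashlib.sha256(body).hexdigest()}
    return wrapped

def compose(stages):
    """Interconnect blocks into an edge-fog-cloud processing structure."""
    def pipeline(payload):
        message = {"payload": payload, "digest": None}
        for stage in stages:
            message = with_integrity(stage)(message)
        return message
    return pipeline

# Hypothetical three-block structure: sense (edge), filter (fog), aggregate (cloud).
edge = lambda p: list(p)                       # pass-through acquisition
fog = lambda p: [x for x in p if x > 0]        # drop invalid readings
cloud = lambda p: {"mean": sum(p) / len(p)}    # analytics

structure = compose([edge, fog, cloud])
result = structure([60, -1, 90])
```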

Research paper thumbnail of A Federated Content Distribution System to Build Health Data Synchronization Services

2021 29th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), 2021

In organizational environments, such as hospitals, data have to be processed, preserved, and shared with other organizations in a cost-efficient manner. Moreover, organizations have to fulfill different mandatory non-functional requirements imposed by the laws, protocols, and norms of each country. In this context, this paper presents a Federated Content Distribution System to build infrastructure-agnostic health data synchronization services. In this federation, each hospital manages local and federated services based on a pub/sub model. The local services manage users and contents (i.e., medical imagery) inside the hospital, whereas federated services allow the cooperation of different hospitals sharing resources and data. Data preparation schemes were implemented to add non-functional requirements to data. Moreover, data published in the content distribution system are automatically synchronized to all users subscribed to the catalog where the content was published.
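The catalog-based pub/sub behavior described here can be sketched minimally. The `Catalog` class and names below are hypothetical, not the system's actual implementation: content published to a catalog is delivered to every subscriber, mirroring how hospitals in the federation synchronize shared medical imagery.

```python
from collections import defaultdict

class Catalog:
    """Minimal pub/sub broker: content published to a named catalog is
    automatically delivered to every subscribed inbox."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # catalog name -> subscriber inboxes

    def subscribe(self, name, inbox):
        self.subscribers[name].append(inbox)

    def publish(self, name, content):
        for inbox in self.subscribers[name]:
            inbox.append(content)

# Hypothetical federation: two hospitals subscribed to the same catalog.
broker = Catalog()
hospital_a, hospital_b = [], []
broker.subscribe("radiology", hospital_a)
broker.subscribe("radiology", hospital_b)
broker.publish("radiology", "ct-scan-001.dcm")
```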

Research paper thumbnail of From the edge to the cloud: A continuous delivery and preparation model for processing big IoT data

Simulation Modelling Practice and Theory, 2020

This paper presents a processing model for big IoT data. The model includes a continuous delivery scheme based on building blocks for constructing software pipelines from the edge to the cloud. It also includes a data preparation scheme based on parallel patterns for establishing, in an efficient manner, controls over the production and consumption of IoT data. This scheme adds data properties such as cost-efficient storage, security, and reliability, which are useful to avoid data alterations and repudiation situations as well as to mitigate risks that still arise in the cloud, such as confidentiality violations and service outages. An overlay structure, including planes such as pub/sub, control, and preservation, integrates the proposed schemes into software pipelines. The proposed model was implemented both as a prototype and as a simulator of software pipelines. Case studies were conducted based on pipeline services deployed from the edge, passing through the fog, to the cloud for processing and managing real climate data repositories produced by three different sensor data sources: ground stations deployed in Mexico and Spain, as well as small distributed IoT devices. Information sharing patterns for end-users to retrieve raw and/or processed IoT data were also studied. The experimental evaluation revealed the feasibility of using the continuous delivery scheme to create dataflows from the edge to the cloud, the efficacy of the overlay structure to create information sharing patterns, and the efficiency of the data preparation schemes and parallel patterns in improving the end-user service experience in comparison with traditional state-of-the-art solutions.
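The combination of a data preparation scheme and a parallel pattern can be sketched as follows. This is a simplified illustration under assumed names (`prepare`, `pipeline`), not the paper's actual implementation: IoT data chunks are compressed concurrently (a farm-style parallel pattern) for cost-efficient storage before being pushed toward the cloud.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def prepare(chunk: bytes) -> bytes:
    """Data-preparation block: compression for cost-efficient storage."""
    return zlib.compress(chunk)

def pipeline(chunks, workers=4):
    """Farm parallel pattern: chunks flowing edge -> fog -> cloud are
    prepared concurrently before reaching cloud storage."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(prepare, chunks))

# Hypothetical IoT batch: three climate records serialized as bytes.
records = [
    b"station=MX01,temp=21.5;" * 10,
    b"station=ES02,temp=18.3;" * 10,
    b"station=IOT3,hum=0.61;" * 10,
]
stored = pipeline(records)
```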