CHINMAYA DEHURY - Academia.edu

Papers by CHINMAYA DEHURY

Securing clustered edge intelligence with blockchain

IEEE Consumer Electronics Magazine, 2022

Data Pipeline Architecture for Serverless Platform

Communications in Computer and Information Science

To provide cost-effective cloud resources with high QoS, the serverless platform was introduced, allowing customers to pay for the exact amount of resources used. In parallel, a number of data management tools have been developed to handle the data from large numbers of IoT sensing devices. Modern data-intensive cloud applications, however, require the power that comes from integrating data management tools with serverless platforms. This paper proposes a novel data pipeline architecture for the serverless platform, providing an environment for developing applications that can be broken into independently deployable, schedulable, scalable, and reusable modules while efficiently managing the flow of data between different environments.
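The composition model the abstract describes can be illustrated with a minimal sketch (not the paper's implementation): hypothetical pipeline stages modeled as independent Python functions, chained so that each module stays individually replaceable and reusable. The stage names and record fields are invented for the example.

```python
# Hypothetical sketch of a serverless-style data pipeline: each stage is an
# independently deployable function; the pipeline only manages the data flow.
from typing import Callable, List

def clean(record: dict) -> dict:
    # Drop readings with missing values (stand-in for a data-cleaning stage).
    return {k: v for k, v in record.items() if v is not None}

def enrich(record: dict) -> dict:
    # Add a derived field (stand-in for a transformation stage).
    record["fahrenheit"] = record["celsius"] * 9 / 5 + 32
    return record

def run_pipeline(stages: List[Callable[[dict], dict]], record: dict) -> dict:
    # A serverless platform would invoke each function on its own trigger;
    # here we simply chain them to show the composition model.
    for stage in stages:
        record = stage(record)
    return record

result = run_pipeline([clean, enrich], {"celsius": 20.0, "noise": None})
print(result)  # {'celsius': 20.0, 'fahrenheit': 68.0}
```

Because each stage takes and returns a plain record, any module can be redeployed or scaled independently without touching the others, which is the property the architecture aims for.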

HPC Cloud traces for better cloud service reliability

This data is in support of the research on "A combined system metrics approach to cloud service reliability using artificial intelligence" (doi: 10.20944/preprints202111.0548.v1).

MUVINE: Multi-Stage Virtual Network Embedding in Cloud Data Centers Using Reinforcement Learning-Based Predictions

IEEE Journal on Selected Areas in Communications, 2020

The recent advances in virtualization technology have enabled the sharing of computing and networking resources of cloud data centers among multiple users. Virtual Network Embedding (VNE) is highly important and an integral part of cloud resource management. The lack of historical knowledge of cloud functioning and the inability to foresee future resource demand are two fundamental shortcomings of traditional VNE approaches, and their consequence is the inefficient embedding of virtual resources on Substrate Nodes (SNs). The application of Artificial Intelligence (AI) in VNE, on the other hand, is still in a premature stage and needs further investigation. Considering the underlying complexity of VNE, which involves numerous parameters, intelligent solutions are required to utilize cloud resources efficiently via careful selection of appropriate SNs. In this paper, a Reinforcement Learning-based prediction model is designed for efficient Multi-stage Virtual Network Embedding (MUVINE) across cloud data centers. The proposed MUVINE scheme is extensively simulated and evaluated against recent state-of-the-art schemes. The simulation outcomes show that MUVINE consistently outperforms the existing schemes and provides promising results.
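The idea of learning which substrate node to select can be sketched with a toy tabular Q-learning loop. This is not the MUVINE algorithm itself: the node names, rewards, and hyperparameters are invented, and a real VNE agent would derive rewards from observed embedding outcomes rather than a fixed table.

```python
# Toy sketch of RL-driven substrate node (SN) selection, not the paper's model.
import random

random.seed(0)
NODES = ["sn0", "sn1", "sn2"]
# Hypothetical reward signal: sn1 usually offers the most spare capacity.
REWARD = {"sn0": 0.2, "sn1": 1.0, "sn2": 0.5}

q = {n: 0.0 for n in NODES}   # estimated value of embedding on each SN
alpha, epsilon = 0.1, 0.2      # learning rate and exploration rate

for _ in range(500):
    # Epsilon-greedy selection of a substrate node.
    if random.random() < epsilon:
        node = random.choice(NODES)
    else:
        node = max(q, key=q.get)
    # One-step value update towards the observed reward.
    q[node] += alpha * (REWARD[node] - q[node])

print(max(q, key=q.get))  # converges to "sn1", the best substrate node
```

The same epsilon-greedy/value-update skeleton underlies most tabular RL selection schemes; the substance of an approach like MUVINE lies in how state, action, and reward are defined over the embedding problem.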

Personalized Service Delivery using Reinforcement Learning in Fog and Cloud Environment

Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, 2019

The ability to fulfil resource demand at runtime is encouraging businesses to migrate to the cloud. Recently, fog computing was introduced to provide real-time cloud services and to save network resources. To further improve the quality of service in the delivery process, Artificial Intelligence is being applied extensively. However, the state of the art in this regard is still immature, as it mainly focuses on either fog or cloud. To address this issue, a novel reinforcement learning-based personalized service delivery (RLPSD) mechanism is proposed in this paper, which allows the service provider to combine the fog and cloud environments while providing the service. RLPSD distributes the user's service requests between fog and cloud, considering the user's constraints (e.g. the distance from fog), thus resulting in personalized service delivery. The proposed RLPSD algorithm is implemented and evaluated in terms of its success rate, percentage of service-request distribution, learning rate, discount factor, etc.
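The constraint-aware splitting of requests between fog and cloud can be pictured with a deliberately simple sketch. It is illustrative only: RLPSD learns the distribution with reinforcement learning, whereas this example uses a fixed distance threshold, and the request format and threshold value are invented.

```python
# Illustrative (non-learning) sketch of distributing service requests between
# fog and cloud based on a per-user distance constraint.
def dispatch(requests, max_fog_distance_km=5.0):
    """Route each request to fog if the user is near a fog node, else cloud."""
    placement = {"fog": [], "cloud": []}
    for req in requests:
        target = "fog" if req["distance_km"] <= max_fog_distance_km else "cloud"
        placement[target].append(req["id"])
    return placement

reqs = [{"id": 1, "distance_km": 2.0},
        {"id": 2, "distance_km": 12.0},
        {"id": 3, "distance_km": 4.5}]
print(dispatch(reqs))  # {'fog': [1, 3], 'cloud': [2]}
```

In the learned setting, the threshold-style rule above would be replaced by a policy that adapts the fog/cloud split per user from feedback on service success.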

A Combined System Metrics Approach to Cloud Service Reliability Using Artificial Intelligence

Big data and cognitive computing, Mar 1, 2022

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

DeF-DReL: Systematic Deployment of Serverless Functions in Fog and Cloud environments using Deep Reinforcement Learning

ArXiv, 2021

Fog computing was introduced by shifting cloud resources towards the users' proximity to mitigate the limitations of cloud computing. The fog environment makes its limited resources available to a large number of users for deploying their serverless applications, each composed of several serverless functions. One of the primary intentions behind introducing the fog environment is to fulfil the demand of latency- and location-sensitive serverless applications through its limited resources. Recent research mainly focuses on assigning maximum resources to such applications from the fog node without taking full advantage of the cloud environment, which negatively impacts the provision of resources to the maximum number of connected users. To address this issue, in this paper we investigate the optimum percentage of a user's request that should be fulfilled by fog and cloud. As a result, we propose DeF-DReL, a systematic deployment of serverless functions in fog and cloud environments using deep reinforcement learning.

DataPipelineExecutionValidation-release-data

Serverless data pipeline approaches for IoT data in fog and cloud computing

Future Generation Computer Systems, 2021

With the increasing number of Internet of Things (IoT) devices, massive amounts of raw data are being generated. The latency, cost, and other challenges of cloud-based IoT data processing have driven the adoption of Edge and Fog computing models, where some data processing tasks are moved closer to the data sources. Properly dealing with the flow of such data requires building data pipelines to control the complete life cycle of data streams, from data acquisition at the source, through edge and fog processing, to cloud-side storage and analytics. Data analytics tasks need to be executed dynamically at different distances from the data sources and often on very heterogeneous hardware. This can be streamlined by the Serverless (or FaaS) cloud computing model, where tasks are defined as virtual functions that can be migrated from edge to cloud (and vice versa) and executed in an event-driven manner on data streams. In this work, we investigate the benefits of building Serverless data pipelines (SDP) for IoT data analytics and evaluate three approaches for designing SDPs: 1) off-the-shelf data flow tool (DFT) based, 2) object storage service (OSS) based, and 3) MQTT based. We applied these strategies to three fog applications (Aeneas, PocketSphinx, and a custom video processing application) and evaluated their performance by comparing processing time (computation time, network communication, and disk access time) and resource utilization. Results show that DFT is unsuitable for compute-intensive applications such as video or image processing, for which OSS is best suited; however, DFT fits bandwidth-intensive applications well due to its minimal use of network resources. The MQTT-based SDP, on the other hand, showed increasing CPU and memory usage as the number of users rose and experienced a drop in data units in the pipeline for PocketSphinx and the custom video processing application, although it performed well for Aeneas, whose data units are small.
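The event-driven execution model behind an MQTT-based SDP can be sketched without a real broker: below, an in-memory queue stands in for an MQTT topic, and a handler function plays the role of a serverless function triggered once per data unit. The topic contents and handler are illustrative, not from the paper's evaluation.

```python
# Sketch of event-driven pipeline execution; queue.Queue stands in for an
# MQTT topic, and on_message() for a serverless function fired per data unit.
import queue

broker = queue.Queue()   # stand-in for an MQTT topic
processed = []

def on_message(payload: bytes) -> None:
    # A function invoked once per published data unit (e.g. one audio chunk).
    processed.append(payload.decode().upper())

# Publish a few data units, then drain the "topic" as a subscriber would.
for unit in [b"frame-1", b"frame-2"]:
    broker.put(unit)
while not broker.empty():
    on_message(broker.get())

print(processed)  # ['FRAME-1', 'FRAME-2']
```

The per-message invocation is what makes small data units (as in Aeneas) cheap, while large units or many concurrent publishers drive up CPU and memory, matching the behaviour reported above.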

TOSCAdata: Modeling data pipeline applications in TOSCA

Journal of Systems and Software, 2021

The serverless platform allows a customer to effectively use cloud resources and pay for the exact amount of resources used. A number of dedicated open source and commercial cloud data management tools are available to handle massive amounts of data, yet such tools are not mature enough to integrate generic cloud applications with the serverless platform, owing to the lack of mature and stable standards. One of the most popular and mature standards, TOSCA (Topology and Orchestration Specification for Cloud Applications), mainly focuses on application and service portability and on automated management of generic cloud application components. This paper proposes TOSCAdata, an extension of the TOSCA standard that focuses on modeling data pipeline-based cloud applications. Addressing the requirements of modern data pipeline cloud applications, TOSCAdata provides a number of TOSCA models that are independently deployable, schedulable, scalable, and reusable, while effectively handling the flow and transformation of data in a pipeline manner. We also demonstrate the applicability of the proposed TOSCAdata models with a web-based cloud application for tourism promotion as a use-case scenario.

An efficient service dispersal mechanism for fog and cloud computing using deep reinforcement learning

2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), 2020

Thousands of high-end physical servers are used to fulfill the huge resource demand of diverse applications and services, ranging from healthcare data analytics to gaming. Network latency, one of the major limitations of cloud computing, is the primary reason for introducing fog computing, which pushes the computing environment towards the edge of the network. The ability to offer computing environments in close proximity to the user's device improves the delivery of high-quality services. The majority of research is devoted to providing high quality of service using either the fog or the cloud environment. In this paper, a novel deep reinforcement learning-based service dispersal approach for fog and cloud computing (DRLSD-FC) is adopted to offer services using both environments simultaneously. Each service request is sliced and dispersed between the nearby fog and cloud environments. By taking advantage of cloud resources, the proposed approach minimizes the workload on the fog environment without compromising service quality. The approach is implemented using the Keras framework, and implementation results show that DRLSD-FC outperforms other related approaches.

Computer Vision in COVID-19: A Study

Impact of AI and Data Science in Response to Coronavirus Pandemic, 2021

Computer vision is a pioneering sub-field of artificial intelligence that enables computers to illuminate and better understand the visual world. In crucial times like COVID-19, computer vision is used to combat the challenges being faced. In the healthcare field, it has been used to enhance the productivity of various departments and to aid the development of the vaccine. Computer vision tasks include methods for acquiring, processing, analysing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information. Recently, all of these tasks have been put to use in developing systematic operations to tackle the challenges of COVID-19. Computer vision is used in thermal scanners; combined with ML, it is used to predict virus spread; it is also used in AI systems in many labs to help develop the vaccine and to analyse factors like whet...

A Combined Metrics Approach to Cloud Service Reliability using Artificial Intelligence

Identifying and anticipating potential failures in the cloud is an effective method for increasing cloud reliability and enabling proactive failure management. Many studies have been conducted to predict potential failures, but none have combined SMART (Self-Monitoring, Analysis, and Reporting Technology) hard drive metrics with other system metrics such as CPU utilisation. We therefore propose a combined-metrics approach to failure prediction based on Artificial Intelligence to improve reliability. We tested data from over 100 cloud servers with four AI algorithms: Random Forest, Gradient Boosting, Long Short-Term Memory, and Gated Recurrent Unit. Our experimental results show the benefits of combining metrics, outperforming the state of the art.
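The core "combined metrics" idea, merging SMART disk attributes with system metrics into one feature vector per server, can be sketched as follows. The attribute names, thresholds, and the deliberately trivial rule are invented; the paper trains AI models (Random Forest, GRU, etc.) on such vectors rather than using a hand-written rule.

```python
# Minimal sketch of combining SMART metrics with system metrics into a single
# feature vector, then applying a toy failure-risk rule in place of a model.
def combine(smart: dict, system: dict) -> dict:
    # Prefix keys so the two metric families stay distinguishable when merged.
    features = {f"smart_{k}": v for k, v in smart.items()}
    features.update({f"sys_{k}": v for k, v in system.items()})
    return features

def risky(features: dict) -> bool:
    # Toy stand-in for a trained model: reallocated sectors plus sustained
    # high CPU utilisation is flagged as a failure precursor.
    return (features["smart_reallocated_sectors"] > 0
            and features["sys_cpu_util"] > 0.9)

f = combine({"reallocated_sectors": 3}, {"cpu_util": 0.95})
print(risky(f))  # True
```

The point of the combination is exactly what the rule hints at: a disk symptom alone or a load spike alone is weak evidence, but their conjunction is a much stronger failure signal for a learned model to exploit.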

Deep Learning Frameworks in Healthcare Systems

RRFT: A Rank-Based Resource Aware Fault Tolerant Strategy for Cloud Platforms

IEEE Transactions on Cloud Computing, 2021

The applications deployed in the cloud to provide services to users encompass a large number of interconnected, dependent cloud components. Multiple identical components are scheduled to run concurrently in order to handle unexpected failures and provide uninterrupted service to the end user, which introduces a resource overhead problem for the cloud service provider. Furthermore, such resource-intensive fault tolerant strategies bring extra monetary overhead to the cloud service provider and, eventually, to the cloud users. To address these issues, a novel fault tolerant strategy based on the significance level of each component is developed. The communication topology among the application components, their historical performance, failure rate, failure impact on other components, and dependencies among them are used to rank the application components and decide on the importance of one component over another. Based on the rank, a Markov Decision Process (MDP) model is presented to determine the number of replicas, which varies from one component to another. A rigorous performance evaluation is carried out against similar component-ranking and fault tolerant strategies using practically useful metrics such as recovery time upon a fault, average number of components needed, and number of parallel components successfully executed. Simulation results demonstrate that the proposed algorithm reduces the required number of virtual and physical machines by approximately 10% and 4.2%, respectively, compared to other similar algorithms.
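The rank-then-replicate idea can be sketched in a few lines: score each component from a subset of the signals listed above, rank them, and give higher-ranked components more replicas. The weights, component data, and the fixed replica rule are invented for illustration; the paper derives replica counts from an MDP rather than a static rule.

```python
# Illustrative sketch of rank-driven replication (not the RRFT MDP itself).
def rank_score(c: dict) -> float:
    # Weighted mix of failure rate, failure impact, and dependent count.
    return 0.5 * c["failure_rate"] + 0.3 * c["impact"] + 0.2 * c["dependents"]

def replicas_for(components: list) -> dict:
    ranked = sorted(components, key=rank_score, reverse=True)
    # Toy rule: top-ranked component gets 3 replicas, next gets 2, rest get 1.
    counts = {}
    for i, c in enumerate(ranked):
        counts[c["name"]] = 3 if i == 0 else 2 if i == 1 else 1
    return counts

comps = [{"name": "db",  "failure_rate": 0.4, "impact": 0.9, "dependents": 5},
         {"name": "web", "failure_rate": 0.2, "impact": 0.3, "dependents": 1},
         {"name": "log", "failure_rate": 0.1, "impact": 0.1, "dependents": 0}]
print(replicas_for(comps))  # {'db': 3, 'web': 2, 'log': 1}
```

The resource saving comes from the variation itself: replicating only significant components avoids running three copies of everything, which is where the reported VM and PM reductions originate.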

Failure Aware Semi-Centralized Virtual Network Embedding in Cloud Computing Fat-Tree Data Center Networks

IEEE Transactions on Cloud Computing, 2020

In Cloud Computing, tenants opting for Infrastructure as a Service (IaaS) send their resource requirements to the Cloud Service Provider (CSP) in the form of a Virtual Network (VN) consisting of a set of interconnected Virtual Machines (VMs). Embedding the VN onto the existing physical network is known as the Virtual Network Embedding (VNE) problem. One major research challenge is to allocate physical resources such that a failure of those resources has less impact on users' services. Another is handling the embedding of a growing number of incoming users' VNs from an algorithm-design point of view. Considering both research issues, a novel Failure-aware Semi-Centralized VNE (FSC-VNE) algorithm is proposed for the Fat-Tree data center network, with the goal of reducing the impact of resource failures on existing users. The impact of failures of Physical Machines (PMs), physical links, and network devices is taken into account while allocating resources. A distinguishing feature of the proposed algorithm is that VMs are assigned to different PMs in a semi-centralized manner: the embedding algorithm is executed by multiple physical servers in order to embed the VMs of a VN concurrently, reducing the embedding time. Extensive simulation results show that the proposed algorithm outperforms other VNE algorithms.

Efficient data and CPU-intensive job scheduling algorithms for healthcare cloud

Computers & Electrical Engineering, 2018

The cloud computing platform is used to improve the operational efficiency of business processes and to provide services to users. The fast-growing healthcare industry is shifting from its traditional business model to a cloud-enabled one, which can fulfill the resource demands of different healthcare applications. Healthcare jobs can vary from a simple patient-record retrieval to a complex biomedical image analysis. Shared, configurable computing, storage, and other resources are provided to users as a service over the Internet on a rental basis. Although the cloud provides a huge amount of resources for healthcare systems to carry out complex, time-consuming, and data-intensive operations, scheduling diverse healthcare applications onto large numbers of physical servers is an evolving issue that needs to be addressed. In this paper, a scheduling framework is designed for the intelligent distribution of healthcare-related jobs based on their types, taking advantage of existing heterogeneous distributed data center management solutions. This loosely coupled architecture works atop existing solutions, and its components can run in parallel on different nodes. The proposed framework can not only handle a variety of jobs but can also recover from accidental crashes.
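Type-based job distribution, the core of the framework described above, can be pictured with a small routing sketch. The job fields, thresholds, and back-end names are hypothetical and stand in for whatever classification the framework actually uses.

```python
# Illustrative sketch of routing healthcare jobs by type: data-heavy jobs go
# to a data-local cluster, compute-heavy jobs to CPU-rich nodes.
def route(job: dict) -> str:
    if job["input_gb"] >= 10:       # data-intensive (e.g. biomedical imaging)
        return "data-cluster"
    if job["cpu_hours"] >= 5:       # CPU-intensive analysis
        return "cpu-cluster"
    return "general-pool"           # light jobs such as record retrieval

jobs = [{"name": "record-lookup", "input_gb": 0.001, "cpu_hours": 0.01},
        {"name": "mri-analysis",  "input_gb": 40,    "cpu_hours": 20}]
print([route(j) for j in jobs])  # ['general-pool', 'data-cluster']
```

A real scheduler would sit above existing data center managers and make this decision from richer job metadata, but the classify-then-dispatch shape is the same.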

LVRM: On the Design of Efficient Link Based Virtual Resource Management Algorithm for Cloud Platforms

IEEE Transactions on Parallel and Distributed Systems, 2018

Virtualization technology elevates the traditional computing concept to cloud computing by introducing Virtual Machines (VMs) on top of Physical Machines (PMs), enabling cloud service providers to share limited computing and network resources among multiple users. Virtual resource mapping can be defined as the process of embedding multiple VMs, together with their network resource demands, onto multiple interconnected PMs. Resource mapping mechanisms need to minimize the number of PMs used without compromising the deadlines of the tasks assigned to the VMs, which is NP-hard. To deal with this problem, a Link-based Virtual Resource Management (LVRM) algorithm is designed to map VMs onto PMs based on the available and required resources of the PMs and VMs, respectively. The algorithm exploits the fact that the bandwidth demanded among VMs should be given higher priority when allocating physical resources to interconnected virtual machines, as insufficient network bandwidth may detain task execution. The proposed algorithm is evaluated with a discrete event simulator and compared with similar virtual network embedding algorithms. Simulation results show that LVRM outperforms them.
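The bandwidth-first heuristic described above can be sketched as a greedy placement over links: serve the VM pairs with the highest bandwidth demand first, so the scarcest resource is committed before fragmentation sets in. The link capacities, demands, and tie-breaking are invented; this is a toy illustration of the priority order, not the LVRM algorithm.

```python
# Toy sketch of link-first placement: map high-bandwidth VM pairs before
# low-bandwidth ones, onto the physical link with the most spare capacity.
def place_by_bandwidth(vm_links, link_capacity):
    """vm_links: {(vm_a, vm_b): demanded_bandwidth};
    link_capacity: {physical_link: available_bandwidth}."""
    assignment = {}
    spare = dict(link_capacity)
    for pair, demand in sorted(vm_links.items(), key=lambda kv: -kv[1]):
        link = max(spare, key=spare.get)       # link with most spare bandwidth
        if spare[link] < demand:
            return None  # insufficient bandwidth would detain task execution
        spare[link] -= demand
        assignment[pair] = link
    return assignment

links = {("vm1", "vm2"): 80, ("vm2", "vm3"): 30}
print(place_by_bandwidth(links, {"pm-link-A": 100, "pm-link-B": 50}))
# {('vm1', 'vm2'): 'pm-link-A', ('vm2', 'vm3'): 'pm-link-B'}
```

Processing demands in descending order is what gives bandwidth its priority: if the 80-unit pair were placed last, neither remaining link could host it.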

DYVINE: Fitness-Based Dynamic Virtual Network Embedding in Cloud Computing

IEEE Journal on Selected Areas in Communications, 2019

Virtual Network Embedding (VNE) is the process of embedding a set of interconnected virtual machines onto a set of interconnected physical servers in the cloud computing environment. The complexity of the VNE problem increases when a large number of virtual machines with a set of resource demands must be embedded onto a network of thousands of physical servers. The key challenge of VNE is the efficient mapping of virtual networks, which may have dynamic resource demands. Existing solutions mainly emphasize the embedding of static virtual networks, resulting in poor resource utilization and very low acceptance rates. To tackle this complexity, a fitness-based DYnamic VIrtual Network Embedding (DYVINE) algorithm is proposed with the goal of maximizing resource utilization by maximizing the acceptance rate. Local and global fitness values of the virtual machines and the virtual network, respectively, are used to utilize the maximum amount of physical resources. The proposed algorithm allows the virtual network to be dynamic, meaning its structure and resource demands can change during execution. Further, to reduce the embedding time in each time slot, a subset of physical servers is selected to host the virtual network instead of considering thousands of servers, which would significantly increase the embedding time. The proposed embedding mechanism is evaluated through extensive simulation, compared with similar existing embedding algorithms, and outperforms them.
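The local/global fitness split can be illustrated with a minimal sketch: a local fitness for each VM-to-server candidate and a global fitness aggregating them over the whole virtual network, used to compare candidate embeddings. The fitness formulas here are invented for illustration and are not the ones defined in the paper.

```python
# Illustrative local/global fitness for comparing candidate embeddings.
def local_fitness(vm_demand: float, server_free: float) -> float:
    # Higher when the server can host the VM with little wasted headroom.
    if server_free < vm_demand:
        return 0.0
    return vm_demand / server_free

def global_fitness(mapping: list) -> float:
    # Product of local fitnesses: one infeasible VM zeroes the whole embedding.
    score = 1.0
    for vm_demand, server_free in mapping:
        score *= local_fitness(vm_demand, server_free)
    return score

tight = [(4, 5), (2, 2)]     # servers closely matched to the VMs' demands
loose = [(4, 16), (2, 8)]    # same VMs spread over much larger servers
print(global_fitness(tight) > global_fitness(loose))  # True
```

Preferring the tightly packed candidate is what drives utilization up: leaving large servers mostly empty wastes capacity that could accept further virtual networks, which is the acceptance-rate argument in the abstract.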

Design and implementation of a novel service management framework for IoT devices in cloud

Journal of Systems and Software, 2016

Highlights: a service management platform for IoT devices in the Cloud is designed; the proposed Cloud platform can serve real-time and non-real-time data efficiently; the framework can provide services to the maximum number of application handlers; Docker containers are used for virtualization to provide Software as a Service. With the advent of new technologies, we are surrounded by several tiny but powerful mobile devices through which we can communicate with the outside world to store and retrieve data from the Cloud. These devices are considered smart objects, as they can sense the medium, collect data, interact with nearby smart objects, and transmit data to the cloud for processing and storage through the Internet. The Internet of Things (IoT) creates an environment for smart homes, health care, and smart business decisions by transmitting data over the Internet. Cloud computing, on the other hand, leverages the capability of IoT by providing computation and storage power to each smart object. Researchers and developers combine the cloud computing environment with IoT to reduce transmission and processing costs in the cloud and to provide better services for processing and storing the real-time data generated by IoT devices. In this paper, a novel framework is designed for the Cloud to manage real-time IoT data and scientific non-IoT data. To demonstrate the services in the Cloud, real experimental results of implementing Docker containers for virtualization are presented, providing Software as a Service (SaaS) in a hybrid cloud environment.

Research paper thumbnail of Securing clustered edge intelligence with blockchain

IEEE Consumer Electronics Magazine, 2022

Research paper thumbnail of Data Pipeline Architecture for Serverless Platform

Communications in Computer and Information Science

To provide cost effective cloud resources with high QoS, serverless platform is introduced that a... more To provide cost effective cloud resources with high QoS, serverless platform is introduced that allows to pay for the exact amount of resource usage. On the other hand, a number of data management tools are developed to handle the data from a large number of IoT sensing devices. However, the modern data-intensive cloud applications require the power that comes from integrating data management tools with serverless platforms. This paper proposes a novel data pipeline architecture for serverless platform for providing an environment to develop applications that can be broken into independently deployable, schedulable, scalable, and re-usable modules and efficiently manage the flow of data between different environments.

Research paper thumbnail of HPC Cloud traces for better cloud service reliability

This data is in support of the research on "A combined system metrics approach to cloud serv... more This data is in support of the research on "A combined system metrics approach to cloud service reliability using artificial intelligence" (doi: 10.20944/preprints202111.0548.v1)

Research paper thumbnail of MUVINE: Multi-Stage Virtual Network Embedding in Cloud Data Centers Using Reinforcement Learning-Based Predictions

IEEE Journal on Selected Areas in Communications, 2020

The recent advances in virtualization technology have enabled the sharing of computing and networ... more The recent advances in virtualization technology have enabled the sharing of computing and networking resources of cloud data centers among multiple users. Virtual Network Embedding (VNE) is highly important and is an integral part of the cloud resource management. The lack of historical knowledge on cloud functioning and inability to foresee the future resource demand are two fundamental shortcomings of the traditional VNE approaches. The consequence of those shortcomings is the inefficient embedding of virtual resources on Substrate Nodes (SNs). On the contrary, application of Artificial Intelligence (AI) in VNE is still in the premature stage and needs further investigation. Considering the underlying complexity of VNE that includes numerous parameters, intelligent solutions are required to utilize the cloud resources efficiently via careful selection of appropriate SNs for the VNE. In this paper, Reinforcement Learning based prediction model is designed for the efficient Multi-stage Virtual Network Embedding (MUVINE) among the cloud data centers. The proposed MUVINE scheme is extensively simulated and evaluated against the recent state-of-the-art schemes. The simulation outcomes show that the proposed MUVINE scheme consistently outperforms over the existing schemes and provides the promising results.

Research paper thumbnail of Personalized Service Delivery using Reinforcement Learning in Fog and Cloud Environment

Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, 2019

The ability to fulfil the resource demand in runtime is encouraging the businesses to migrate to ... more The ability to fulfil the resource demand in runtime is encouraging the businesses to migrate to cloud. Recently, to provide real-time cloud services and to save network resources, fog computing is introduced. To further improve the quality of service in delivery process, Artificial Intelligence is being applied extensively. However, the state-of-the-art in this regard is still immature as it mainly focuses at either fog or cloud. To address this issue, a novel reinforcement learning-based personalized service delivery (RLPSD) mechanism is proposed in this paper, which allows the service provider to combine the fog and cloud environments, while providing the service. RLPSD distributes the user's service requests between fog and cloud, considering the users' constraints (e.g. the distance from fog), thus resulting in personalized service delivery. The proposed RLPSD algorithm is implemented and evaluated in terms of its success rate, percentage of service requests' distribution, learning rate, discount factor, etc.

Research paper thumbnail of A Combined System Metrics Approach to Cloud Service Reliability Using Artificial Intelligence

Big data and cognitive computing, Mar 1, 2022

This article is an open access article distributed under the terms and conditions of the Creative... more This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY

Research paper thumbnail of DeF-DReL: Systematic Deployment of Serverless Functions in Fog and Cloud environments using Deep Reinforcement Learning

ArXiv, 2021

Fog computing is introduced by shifting cloud resources towards the users’ proximity to mitigate ... more Fog computing is introduced by shifting cloud resources towards the users’ proximity to mitigate the limitations possessed by cloud computing. Fog environment made its limited resource available to a large number of users to deploy their serverless applications, composed of several serverless functions. One of the primary intentions behind introducing the fog environment is to fulfil the demand of latency and location-sensitive serverless applications through its limited resources. The recent research mainly focuses on assigning maximum resources to such applications from the fog node and not taking full advantage of the cloud environment. This introduces a negative impact in providing the resources to a maximum number of connected users. To address this issue, in this paper, we investigated the optimum percentage of a user’s request that should be fulfilled by fog and cloud. As a result, we proposed DeF-DReL, a Systematic Deployment of Serverless Functions in Fog and Cloud environm...

Research paper thumbnail of DataPipelineExecutionValidation-release-data

Research paper thumbnail of Serverless data pipeline approaches for IoT data in fog and cloud computing

Future Generation Computer Systems, 2021

With the increasing number of Internet of Things (IoT) devices, massive amounts of raw data are being generated. The latency, cost, and other challenges of cloud-based IoT data processing have driven the adoption of Edge and Fog computing models, where some data processing tasks are moved closer to the data sources. Properly dealing with the flow of such data requires building data pipelines to control the complete life cycle of data streams, from data acquisition at the data source, through edge and fog processing, to Cloud-side storage and analytics. Data analytics tasks need to be executed dynamically at different distances from the data sources and often on very heterogeneous hardware devices. This can be streamlined by the use of a Serverless (or FaaS) cloud computing model, where tasks are defined as virtual functions, which can be migrated from edge to cloud (and vice versa) and executed in an event-driven manner on data streams. In this work, we investigate the benefits of building Serverless data pipelines (SDP) for IoT data analytics and evaluate three different approaches for designing SDPs: 1) off-the-shelf data flow tool (DFT) based, 2) object storage service (OSS) based, and 3) MQTT based. Further, we applied these strategies to three fog applications (Aeneas, PocketSphinx, and a custom video processing application) and evaluated their performance by comparing processing time (computation time, network communication, and disk access time) and resource utilization. Results show that DFT is unsuitable for compute-intensive applications such as video or image processing, whereas OSS is best suited to this task. However, DFT fits bandwidth-intensive applications well due to its minimal use of network resources. The MQTT-based SDP, on the other hand, showed increased CPU and memory usage as the number of users rose and experienced a drop in data units in the pipeline for the PocketSphinx and custom video processing applications; however, it performed well for Aeneas, whose data units are small.
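The event-driven, staged execution at the heart of an SDP can be sketched with Python's standard library alone. The stage names, data units, and the use of in-process queues in place of an MQTT broker or object-storage notifications are illustrative assumptions, not the paper's implementation:

```python
import queue
import threading

def preprocess(unit: bytes) -> bytes:
    # Hypothetical edge-side stage: trim whitespace from a raw data unit.
    return unit.strip()

def analyze(unit: bytes) -> int:
    # Hypothetical cloud-side stage: a stand-in "analytics" step.
    return len(unit)

def stage(inbox: queue.Queue, outbox: queue.Queue, fn):
    """Run fn on each data unit, event-driven: wake only when data arrives."""
    while True:
        unit = inbox.get()
        if unit is None:          # sentinel: shut the stage down
            outbox.put(None)
            break
        outbox.put(fn(unit))

edge_in, cloud_in, results = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(edge_in, cloud_in, preprocess)),
    threading.Thread(target=stage, args=(cloud_in, results, analyze)),
]
for t in threads:
    t.start()

for unit in [b"  sensor-reading-1 ", b" sensor-reading-22  "]:
    edge_in.put(unit)            # in practice: one MQTT publish per unit
edge_in.put(None)
for t in threads:
    t.join()

out = []
while True:
    r = results.get()
    if r is None:
        break
    out.append(r)
print(out)  # lengths of the trimmed data units, in arrival order
```

In a real SDP each `stage` would be an independently deployed serverless function, so stages can be migrated between edge and cloud without touching the others.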

Research paper thumbnail of TOSCAdata: Modeling data pipeline applications in TOSCA

Journal of Systems and Software, 2021

The serverless platform allows a customer to use cloud resources effectively and pay for the exact amount of resources used. A number of dedicated open source and commercial cloud data management tools are available to handle the massive amount of data. However, such modern cloud data management tools are not mature enough to integrate generic cloud applications with the serverless platform, due to the lack of mature and stable standards. One of the most popular and mature standards, TOSCA (Topology and Orchestration Specification for Cloud Applications), mainly focuses on application and service portability and on the automated management of generic cloud application components. This paper proposes TOSCAdata, an extension of the TOSCA standard that focuses on modeling data pipeline-based cloud applications. Keeping in view the requirements of modern data pipeline cloud applications, TOSCAdata provides a number of TOSCA models that are independently deployable, schedulable, scalable, and re-usable, while effectively handling the flow and transformation of data in a pipeline manner. We also demonstrate the applicability of the proposed TOSCAdata models on a web-based cloud application for tourism promotion as a use case scenario.

Research paper thumbnail of An efficient service dispersal mechanism for fog and cloud computing using deep reinforcement learning

2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), 2020

Thousands of high-end physical servers are used to fulfill the huge resource demand of diverse applications and services, ranging from healthcare data analytics to gaming. Network latency, one of the major limitations of cloud computing, is the primary reason for introducing fog computing, which pushes the computing environment towards the edge of the network. The ability to offer computing environments in close proximity to the user's device improves the delivery of high-quality services. The majority of research is devoted to providing high quality of service using either the fog or the cloud environment alone. In this paper, a novel deep reinforcement learning-based service dispersal approach for fog and cloud computing (DRLSD-FC) is proposed to offer services using both environments simultaneously. A service request is sliced and dispersed between the nearby fog and cloud environments. By taking advantage of cloud resources, the proposed approach minimizes the workload on the fog environment without compromising service quality. The proposed approach is implemented using the Keras framework, and implementation results show that DRLSD-FC outperforms other related approaches.
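The slice-and-disperse idea can be illustrated with a deliberately simple greedy policy; this is a stand-in for the learned DRL policy, and the function and the numbers below are hypothetical:

```python
def disperse(demand: float, fog_free: float) -> tuple[float, float]:
    """Split a sliced service request between nearby fog and remote cloud.

    A greedy stand-in for the paper's learned (DRL) policy: fill the fog
    first to exploit its low latency, then spill the remainder to the cloud.
    """
    fog_share = min(demand, fog_free)
    return fog_share, demand - fog_share

print(disperse(10.0, 4.0))  # (4.0, 6.0): 4 units on fog, 6 offloaded to cloud
print(disperse(3.0, 4.0))   # (3.0, 0.0): the request fits entirely in the fog
```

A trained policy would additionally weigh latency targets, slice sizes, and cloud cost rather than just filling the fog first.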

Research paper thumbnail of Computer Vision in COVID-19: A Study

Impact of AI and Data Science in Response to Coronavirus Pandemic, 2021

Computer vision is a pioneering sub-field of artificial intelligence that enables computers to interpret and better understand the visual world. In crucial times like COVID-19, computer vision has been used to combat the challenges being faced. In the healthcare field, computer vision has been used to enhance the productivity of various departments and to help in the development of vaccines. Computer vision tasks include methods for acquiring, processing, analysing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information. Recently, all these tasks have been put to use in developing systematic operations to tackle the challenges of COVID-19. Computer vision is used in thermal scanners; along with ML, it is used in predicting virus spread; it is also used in AI systems in most labs to help in the development of vaccines and in analysing factors like whet...

Research paper thumbnail of A Combined Metrics Approach to Cloud Service Reliability using Artificial Intelligence

Identifying and anticipating potential failures in the cloud is an effective method for increasing cloud reliability and enabling proactive failure management. Many studies have been conducted to predict potential failures, but none have combined SMART (Self-Monitoring, Analysis, and Reporting Technology) hard drive metrics with other system metrics such as CPU utilisation. Therefore, we propose a combined-metrics approach for failure prediction based on Artificial Intelligence to improve reliability. We evaluated data from over 100 cloud servers and four AI algorithms: Random Forest, Gradient Boosting, Long Short-Term Memory, and Gated Recurrent Unit. Our experimental results show the benefits of combining metrics, outperforming the state of the art.
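The core of combining metric families is feature-vector concatenation before training. The sketch below, with made-up attribute names and values, shows the shape of one such combined row, which a model like Random Forest would then consume:

```python
def combine(smart: dict, system: dict) -> list:
    """Join SMART drive attributes and system metrics into one feature row."""
    features = []
    for key in sorted(smart):   # e.g. reallocated_sectors, seek_error_rate
        features.append(smart[key])
    for key in sorted(system):  # e.g. cpu_util, mem_util
        features.append(system[key])
    return features

# Hypothetical one-server sample; real values come from monitoring agents.
row = combine({"reallocated_sectors": 8, "seek_error_rate": 0.02},
              {"cpu_util": 0.93, "mem_util": 0.71})
print(row)  # [8, 0.02, 0.93, 0.71]
```

Sorting the keys keeps the column order stable across servers, which matters when many such rows are stacked into a training matrix.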

Research paper thumbnail of Deep Learning Frameworks in Healthcare Systems

Research paper thumbnail of RRFT: A Rank-Based Resource Aware Fault Tolerant Strategy for Cloud Platforms

IEEE Transactions on Cloud Computing, 2021

The applications that are deployed in the cloud to provide services to users encompass a large number of interconnected, dependent cloud components. Multiple identical components are scheduled to run concurrently in order to handle unexpected failures and provide uninterrupted service to the end user, which introduces a resource overhead problem for the cloud service provider. Furthermore, such resource-intensive fault tolerant strategies bring extra monetary overhead to the cloud service provider and eventually to the cloud users. To address these issues, a novel fault tolerant strategy based on the significance level of each component is developed. The communication topology among the application components, their historical performance, failure rate, failure impact on other components, and the dependencies among them are used to rank the application components and decide on the importance of one component over others. Based on the rank, a Markov Decision Process (MDP) model is presented to determine the number of replicas, which varies from one component to another. A rigorous performance evaluation is carried out against similar component ranking and fault tolerant strategies using practically useful metrics such as recovery time upon a fault, average number of components needed, and number of parallel components successfully executed. Simulation results demonstrate that the proposed algorithm reduces the required number of virtual and physical machines by approximately 10% and 4.2%, respectively, compared to other similar algorithms.
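A rank-then-replicate scheme of this kind can be sketched as follows; the weights, score formula, and component names are illustrative placeholders, not RRFT's actual MDP-based policy:

```python
def rank(fail_rate: float, impact: float, dependents: int) -> float:
    """Hypothetical significance score in [0, 1]; weights are illustrative."""
    return 0.5 * fail_rate + 0.3 * impact + 0.2 * min(dependents, 10) / 10

def replicas(score: float, max_replicas: int = 4) -> int:
    """Map a significance score to a replica count (the paper uses an MDP)."""
    return 1 + round(score * (max_replicas - 1))

# Hypothetical components: (failure rate, failure impact, dependent count).
components = {
    "auth":    rank(0.10, 0.9, 6),
    "billing": rank(0.05, 0.6, 2),
    "logger":  rank(0.02, 0.1, 1),
}
for name, score in sorted(components.items(), key=lambda kv: -kv[1]):
    print(name, replicas(score))
```

The point of varying the replica count per component is exactly the abstract's resource argument: low-significance components like the logger get a single instance instead of a blanket replication factor.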

Research paper thumbnail of Failure Aware Semi-Centralized Virtual Network Embedding in Cloud Computing Fat-Tree Data Center Networks

IEEE Transactions on Cloud Computing, 2020

In Cloud Computing, tenants opting for Infrastructure as a Service (IaaS) send their resource requirements to the Cloud Service Provider (CSP) in the form of a Virtual Network (VN) consisting of a set of interconnected Virtual Machines (VMs). Embedding the VN onto the existing physical network is known as the Virtual Network Embedding (VNE) problem. One of the major research challenges is to allocate physical resources such that a failure of the physical resources has minimal impact on users' services. Another major challenge, from the algorithm design point of view, is to handle the embedding of a growing number of incoming users' VNs. Considering both of these research issues, a novel Failure-aware Semi-Centralized VNE (FSC-VNE) algorithm is proposed for the Fat-Tree data center network, with the goal of reducing the impact of resource failures on existing users. The impact of failures of Physical Machines (PMs), physical links, and network devices is taken into account while allocating resources to users. The beauty of the proposed algorithm is that the VMs are assigned to different PMs in a semi-centralized manner: the embedding algorithm is executed by multiple physical servers so that the VMs of a VN are embedded concurrently, which reduces the embedding time. Extensive simulation results show that the proposed algorithm outperforms other VNE algorithms.

Research paper thumbnail of Efficient data and CPU-intensive job scheduling algorithms for healthcare cloud

Computers & Electrical Engineering, 2018

The cloud computing platform is used to improve the operational efficiency of business processes and to provide services to users. The fast growth of the healthcare industry is shifting its traditional business model to a cloud-enabled one, which can fulfill the resource demand of different applications in healthcare industries. Healthcare jobs can vary from a simple patient record retrieval to a complex biomedical image analysis. Shared, configurable computing, storage, and other resources are provided to users as a service over the Internet on a rental basis. Although the cloud provides a huge amount of resources to healthcare systems for carrying out complex, time-consuming, and data-intensive operations, scheduling diverse healthcare applications onto large numbers of physical servers is an evolving issue that needs to be addressed. In this paper, a scheduling framework is designed for the intelligent distribution of healthcare-related jobs based on their types, taking advantage of existing heterogeneous distributed data center management solutions. This loosely coupled architecture works atop the existing solutions, and its components can run in parallel on different nodes. The proposed framework not only handles a variety of jobs but can also recover from accidental crashes.
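A type-based dispatcher of this kind can be sketched in a few lines; the job types and backend names below are hypothetical stand-ins for the heterogeneous data center management solutions the framework sits atop:

```python
def dispatch(job: dict) -> str:
    """Route a healthcare job by its declared type (illustrative names)."""
    if job["type"] == "data":      # e.g. patient record retrieval
        return "hadoop-cluster"    # data-locality-aware backend
    if job["type"] == "cpu":       # e.g. biomedical image analysis
        return "hpc-cluster"       # compute-optimized backend
    return "general-pool"          # everything else

print(dispatch({"type": "data", "name": "record-lookup"}))  # hadoop-cluster
print(dispatch({"type": "cpu", "name": "mri-analysis"}))    # hpc-cluster
```

Because the dispatcher only routes and holds no state, several copies of it can run on different nodes, matching the loosely coupled, crash-recoverable design described above.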

Research paper thumbnail of LVRM: On the Design of Efficient Link Based Virtual Resource Management Algorithm for Cloud Platforms

IEEE Transactions on Parallel and Distributed Systems, 2018

Virtualization technology elevates the traditional computing concept to cloud computing by introducing Virtual Machines (VMs) over Physical Machines (PMs), which enables cloud service providers to share limited computing and network resources among multiple users. Virtual resource mapping can be defined as the process of embedding multiple VMs and their network resource demands onto multiple interconnected PMs. Resource mapping mechanisms need to be efficient enough to minimize the number of PMs without compromising the deadlines of the tasks assigned to the VMs, which is NP-hard. To deal with this problem, a Link-based Virtual Resource Management (LVRM) algorithm is designed to map VMs onto PMs based on the available and required resources of the PMs and VMs, respectively. The designed algorithm exploits the fact that the network bandwidth demanded among VMs should be given higher priority while allocating physical resources to interconnected virtual machines, as insufficient network bandwidth may delay task execution. The proposed algorithm is evaluated with a discrete event simulator and compared with similar virtual network embedding algorithms. Simulation results show that LVRM outperforms other network embedding algorithms.
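The bandwidth-first intuition can be sketched with a small greedy routine; this is an illustrative simplification (CPU only, co-location instead of real link mapping), not the published LVRM algorithm:

```python
def map_links_first(vms, links, pm_cpu):
    """Greedy sketch of link-priority VM-to-PM mapping.

    vms:    {vm: cpu_demand}; links: [(vm_a, vm_b, bandwidth)];
    pm_cpu: {pm: free_cpu}.  The highest-bandwidth links are mapped first
    so that both endpoints land on one PM where possible, keeping the
    heaviest traffic off the physical network.
    """
    placement = {}
    for a, b, _bw in sorted(links, key=lambda l: -l[2]):
        for vm in (a, b):
            if vm in placement:
                continue
            # Prefer the PM already hosting the peer VM, else the freest PM.
            peer = b if vm == a else a
            candidates = [placement[peer]] if peer in placement else []
            candidates += sorted(pm_cpu, key=lambda p: -pm_cpu[p])
            for pm in candidates:
                if pm_cpu[pm] >= vms[vm]:
                    placement[vm] = pm
                    pm_cpu[pm] -= vms[vm]
                    break
    return placement

place = map_links_first({"v1": 2, "v2": 2, "v3": 1},
                        [("v1", "v2", 100), ("v2", "v3", 10)],
                        {"pm1": 4, "pm2": 4})
print(place)  # v1 and v2 share a PM: their link carries the most bandwidth
```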

Research paper thumbnail of DYVINE: Fitness-Based Dynamic Virtual Network Embedding in Cloud Computing

IEEE Journal on Selected Areas in Communications, 2019

Virtual Network Embedding (VNE) is the process of embedding a set of interconnected virtual machines onto a set of interconnected physical servers in the cloud computing environment. The complexity of the VNE problem increases when a large number of virtual machines with a set of resource demands need to be embedded onto a network of thousands of physical servers. The key challenge of VNE is the efficient mapping of virtual networks, which may have dynamic resource demands. Existing solutions mainly emphasize the embedding of static virtual networks, resulting in poor resource utilization and a very low acceptance rate. To tackle this level of complexity, a fitness-based DYnamic VIrtual Network Embedding (DYVINE) algorithm is proposed with the goal of maximizing resource utilization by maximizing the acceptance rate. Local and global fitness values of the virtual machines and the virtual network, respectively, are used to utilize the maximum amount of physical resources. The proposed VNE algorithm allows the virtual network to be dynamic, meaning that its structure and resource demand can change during execution. Further, in order to reduce the embedding time in each time slot, a set of physical servers is selected to host the virtual network instead of considering thousands of physical servers, which would significantly increase the embedding time. The proposed embedding mechanism is evaluated through extensive simulation, is compared with similar existing embedding algorithms, and outperforms them.
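One plausible reading of the local/global fitness idea, under an assumed tight-fit scoring formula (not DYVINE's actual one), is:

```python
def local_fitness(free_cpu: float, demand_cpu: float) -> float:
    """Fitness of one physical server for one VM; illustrative formula."""
    if free_cpu < demand_cpu:
        return 0.0                # infeasible host
    return demand_cpu / free_cpu  # prefer tight fits: less fragmentation

def global_fitness(local_scores: list[float]) -> float:
    """Fitness of a candidate embedding for the whole virtual network."""
    if not local_scores or any(s == 0.0 for s in local_scores):
        return 0.0                # one infeasible VM rejects the mapping
    return sum(local_scores) / len(local_scores)

# Two VMs (demand 4 each) on hosts with 8 and 4 free CPU units.
print(global_fitness([local_fitness(8, 4), local_fitness(4, 4)]))  # 0.75
```

An embedder would compute this global score for each candidate subset of physical servers and pick the highest-scoring feasible one, which matches the abstract's strategy of pre-selecting a small host set per time slot.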

Research paper thumbnail of Design and implementation of a novel service management framework for IoT devices in cloud

Journal of Systems and Software, 2016

With the advent of new technologies, we are surrounded by several tiny but powerful mobile devices through which we can communicate with the outside world to store and retrieve data from the Cloud. These devices are considered smart objects, as they can sense the medium, collect data, interact with nearby smart objects, and transmit data to the cloud for processing and storage through the internet. The Internet of Things (IoT) creates an environment for smart homes, health care, and smart business decisions by transmitting data through the internet. Cloud computing, on the other hand, leverages the capability of IoT by providing computation and storage power to each smart object. Researchers and developers combine the cloud computing environment with that of IoT to reduce the transmission and processing cost in the cloud and to provide better services for processing and storing the real-time data generated by those IoT devices. In this paper, a novel framework is designed for the Cloud to manage real-time IoT data and scientific non-IoT data. In order to demonstrate the services in the Cloud, real experimental results of implementing Docker containers for virtualization are presented, providing Software as a Service (SaaS) in a hybrid cloud environment.
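Container-based SaaS provisioning of this kind typically boils down to launching one container per service instance. The sketch below only builds the `docker run` command rather than executing it; the image name and ports are hypothetical:

```python
def container_cmd(image: str, host_port: int, svc_port: int) -> list[str]:
    """Build (without running) a docker run command for one SaaS instance."""
    return ["docker", "run",
            "-d", "--rm",                      # detached; remove on exit
            "-p", f"{host_port}:{svc_port}",   # expose the service port
            image]

cmd = container_cmd("iot-analytics:latest", 8080, 80)
print(" ".join(cmd))
# To actually launch the instance: subprocess.run(cmd, check=True)
```

Spinning up such lightweight containers on demand, rather than full VMs, is what lets the framework serve many application handlers per physical host.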