Achieving Predictable and Low End-to-End Latency for a Network of Smart Services
Related papers
2018
This paper highlights cloud computing as one of the principal building blocks of a smart factory, providing vast data storage space and highly scalable computational capacity. The cloud computing system used in a smart factory should be time-predictable so that it can satisfy the hard real-time requirements of the various applications in manufacturing systems. Interleaving an intermediate computing layer, called fog, between the factory and the cloud data center is a promising solution for meeting the latency requirements of hard real-time applications. In this paper, a time-predictable cloud framework is proposed that is able to satisfy end-to-end latency requirements in a smart factory. To propose such an industrial cloud framework, we not only use existing real-time technologies such as Industrial Ethernet and the Real-time XEN hypervisor, but also discuss unaddressed challenges. Among the unaddressed challenges, the partitioning of a given workload between the fog and the cloud...
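The fog-versus-cloud partitioning problem this abstract leaves open can be illustrated with a toy greedy placement. Everything below (task names, latency estimates, round-trip times) is an illustrative assumption, not the paper's method:

```python
# A minimal sketch of deadline-driven fog/cloud partitioning: place each
# task in the cloud only when its estimated end-to-end latency still meets
# the deadline, otherwise keep it in the fog. All numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_ms_fog: float    # estimated execution time on fog nodes
    compute_ms_cloud: float  # estimated execution time in the cloud
    deadline_ms: float       # hard end-to-end latency requirement

def partition(tasks, fog_rtt_ms=2.0, cloud_rtt_ms=40.0):
    """Greedy split: prefer the cheaper cloud, fall back to fog."""
    placement = {}
    for t in tasks:
        cloud_latency = cloud_rtt_ms + t.compute_ms_cloud
        fog_latency = fog_rtt_ms + t.compute_ms_fog
        if cloud_latency <= t.deadline_ms:
            placement[t.name] = "cloud"
        elif fog_latency <= t.deadline_ms:
            placement[t.name] = "fog"
        else:
            placement[t.name] = "infeasible"  # deadline cannot be met
    return placement

tasks = [Task("vision", 15.0, 5.0, 30.0), Task("plc-loop", 1.0, 0.5, 10.0)]
print(partition(tasks))  # {'vision': 'fog', 'plc-loop': 'fog'}
```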
Managing latency in edge–cloud environment
Journal of Systems and Software, 2021
Modern Cyber-Physical Systems (CPS) include applications like smart traffic, smart agriculture, and smart power grids. Commonly, these systems are distributed and composed of end-user applications and microservices that typically run in the cloud. The connection with the physical world, which is inherent to CPS, brings the need to operate and respond in real time. As the cloud becomes part of the computation loop, the real-time requirements must also be reflected by the cloud. In this paper, we present an approach that provides soft real-time guarantees on the response time of services running in the cloud and edge-cloud (i.e., cloud geographically close to the end-user), where these services are developed in high-level programming languages. In particular, we elaborate a method for predicting an upper bound on the response time of a service when it shares a computer with other services. Importantly, since our approach focuses on minimizing the impact on the developers of such services, it requires no special programming model and does not limit the use of common libraries.
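As a rough illustration of the kind of bound such a method produces (this is not the paper's actual technique), one can inflate a high empirical quantile of response times profiled under co-located load. The quantile and safety margin below are assumed values:

```python
# A minimal sketch: estimate a soft upper bound on a service's response
# time from profiling samples taken while it shares the machine with
# other services. Quantile and margin are illustrative choices.
def response_time_bound(samples_ms, quantile=0.999, margin=1.1):
    """Empirical high quantile of observed latencies, inflated by a margin."""
    ordered = sorted(samples_ms)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx] * margin

# e.g. samples collected under co-located load
samples = [12.1, 13.4, 12.9, 55.0, 14.2, 13.1, 12.8, 60.3, 13.0, 12.7]
print(f"predicted bound: {response_time_bound(samples):.1f} ms")
```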
Improving the Latency Value by Virtualizing Distributed Data Center and Automation in Cloud
IOSR Journal of Computer Engineering, 2012
Organizations today are leveraging the benefits of cloud computing to increase flexibility and agility and to reduce cost. That flexibility, however, can also pose networking challenges: by moving applications offsite, companies need good network connectivity between a data center site and a cloud provider so that users do not experience performance degradation. Good connectivity comes in two forms: sufficient bandwidth and low latency. Distributed data centers improve service access latency and bandwidth. A virtualized cloud data center enables IT organizations to share compute resources across multiple applications and user groups in a much more dynamic way than is possible in traditional environments, where applications, middleware, and infrastructure are tightly coupled and resource allocations are highly static. The goal is to enable users to reduce the cost and complexity of application provisioning and operations in virtualized data centers. At the same time, automation liberates operational management in cloud environments from the burden of manual processes.
Ultra-Reliable Distributed Cloud Network Control With End-to-End Latency Constraints
IEEE/ACM Transactions on Networking
We are entering a rapidly unfolding future driven by the delivery of real-time computation services, such as industrial automation and augmented reality, collectively referred to as augmented information (AgI) services, over highly distributed cloud/edge computing networks. The interaction intensive nature of AgI services is accelerating the need for networking solutions that provide strict latency guarantees. In contrast to most existing studies that can only characterize average delay performance, we focus on the critical goal of delivering AgI services ahead of corresponding deadlines on a per-packet basis, while minimizing overall cloud network operational cost. To this end, we design a novel queuing system able to track data packets' lifetime and formalize the delay-constrained least-cost dynamic network control problem. To address this challenging problem, we first study the setting with average capacity (or resource budget) constraints, for which we characterize the delay-constrained stability region and design a near-optimal control policy leveraging Lyapunov optimization theory on an equivalent virtual network. Guided by the same principle, we tackle the peak capacity constrained scenario by developing the reliable cloud network control (RCNC) algorithm, which employs a two-way optimization method to make actual and virtual network flow solutions converge in an iterative manner. Extensive numerical results show the superior performance of the proposed control policy compared with the state-of-the-art cloud network control algorithm, and the value of guaranteeing strict end-to-end deadlines for the delivery of next-generation AgI services.
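A toy sketch of the per-packet lifetime tracking idea (illustrative only, not the RCNC algorithm itself): the queue records an absolute deadline for each packet and refuses to forward packets whose deadline has already expired.

```python
# A minimal deadline-aware queue: expired packets are dropped rather
# than delivered, since late AgI data has no value. Illustrative sketch.
import collections, time

class DeadlineQueue:
    def __init__(self):
        self._q = collections.deque()  # holds (payload, absolute deadline)

    def push(self, payload, lifetime_s):
        self._q.append((payload, time.monotonic() + lifetime_s))

    def pop(self):
        """Return the next packet that can still meet its deadline, else None."""
        now = time.monotonic()
        while self._q:
            payload, deadline = self._q.popleft()
            if deadline > now:
                return payload
            # expired: dropping is cheaper than delivering useless data
        return None

q = DeadlineQueue()
q.push(b"sensor-frame", lifetime_s=0.050)  # 50 ms per-packet deadline
print(q.pop())  # b'sensor-frame' if popped within its lifetime
```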
Delay Mitigation in Offloaded Cloud Controllers in Industrial IoT
IEEE Access, 2017
This paper investigates the interplay of cloud computing, fog computing, and the Internet of Things (IoT) in control applications targeting the automation industry. In this context, a prototype is developed to explore the use of IoT devices that communicate with a cloud-based controller, i.e., the controller is offloaded to the cloud or fog. Several experiments are performed to investigate the consequences of having a cloud server between the end device and the controller. The experiments are performed while considering arbitrary jitter and delays, i.e., they can be smaller than, equal to, or greater than the sampling period. This paper also applies mitigation mechanisms to deal with the delays and jitter caused by the networks when the controller is offloaded to the fog or cloud.
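One common mitigation in this setting, sketched here under assumed values (this is not the paper's prototype), is to hold the last valid command when the cloud's reply misses the sampling deadline instead of blocking the control loop:

```python
# A minimal sketch of delay/jitter mitigation at the actuator side:
# if no fresh command arrives within one sampling period, reuse the
# previous one. Period, steps, and names are illustrative assumptions.
import queue, time

SAMPLE_PERIOD_S = 0.010  # 10 ms loop, illustrative
cmd_queue: "queue.Queue[float]" = queue.Queue()  # filled by a network thread

def actuation_loop(apply_cmd, steps):
    last_cmd = 0.0
    for _ in range(steps):
        deadline = time.monotonic() + SAMPLE_PERIOD_S
        try:
            # wait at most one period for a fresh command from the cloud
            last_cmd = cmd_queue.get(timeout=SAMPLE_PERIOD_S)
        except queue.Empty:
            pass  # delayed or lost command: hold the previous value
        apply_cmd(last_cmd)
        time.sleep(max(0.0, deadline - time.monotonic()))

actuation_loop(lambda u: print(f"u = {u:.2f}"), steps=3)
```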
Minimization of Latency Using Multitask Scheduling in Industrial Autonomous Systems
Wireless Communications and Mobile Computing
Using enhanced ant colony optimization, this study proposes an efficient heuristic scheduling technique for cloud infrastructure that addresses the nonlinear loads, slow processing, and incomplete knowledge of shared resources that plagued earlier resource-provisioning implementations. The cloud-based planning architecture has been tailored for dynamic scheduling. To determine the best task-allocation method, a satisfaction factor was developed by integrating three objectives: the shortest waiting time, the degree of resource congestion, and the cost of task completion. Finally, a reward-and-penalty component is used to adjust the pheromone-generation criteria of the ant colony algorithm, which accelerates convergence. In particular, a perturbation component is leveraged to enhance the capabilities of the method, and a virtual machine load-weight component is included in the operation o...
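A minimal, self-contained sketch of the general approach, with a combined score over waiting time, congestion, and cost and pheromone reinforcement of good assignments (weights, sizes, and the scoring proxies are assumptions, not the authors' exact algorithm):

```python
# Illustrative ant-colony task-to-VM assignment with a combined objective.
import random

TASKS, VMS, ANTS, ROUNDS = 6, 3, 8, 30
exec_ms = [[random.uniform(5, 20) for _ in range(VMS)] for _ in range(TASKS)]
cost = [1.0, 2.0, 4.0]                      # per-VM price weight, assumed
tau = [[1.0] * VMS for _ in range(TASKS)]   # pheromone trails

def score(assign):
    load = [0.0] * VMS
    for t, v in enumerate(assign):
        load[v] += exec_ms[t][v]
    waiting = sum(load[v] for v in assign)   # rough proxy for waiting time
    congestion = max(load)                   # most loaded VM
    price = sum(cost[v] for v in assign)
    return waiting + congestion + 5.0 * price  # lower is better

best, best_s = None, float("inf")
for _ in range(ROUNDS):
    for _ in range(ANTS):
        assign = [random.choices(range(VMS), weights=tau[t])[0]
                  for t in range(TASKS)]
        s = score(assign)
        if s < best_s:
            best, best_s = assign, s
    for t in range(TASKS):            # evaporate, then reward the best tour
        tau[t] = [0.9 * p for p in tau[t]]
        tau[t][best[t]] += 1.0
print("best assignment:", best, "score:", round(best_s, 1))
```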
EDGE COMPUTING: REVOLUTIONIZING INDUSTRIAL AUTOMATION FOR ENHANCED EFFICIENCY AND RELIABILITY
IAEME PUBLICATION, 2024
Industrial automation is experiencing a significant transformation with the integration of edge computing technologies, which bring computational power closer to data sources, enabling real-time processing and intelligent decision-making at the network edge. This paradigm shift optimizes data analytics and enhances system responsiveness in industrial settings, addressing critical issues such as latency, bandwidth limitations, and reliability concerns associated with traditional cloud-based systems. The paper explores edge computing's applications in industrial automation, including real-time monitoring and control, predictive maintenance, and quality assurance, highlighting benefits like improved operational efficiency, reduced downtime, and enhanced product quality. While implementation challenges such as security concerns, interoperability problems, and data governance issues exist, the potential of edge computing to reshape industrial automation is immense. As technology advances and industry standards evolve, edge computing is poised to unlock new levels of efficiency, reliability, and scalability in industrial processes. The integration of edge computing with emerging technologies like 5G and artificial intelligence promises to further revolutionize the industrial landscape, despite the hurdles that need to be overcome.
IEEE Access
With the increasing adoption of the edge computing paradigm, including multi-access edge computing (MEC) in telecommunication scenarios, many works have explored its benefits. Since MEC generally reduces latency and energy consumption compared to cloud computing, it has been applied to deploy artificial intelligence services. Such services can have distinct requirements, involving different computational resource capabilities as well as different data formats or communication protocols for collecting data. In this sense, we propose the VEF Edge Framework, which aims to help the development and deployment of artificial intelligence services for MEC scenarios, considering requirements such as low latency and CPU/memory consumption. We explain the VEF architecture and present experimental results obtained with a base case's implementation: an object detection inference service deployed with VEF. The experiments measured CPU and memory usage for VEF's main components and the processing time for two procedures (inference and video stream handling).

Currently, in industrial scenarios, we have distributed and heterogeneous applications involving, for instance, different Industrial Internet of Things (IIoT) devices with their known constraints (e.g., processing and storage capabilities). These scenarios include integrating those devices with other services and equipment [1], such as integrating video cameras with computer vision services or using model-predictive control to remotely operate an automated guided vehicle [2]. In general, to meet these applications' requirements, we employ external servers to provide the necessary resources. Besides cloud computing, which provides the required resources, we can deploy the...

...the Video Stream service benefits from a more robust infrastructure (it went from 100 ms of processing time in the small-01-source setting to 19 ms in the large-01-source setting). However, the video stream processing time increases significantly as the number of sources grows. Hence, the infrastructure specifications for the VEF VM must be defined carefully, based on the final application's requirements, to deliver a smooth user experience. The inference processing time roughly stabilizes as more sources are added, given that processing power is still available. Nonetheless, this is likely caused by the Frame Age Threshold, which makes the Inference service skip more and more frames as the number of sources rises, keeping processing time constant. Therefore, we cannot say the current implementation of the Inference service is efficient, since it does not process an increasing number of frames as the number of sources rises.
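The Frame Age Threshold behavior described in the excerpt can be sketched as follows (illustrative; the threshold value and function names are assumptions, not VEF's actual code):

```python
# Frames older than the threshold are skipped rather than sent to
# inference, which keeps inference latency bounded as sources grow.
import time

FRAME_AGE_THRESHOLD_S = 0.200  # assumed value, 200 ms

def process(frames, infer):
    """frames: iterable of (capture_timestamp, image)."""
    for captured_at, image in frames:
        age = time.monotonic() - captured_at
        if age > FRAME_AGE_THRESHOLD_S:
            continue  # stale frame: skip to keep inference latency bounded
        infer(image)

frames = [(time.monotonic() - 0.5, "old"), (time.monotonic(), "fresh")]
process(frames, infer=lambda img: print("inferring on", img))  # only 'fresh'
```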
CLOUD-BASED CONTROL OF INDUSTRIAL CYBER-PHYSICAL SYSTEMS
2019
This paper presents an implementation of a control algorithm on a cloud system. The motivation is that cloud implementations of low-level systems in the production industry are gradually becoming more common. The Microsoft Azure platform is utilized for the cloud-based control, and the approach is tested using a customized laboratory model that can represent an agent in a typical production system. The model implements the regulation of a ball on an inclined surface and uses two asynchronous motors connected to frequency converters to control the position of the ball. These frequency converters are controlled by a Programmable Logic Controller (PLC). Windows Communication Foundation (WCF) services and Azure IoT Hub were selected for use with the cloud-based control system. Experimental results have shown that our solution can control the system with a sampling period equal to or greater than 100 ms. The latency of the WCF service is around 100 ms and the latency of Azure IoT Hub is around 1000 ms, so...
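A back-of-the-envelope feasibility check using the two reported latencies (the compute-time term and its default value are assumptions added for illustration):

```python
# The control loop is feasible only if the measured cloud round trip
# (plus any controller compute time) fits inside the sampling period.
def feasible(sample_period_ms, rtt_ms, compute_ms=0.0):
    return rtt_ms + compute_ms <= sample_period_ms

for channel, rtt in [("WCF service", 100.0), ("Azure IoT Hub", 1000.0)]:
    ok = feasible(sample_period_ms=100.0, rtt_ms=rtt)
    print(f"{channel}: {'feasible' if ok else 'too slow'} at a 100 ms period")
```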
Adapting Distributed Real-Time and Embedded Pub/Sub Middleware for Cloud Computing Environments
2010
Enterprise distributed real-time and embedded (DRE) publish/subscribe (pub/sub) systems manage resources and data that are vital to users. Cloud computing—where computing resources are provisioned elastically and leased as a service—is an increasingly popular deployment paradigm. Enterprise DRE pub/sub systems can leverage cloud computing provisioning services to execute needed functionality when on-site computing resources are not available. Although cloud computing provides flexible on-demand computing and networking resources, enterprise DRE pub/sub systems often cannot accurately characterize their behavior a priori for the variety of resource configurations cloud computing supplies (e.g., CPU and network bandwidth), which makes it hard for DRE systems to leverage conventional cloud computing platforms. This paper provides two contributions to the study of how autonomic configuration of DRE pub/sub middleware can provision and use on-demand cloud resources effectively. We first describe how supervised machine learning can configure DRE pub/sub middleware services and transport protocols autonomically to support end-to-end quality-of-service (QoS) requirements based on cloud computing resources. We then present results that empirically validate how computing and networking resources affect enterprise DRE pub/sub system QoS. These results show how supervised machine learning can configure DRE pub/sub middleware adaptively in < 10 μsec with bounded time complexity to support key QoS reliability and latency requirements.
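A toy version of the supervised-configuration idea (assuming scikit-learn; the features, labels, and model here are illustrative, not the paper's): a small classifier maps the resources a cloud lease provides to a transport configuration expected to meet QoS, and querying it costs only a handful of comparisons.

```python
# Illustrative sketch: learn (resources -> transport config) offline,
# then query the model at deployment time. Labels are made-up examples.
from sklearn.tree import DecisionTreeClassifier

# training data: (cpu_share, network_mbps) -> transport protocol label
X = [[0.25, 10], [0.5, 100], [1.0, 100], [2.0, 1000], [0.25, 1000]]
y = ["tcp-nagle-off", "tcp-nagle-off", "udp-nak", "udp-nak", "tcp-nagle-off"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
# prediction is a few comparisons down the tree: effectively constant time
print(clf.predict([[1.5, 500]])[0])
```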