Balázs Sonkoly - Academia.edu

Papers by Balázs Sonkoly

Improving resiliency and throughput of transport networks with OpenFlow and Multipath TCP

Towards an Edge Cloud Based Coordination Platform for Multi-User AR Applications Built on Open-Source SLAMs

NOMS 2023 - 2023 IEEE/IFIP Network Operations and Management Symposium

Pricing games of NFV infrastructure providers

Telecommunications Systems, 2021

Future online services will be provisioned over the federation of infrastructure providers for economic reasons: for effective usage of resources and for a wide geographic reach of customers. The technical enablers of this setup, envisioned also for 5G services, are the virtualization techniques applied in data centers, and in access and core networks. Business aspects around the provisioning of these services, however, still pose unresolved questions. In this heterogeneous setup, those who want to deploy an online service face the problem of selecting the compute and network resource set that fulfills the technical requirements of the service deployment and that is also preferable from an economic point of view. Infrastructure providers compete among themselves for these customers, and shape their business offerings with profit maximization in mind. We model this resource market with the tool set of graph and game theory in order to study its characteristics. We show that customers ne...

Tuple space explosion

Proceedings of the 15th International Conference on Emerging Networking Experiments And Technologies, 2019

Packet classification is one of the fundamental building blocks of various security primitives and thus it needs to be highly efficient and available. In this paper, we evaluate whether the de facto packet classification algorithm (i.e., the Tuple Space Search scheme, TSS) used in many popular software networking stacks, e.g., Open vSwitch, VPP, HyperSwitch, is robust against low-rate denial-of-service (DoS) attacks. We present the Tuple Space Explosion (TSE) attack that exploits the fundamental space/time complexity of the TSS algorithm. We demonstrate that the TSE attack can degrade the switch performance to as low as 12% of its full capacity with a very low packet rate (i.e., 0.7 Mbps) when the target packet classifier only has simple policies, e.g., "allow a few flows but drop all others". Then, we show that if the adversary has partial knowledge of the installed classification policies, she can virtually bring down the packet classifier with the same low attack rate. The TSE attack, in general, does not generate any specific attack traffic patterns but some attack packets with randomly chosen IP headers and arbitrary message contents. This makes it particularly hard to build a signature of our attack traffic for detection. Since the TSE attack exploits the fundamental complexity characteristics of the TSS algorithm, unfortunately, there seems to be no complete mitigation of the problem. We thus suggest, as a long-term solution, the use of other packet classification algorithms (e.g., hierarchical tries, HaRP, HyperCuts) that are not vulnerable to the TSE attack. As a short-term solution, we propose MFCGuard, a monitoring system that carefully manages the entries in the tuple space to keep packet classification fast for the packets that are eventually accepted by the system.
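
To make the complexity argument concrete, the sketch below implements a toy Tuple Space Search classifier in Python: rules are grouped by their mask tuple and each lookup performs one hash probe per tuple, so the cost of a non-matching packet grows with the number of distinct tuples present. This is an illustration of the mechanism only, not the Open vSwitch or VPP implementation evaluated in the paper.

```python
# Toy Tuple Space Search (TSS) classifier, for illustration only.
# Rules sharing the same (src_len, dst_len) mask tuple live in one hash
# table; classification probes every tuple until a match is found.
from collections import defaultdict

class TupleSpaceClassifier:
    def __init__(self):
        self.tables = defaultdict(dict)   # (src_len, dst_len) -> {masked key: action}

    def add_rule(self, src, src_len, dst, dst_len, action):
        key = (src >> (32 - src_len), dst >> (32 - dst_len))
        self.tables[(src_len, dst_len)][key] = action

    def classify(self, src, dst):
        probes = 0
        for (sl, dl), table in self.tables.items():
            probes += 1
            key = (src >> (32 - sl), dst >> (32 - dl))
            if key in table:
                return table[key], probes
        return "drop", probes             # default action after probing everything

clf = TupleSpaceClassifier()
for prefix_len in range(8, 33):           # 25 rules, 25 distinct tuples
    clf.add_rule(0x0A000000, prefix_len, 0, 0, "allow")
print(clf.classify(0xC0A80001, 0))        # -> ('drop', 25): every tuple probed
```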

Machine Learning-Based Scaling Management for Kubernetes Edge Clusters

IEEE Transactions on Network and Service Management, 2021

Kubernetes, the container orchestrator for cloud-deployed applications, offers automatic scaling for the application provider in order to meet the ever-changing intensity of processing demand. This auto-scaling feature can be customized with a parameter set, but those management parameters are static while incoming Web request dynamics often change, not to mention the fact that scaling decisions are inherently reactive rather than proactive. We set the ultimate goal of making cloud-based applications’ management easier and more effective. We propose a Kubernetes scaling engine that makes the auto-scaling decisions apt for handling the actual variability of incoming requests. In this engine, various machine learning forecast methods compete with each other via a short-term evaluation loop in order to always give the lead to the method that best suits the actual request dynamics. We also introduce a compact management parameter for the cloud-tenant application provider to easily set their sweet spot in the resource over-provisioning vs. SLA violation trade-off. We motivate our scaling solution with analytical modeling and evaluation of the current Kubernetes behavior. The multi-forecast scaling engine and the proposed management parameter are evaluated both in simulations and with measurements on our collected Web traces to show the improved quality of fitting provisioned resources to service demand. We find that with just a few, but fundamentally different, competing forecast methods, our auto-scaler engine, implemented in Kubernetes, results in significantly fewer lost requests with just slightly more provisioned resources compared to the default baseline.
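
A minimal sketch of the competing-forecasters idea follows: several simple predictors estimate the next interval's request rate, and whichever had the smallest recent error drives the replica count. The forecaster choices, error window, per-pod capacity and safety factor are illustrative assumptions, not the engine described in the paper.

```python
# Competing-forecasters sketch: the method with the lowest recent error
# "wins" the next scaling decision. All parameters are illustrative.
import math
from collections import defaultdict

def naive(history):                 # last observed request rate
    return history[-1]

def moving_average(history, w=5):   # short moving average
    return sum(history[-w:]) / min(w, len(history))

FORECASTERS = {"naive": naive, "avg": moving_average}
errors = defaultdict(list)          # method name -> absolute forecast errors

def scale(history, capacity_per_pod=100, safety=1.2, window=10):
    # score each method on how well it predicted the latest observation
    for name, f in FORECASTERS.items():
        errors[name].append(abs(f(history[:-1]) - history[-1]))
    best = min(FORECASTERS, key=lambda n: sum(errors[n][-window:]))
    predicted = FORECASTERS[best](history)
    replicas = max(1, math.ceil(predicted * safety / capacity_per_pod))
    return best, replicas

rates = [120, 150, 180, 240, 260, 400]   # requests/s per interval
print(scale(rates))                      # -> ('naive', 5)
```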

How to orchestrate a distributed OpenStack

IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2018

We see two important trends in ICT nowadays: the backends of online applications and services are moving to the cloud, and for delay-sensitive ones the cloud is being extended with fogs. The reason for these phenomena is most importantly economic, but there are other benefits too: fast service creation, flexible reconfigurability, and portability. The management and orchestration of these services are currently separated into at least two layers: virtual infrastructure managers (VIMs) and network controllers operate their own domains, consisting of compute or network resources, while services with cross-domain deployment are handled by an upper-level orchestrator. In this paper we present a slight modification of OpenStack, the mainstream VIM today, which enables it to manage a distributed cloud-fog infrastructure. While our solution alleviates the need for running OpenStack controllers in the lightweight edge, it takes into account network aspects that are extremely important in a resource setup with remote fogs. We propose and analyze an online resource orchestration algorithm, describe the OpenStack-based implementation aspects, and show large-scale simulation results on the performance of our algorithm.
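
As a rough illustration of latency-aware placement over a combined cloud+fog pool, the sketch below greedily picks the lowest-latency site that still has enough capacity. It is a simplification for illustration only; the online algorithm and its OpenStack integration described in the paper are more involved.

```python
# Greedy latency-aware placement sketch for a cloud+fog resource pool:
# choose the lowest-latency site that can still host the request.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_vcpus: int
    latency_ms: float   # latency from the requesting user to this site

def place(request_vcpus, max_latency_ms, sites):
    candidates = [s for s in sites
                  if s.free_vcpus >= request_vcpus
                  and s.latency_ms <= max_latency_ms]
    if not candidates:
        return None                       # reject or queue the request
    best = min(candidates, key=lambda s: s.latency_ms)
    best.free_vcpus -= request_vcpus
    return best.name

sites = [Site("central-cloud", 512, 35.0), Site("fog-1", 16, 4.0)]
print(place(4, 10.0, sites))              # -> "fog-1"
```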

On Pending Interest Table in Named Data Networking based Edge Computing: The Case of Mobile Augmented Reality

2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN)

Future networks require fast information response time, scalable content distribution, security and mobility. In order to enable the future Internet, many key enabling technologies have been proposed, such as Edge Computing (EC) and Named Data Networking (NDN). In EC, substantial compute and storage resources are placed at the edge of the network, in close proximity to end users. Similarly, NDN provides an alternative to the traditional host-centric IP architecture and seems a perfect candidate for distributed computation. Although NDN with EC is a promising approach for enabling the future Internet, it raises various challenges, such as the expiry time of the Pending Interest Table (PIT) and the non-trivial computation at the edge node. In this paper we discuss the expiry time and non-trivial computation in NDN-based EC. We argue that if NDN is integrated into EC, then the PIT expiry time will be affected by the processing time on the edge node. Our analysis shows that integrating NDN into EC without considering the PIT expiry time may result in degraded network performance in terms of Interest Satisfaction Rate.
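
The interplay between PIT lifetime and edge processing time can be illustrated in a few lines of Python: if the computation (plus the return trip) takes longer than the Interest lifetime, the pending entry is gone by the time the result is ready and the Interest goes unsatisfied. The numbers below are made up purely for illustration.

```python
# Toy illustration of the PIT-expiry issue in NDN-based edge computing.
def interest_satisfied(pit_lifetime_s, processing_s, network_rtt_s=0.02):
    # the Data packet must come back before the pending entry times out
    return processing_s + network_rtt_s <= pit_lifetime_s

requests = [(4.0, 0.5), (4.0, 3.9), (4.0, 6.0)]  # (Interest lifetime, edge compute time)
satisfied = sum(interest_satisfied(l, p) for l, p in requests)
print(f"Interest Satisfaction Rate: {satisfied / len(requests):.2f}")   # -> 0.67
```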

Optimizing Latency Sensitive Applications for Amazon's Public Cloud Platform

2019 IEEE Global Communications Conference (GLOBECOM)

Recent cloud technologies enable a diverse set of novel applications with capabilities never seen before. Cloud-native programming, microservices, and serverless architectures are novel paradigms reducing the burden on both software developers and operators while enabling cloud-grade service deployments. Several types of applications fit in well with the new concepts; however, latency-sensitive applications with strict delay constraints pose additional challenges for the platforms. Can we run these applications on today's public cloud platforms making use of the brand-new tools and techniques? In this paper, we try to answer this question by addressing one of the most widely used and versatile public cloud platforms, namely Amazon's AWS, and we propose a novel mechanism to optimize the software "layout" based on dynamic performance measurements. Our contribution is threefold. First, we define a combined performance and cost model for CaaS/FaaS (Container/Function as a Service) platforms, specifically for AWS, based on a comprehensive performance analysis, and we also provide an application model capturing the performance requirements. Second, we formulate an optimization problem which minimizes the deployment costs on AWS while meeting the latency constraints. A polynomial algorithm finding the optimal solution is also given. Third, we evaluate the model and the algorithm for different scenarios and investigate the performance on today's system.
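
The flavour of the optimization can be conveyed with a toy brute-force version: for a small chain of components, pick a CaaS or FaaS option per component so that end-to-end latency stays within the deadline at minimal cost. The option names, prices and latencies are made-up placeholders rather than measured AWS figures, and the paper's polynomial-time algorithm replaces this exhaustive search.

```python
# Toy "layout" optimization: per-component CaaS/FaaS choice under a
# chain-latency deadline. Exhaustive search for illustration only.
from itertools import product

# per-component options: (monthly cost in $, added latency in ms) -- illustrative
OPTIONS = {
    "caas_small": (30.0, 8.0),
    "caas_large": (120.0, 3.0),
    "faas":       (15.0, 25.0),   # cheap, but higher invocation latency
}

def optimal_layout(n_components, deadline_ms):
    best = None
    for layout in product(OPTIONS, repeat=n_components):
        cost = sum(OPTIONS[o][0] for o in layout)
        latency = sum(OPTIONS[o][1] for o in layout)
        if latency <= deadline_ms and (best is None or cost < best[0]):
            best = (cost, latency, layout)
    return best

print(optimal_layout(3, deadline_ms=40.0))
# -> (90.0, 24.0, ('caas_small', 'caas_small', 'caas_small'))
```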

Fast Edge-to-Edge Serverless Migration in 5G Programmable Packet-Optical Networks

Optical Fiber Communication Conference (OFC) 2021, 2021

Ultra-low latency serverless applications are dynamically deployed and migrated between edge computing nodes in less than 10 ms, leveraging comprehensive telemetry data retrieved from programmable packet-optical 5G x-haul.

Private VNFs for collaborative multi-operator service delivery: An architectural case

NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium, 2016

Flexible service delivery is a key requirement for 5G network architectures. This includes the support for collaborative service delivery by multiple operators, when an individual operator lacks the geographical footprint or the available network, compute or storage resources to provide the requested service to its customer. Network Function Virtualisation is a key enabler of such service delivery, as virtualised network functions (VNFs) can be outsourced to other operators. Owing to the (partial) lack of contractual relationships and the co-opetition in the ecosystem, the privacy of user data, operator policy and even VNF code could be compromised. In this paper, we present a case for privacy in a VNF-enabled collaborative service delivery architecture. Specifically, we show the promise of homomorphic encryption (HE) in this context and its performance limitations through a proof-of-concept implementation of an image transcoder network function. Furthermore, inspired by application-specific encryption techniques, we propose a way forward for private, payload-intensive VNFs.
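
A minimal additively homomorphic sketch in the spirit of the proof of concept: an untrusted VNF adjusts pixel values without ever seeing them. It assumes the python-paillier (`phe`) package and a simple brightness/scale operation; the paper's transcoder and its HE scheme may differ.

```python
# Partially homomorphic (Paillier) sketch: the VNF operates only on
# ciphertexts, the customer decrypts the result. Illustrative only.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

pixels = [12, 130, 245]                               # plaintext pixel values
encrypted = [public_key.encrypt(p) for p in pixels]

# VNF side: scale by 2 and brighten by +20, without access to the plaintext
processed = [c * 2 + 20 for c in encrypted]

# Customer side: decrypt the transformed pixels
print([private_key.decrypt(c) for c in processed])    # -> [44, 280, 510]
```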

Towards Human-Robot Collaboration: An Industry 4.0 VR Platform with Clouds Under the Hood

2019 IEEE 27th International Conference on Network Protocols (ICNP)

Safe and efficient Human-Robot Collaboration (HRC) is an essential feature of future Industry 4.0 production systems which requires sophisticated collision avoidance mechanisms with intense computational needs. Digital twins provide a novel way to test the impact of different control decisions in a simulated virtual environment, even in parallel. In addition, Virtual/Augmented Reality (VR/AR) applications can revolutionize future industry environments. Each component requires extreme computational power which can be provided by cloud platforms, but at the cost of higher delay and jitter. Moreover, clouds bring a versatile set of novel techniques easing the life of both developers and operators. Can these applications be realized and operated on today's systems? In this demonstration, we give answers to this question via real experiments.

Survey on Placement Methods in the Edge and Beyond

IEEE Communications Surveys & Tutorials, 2021

Edge computing is a (r)evolutionary extension of traditional cloud computing. It expands central cloud infrastructure with execution environments close to the users in terms of latency in order to enable a new generation of cloud applications. This paradigm shift has opened the door for telecommunications operators, mobile and fixed network vendors: they have joined the cloud ecosystem as essential stakeholders considerably influencing the future success of the technology. A key problem in edge computing is the optimal placement of computational units (virtual machines, containers, tasks or functions) of novel distributed applications. These components are deployed to a geographically distributed virtualized infrastructure and heterogeneous networking technologies are invoked to connect them while respecting quality requirements. The optimal hosting environment should be selected based on multiple criteria by novel scheduler algorithms which can cope with the new challenges of distributed cloud architecture where networking aspects cannot be ignored. The research community has dedicated significant efforts to this topic during recent years and a vast number of theoretical results have been published addressing different variants of the related mathematical problems. However, a comprehensive survey focusing on the technical and analytical aspects of the placement problem in various edge architectures is still missing. This survey provides a comprehensive summary and a structured taxonomy of the vast research on placement of computational entities in emerging edge infrastructures. Following the given taxonomy, the research papers are analyzed and categorized according to several dimensions, such as the capabilities of the underlying platforms, the structure of the supported services, the problem formulation, the applied mathematical methods, the objectives and constraints incorporated in the optimization problems, and the complexity of the proposed methods. We summarize the gained insights and important lessons learned, and finally, we reveal some important research gaps in the current literature.

Realizing services and slices across multiple operator domains

NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium, 2018

Supporting end-to-end network slices and services across operators has become an important use case for 5G networks, as can be seen in the 5G use cases published by 3GPP, ETSI and NGMN. This paper presents the in-depth architecture, implementation and experiments of a multi-domain orchestration framework that is able to deploy such multi-operator services as well as monitor them for SLA compliance. Our implemented architecture allows operators to abstract their sensitive details while exposing the relevant amount of information to support inter-operator slice creation. Our experiments show that the implemented framework is capable of creating services across operators while fulfilling the requirements of the network functions that form the service.

On Pricing of 5G Services

GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017

IT and telco providers are preparing for the era of 5G; in terms of technology, the driving force is virtualization, both for computing and networking. 5G services will be superior to today's online services not only in technological aspects, but also from an economic and business perspective: fast service creation, effective utilization of resources, and dynamic adaptation to actual demand are all direct benefits of the virtualized infrastructure. In this paper we study the economic interactions between 5G resource providers and customers: we formalize how resources should be priced and selected for booking. In particular, we show that usage-based pricing is an income-maximizing scheme for providers, and we derive the problem the customers need to solve for cost-optimizing service deployment.
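
To fix ideas, a generic form of the customer's cost-optimizing deployment decision can be written as a resource-selection program; this notation is an illustration only, not the paper's exact formulation. Here P is the set of providers, p_i the usage-based unit price of provider i, x_i the booked amount, and D the total resource demand of the service.

```latex
\min_{x \ge 0} \; \sum_{i \in \mathcal{P}} p_i \, x_i
\qquad \text{s.t.} \qquad \sum_{i \in \mathcal{P}} x_i \ge D
```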

Cost and Latency Optimized Edge Computing Platform

Electronics, 2022

Latency-critical applications, e.g., automated and assisted driving services, can now be deployed in fog or edge computing environments, offloading energy-consuming tasks from end devices. Besides the proximity, though, the edge computing platform must provide the necessary operation techniques in order to avoid added delays by all means. In this paper, we propose an integrated edge platform that comprises orchestration methods with such objectives, in terms of handling the deployment of both functions and data. We show how the integration of the function orchestration solution with the adaptive data placement of a distributed key–value store can lead to decreased end-to-end latency even when the mobility of end devices creates a dynamic set of requirements. Along with the necessary monitoring features, the proposed edge platform is capable of serving the nomad users of novel applications with low latency requirements. We showcase this capability in several scenarios, in which we ar...
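
A toy illustration of the adaptive data-placement idea: each key's data is moved to the edge site that served most of its recent accesses, so a moving user keeps finding the data nearby. The function names and the migration threshold are illustrative assumptions, not the platform's actual key-value store logic.

```python
# Access-driven data placement sketch for a distributed key-value store.
from collections import Counter, defaultdict

access_log = defaultdict(Counter)   # key -> Counter of serving edge sites
placement = {}                      # key -> current home site

def record_access(key, site):
    access_log[key][site] += 1

def rebalance(min_hits=2):
    for key, counts in access_log.items():
        site, hits = counts.most_common(1)[0]
        if hits >= min_hits and placement.get(key) != site:
            placement[key] = site   # would trigger data migration to 'site'

record_access("map-tile-42", "fog-budapest")
record_access("map-tile-42", "fog-budapest")
rebalance()
print(placement)                    # -> {'map-tile-42': 'fog-budapest'}
```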

Orchestration of Network Services across multiple operators: The 5G Exchange prototype

2017 European Conference on Networks and Communications (EuCNC), 2017

Future 5G networks will rely on the coordinated allocation of compute, storage, and networking resources in order to meet the functional requirements of 5G services, as well as to guarantee efficient usage of the network infrastructure. However, the 5G service provisioning paradigm will also require a unified infrastructure service market that integrates multiple operators and technologies. The 5G Exchange (5GEx) project, building heavily on Software-Defined Networking (SDN) and Network Function Virtualization (NFV) functionalities, tries to overcome this market and technology fragmentation by designing, implementing, and testing a multi-domain orchestrator (MdO) prototype for fast and automated Network Service (NS) provisioning over multiple technologies and across multiple operators. This paper presents a first implementation of the 5GEx MdO prototype obtained by extending existing open source software tools at the disposal of the 5GEx partners. The main functions of the 5GEx MdO prototype are showcased by demonstrating how it is possible to create and deploy NSs in the context of a Slice as a Service (SlaaS) use case, based on a multi-operator scenario. The 5GEx MdO prototype performance is experimentally evaluated by running validation tests within the 5GEx sandbox. The overall time required for NS deployment has been evaluated considering NSs deployed across two operators.

FERO: Fast and Efficient Resource Orchestrator for a Data Plane Built on Docker and DPDK

IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, 2018

The orchestration in 5G exchange — A multi-provider NFV framework for 5G services

2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), 2017

This paper presents the design of the 5GT Service Orchestrator (SO), which is one of the key components of the 5G-TRANSFORMER (5GT) system for the deployment of vertical services. Depending on the requests from verticals, the 5GT-SO offers service or resource orchestration and federation. These functions include all tasks related to coordinating and providing the vertical with an integrated view of services and resources from multiple administrative domains. In particular, service orchestration entails managing end-to-end services that are split into various domains based on requirements and availability. Federation entails managing administrative relations at the interface between the SOs belonging to different domains and handling abstraction of services. The SO key functionalities, architecture, interfaces, as well as two sample use cases for service federation and service and resource orchestration are presented. Results for the latter use case show that a vertical service is deployed in the order of minutes.

A Multi-Domain Multi-Technology SFC Control Plane Experiment: A UNIFYed Approach

This document reports on the experimentation with a combined Network Function Virtualization (NFV) orchestrator and Service Function Chain (SFC) Control Plane proof of concept prototype.

Fairness and stability analysis of high speed transport protocols

TCP congestion control has successfully managed the stability of the Internet over the past decades, but it has reached its limitations in “challenging” network environments. The new challenges of next-generation networks (e.g., high-speed communication or communication over different media) have generated an urgent need to further develop the congestion control of the current Internet. In recent years, several new proposals and modifications of the standard congestion control mechanism have been developed by different research groups all over the world. These new mechanisms and TCP versions address different aspects of future networks and applications and improve the performance of regular TCP. One of the important network environments where the serious drawbacks of standard TCP Reno can be experienced is high-speed wide area networks. These networks are characterized by a high bandwidth-delay product (BDP), and TCP cannot efficiently utilize them due to its conservative congestion cont...
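
A back-of-the-envelope illustration of the problem: on a high-BDP path, TCP Reno halves its congestion window after a single loss and then regains only one MSS per RTT, so refilling the pipe can take thousands of round trips. The link parameters below are illustrative.

```python
# Why standard TCP Reno struggles on high bandwidth-delay product paths.
def reno_recovery_time(bandwidth_bps, rtt_s, mss_bytes=1500):
    bdp_bytes = bandwidth_bps * rtt_s / 8          # bytes in flight to fill the pipe
    window_pkts = bdp_bytes / mss_bytes            # window (in packets) at full utilization
    rtts_to_recover = window_pkts / 2              # +1 MSS per RTT after the window is halved
    return bdp_bytes, rtts_to_recover * rtt_s

bdp, t = reno_recovery_time(10e9, 0.1)             # 10 Gbps link, 100 ms RTT
print(f"BDP = {bdp/1e6:.0f} MB, recovery after one loss ~ {t/60:.0f} minutes")
# -> BDP = 125 MB, recovery after one loss ~ 69 minutes
```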
