Alex Glikson - Academia.edu
Papers by Alex Glikson
Methods and computing systems involve identifying a target location for an in-use migration of virtual machine services, delivering the virtual machine services to an operating system from a source location, migrating the virtual machine services a first time via a data network from the source location to an intermediate location, delivering the virtual machine services to the operating system from the intermediate location, and migrating the virtual machine services a second time from the intermediate location to the target location via the data network. Interruptions of the virtual machine services to the operating system during the migration of the virtual machine are minimal.
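The two-hop migration flow described above lends itself to a short orchestration sketch. This is illustrative only, assuming a hypothetical Hypervisor API whose serve and live_migrate methods stand in for whatever virtualization primitives the patent targets; none of these names come from the source.

```python
# Illustrative sketch of the two-hop live-migration flow described above.
# The Hypervisor class and its methods are hypothetical placeholders.

class Hypervisor:
    """Minimal stand-in for a virtualization management API."""

    def serve(self, vm, host):
        print(f"Serving {vm} to the guest OS from {host}")

    def live_migrate(self, vm, src, dst):
        print(f"Live-migrating {vm} over the data network: {src} -> {dst}")


def two_hop_migration(hv, vm, source, intermediate, target):
    # 1. VM services are delivered to the operating system from the source.
    hv.serve(vm, source)
    # 2. First migration: source -> intermediate, over the data network.
    hv.live_migrate(vm, source, intermediate)
    # 3. VM services continue from the intermediate location.
    hv.serve(vm, intermediate)
    # 4. Second migration: intermediate -> target, over the data network.
    hv.live_migrate(vm, intermediate, target)


if __name__ == "__main__":
    two_hop_migration(Hypervisor(), "vm-1", "host-a", "host-b", "host-c")
```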
Proceedings of the 12th ACM International Conference on Systems and Storage, 2019
Serverless computing revolutionizes cloud software by eliminating the need to manage the underlying infrastructure, while providing efficient scaling, performance and security isolation as well as usage metering. The goal of Runboxes is to bring the benefits of serverless computing to Interactive Computing, involving ad-hoc, stateful, resource-intensive tasks, whose resource demand is directly affected by the observed user behavior. We evaluate Runboxes in the context of cost-efficient hosting of sandbox programming environments for Computer Science students.
ArXiv, 2019
Rapid adoption of the serverless (or Function-as-a-Service, FaaS) paradigm, pioneered by Amazon with AWS Lambda and followed by numerous commercial offerings and open source projects, introduces new challenges in designing the cloud infrastructure, balancing between performance and cost. While instant per-request elasticity that FaaS platforms typically offer application developers makes it possible to achieve high performance of bursty workloads without over-provisioning, such elasticity often involves extra latency associated with on-demand provisioning of individual runtime containers that serve the functions. This phenomenon is often called cold starts, as opposed to the situation when a function is served by a pre-provisioned "warm" container, ready to serve requests with close to zero overhead. Providers are constantly working on techniques aimed at reducing cold starts. A common approach to reduce cold starts is to maintain a pool of warm containers, in anticipation...
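One way to see why a warm pool helps is a toy simulation of request latency with and without pre-provisioned containers. The sketch below is a minimal model under invented parameters (pool size, cold-start delay, container-return probability); it is not the policy of any particular FaaS provider.

```python
import random

COLD_START_S = 1.5   # assumed latency of provisioning a container on demand
WARM_START_S = 0.01  # assumed near-zero overhead of a pre-provisioned container


def mean_latency(n_requests, pool_size, seed=0):
    """Toy model: each request grabs a warm container if one is available,
    otherwise it pays a cold start; finished containers may rejoin the pool."""
    random.seed(seed)
    warm = pool_size
    total = 0.0
    for _ in range(n_requests):
        if warm > 0:
            warm -= 1
            total += WARM_START_S
        else:
            total += COLD_START_S
        if random.random() < 0.5 and warm < pool_size:
            warm += 1  # a container finishes and is kept warm for reuse
    return total / n_requests


if __name__ == "__main__":
    for size in (0, 1, 3, 8):
        print(f"pool={size}: mean latency {mean_latency(1000, size):.3f}s")
```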
Proceedings of the 9th International Conference on Network and Service Management (CNSM 2013), 2013
Optimal resource allocation is a key ingredient in the ability of cloud providers to offer agile data centers and cloud computing services at a competitive cost. In this paper we study the problem of placing images and virtual machine instances on physical containers in a way that maximizes the affinity between the images and the virtual machine instances created from them. This reduces the communication overhead and latency imposed by the ongoing communication between the virtual machine instances and their respective images. We model this problem as a novel placement problem that extends the class constrained multiple knapsack problem (CCMK) previously studied in the literature, and present a polynomial-time local search algorithm for the case where all the relevant images have the same size. We prove that this algorithm has an approximation ratio of (3 + ε) and then evaluate its performance in a general setting where images and virtual machine instances are of arbitrary sizes, using production data from a private cloud. The results indicate that our algorithm can obtain significant improvements (up to 20%) compared to the greedy approach, in cases where local image storage or main memory resources are scarce.
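The greedy baseline mentioned above can be pictured roughly as follows. This is a hedged sketch, not the paper's algorithm: the "place each VM on a container that already holds its image, if it fits" rule and all the data structures are assumptions made for illustration.

```python
def greedy_affinity_placement(vms, containers):
    """Sketch of a greedy image/VM co-placement heuristic.

    vms:        list of (vm_id, image_id, size) tuples.
    containers: dict container_id -> {"capacity": int, "images": set of image_ids}.
    Returns a dict vm_id -> container_id (None if the VM could not be placed).
    """
    free = {cid: c["capacity"] for cid, c in containers.items()}
    placement = {}
    for vm_id, image_id, size in vms:
        # Prefer containers that already hold the VM's image (maximize affinity).
        candidates = [cid for cid, c in containers.items()
                      if image_id in c["images"] and free[cid] >= size]
        if not candidates:
            # Fall back to any container with enough free capacity.
            candidates = [cid for cid in containers if free[cid] >= size]
        if not candidates:
            placement[vm_id] = None
            continue
        target = max(candidates, key=lambda cid: free[cid])
        free[target] -= size
        containers[target]["images"].add(image_id)  # image is now local there
        placement[vm_id] = target
    return placement
```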
Lecture Notes in Computer Science, 2003
Graph grammars are widely used in order to define classes of graphs having some inductive and narrow structure. In many cases the narrowness can be measured in terms of the maximal tree-width and/or clique-width of the graphs in the class (see [RS86], [CO00] for definitions of these notions). It is known that, using the corresponding tree-decomposition or clique-width parse term, any property of the graph expressed in Monadic Second Order Logic can be evaluated efficiently ([CMR00]). In general, finding the tree-width/clique-width is NP-complete, but specific upper bounds are known for some graph classes generated by context-free grammars ([Lau88], [ER90], [CO00], [CER93]). We investigate here the explicit relationship between various (not necessarily context-free) Neighborhood Controlled Embedding (NCE) graph grammars and the clique-width of graphs generated by them. We show that all the graphs generated by any given NCE graph grammar have bounded clique-width (where the bound depends only on parameters of the grammar), and provide the corresponding algorithms (based on dynamic programming techniques) for finding a clique-width expression based on a given derivation tree. We also obtain improvements in the case where the graphs generated by such grammars are of tree-width (k, m). This also gives a new and short proof of a result due to Courcelle and Olariu, that graphs of tree-width k have clique-width at most 2^{k+1} + 1. All the results are obtained for NLC grammars and generalized to both NCE and edNCE grammars for directed graphs with dynamic edge relabelling.
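Stated explicitly, the Courcelle–Olariu bound cited at the end of the abstract (with tw and cw denoting tree-width and clique-width) reads:

```latex
% Courcelle–Olariu bound, as cited in the abstract above:
% every graph of tree-width at most k has clique-width at most 2^{k+1} + 1.
\[
  \mathrm{tw}(G) \le k \;\Longrightarrow\; \mathrm{cw}(G) \le 2^{k+1} + 1
\]
```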
domino.research.ibm.com
Alex Glikson, Amir Epstein, Assaf Israel, John Marberg. IBM Haifa Research Lab, Haifa 31905, Israel. {glikson,amire,assafi,marberg}@il.ibm.com ... ABSTRACT Server virtualization introduces new capabilities that improve the efficiency of data centers. One such ...
2014 IEEE International Conference on Cloud Engineering, 2014
Proceedings of the 10th ACM International Systems and Storage Conference
Journal of Sensor and Actuator Networks
Nowadays, the penetration of sensors and actuators in different application fields is revolutionizing all aspects of our daily life. One of the major sectors taking advantage of such cutting-edge, cheap smart devices is healthcare. In this context, Remote Patient Monitoring (RPM) at home represents a tempting opportunity for hospitals to reduce clinical costs and to improve the quality of life of both patients and their families. It allows patients to be monitored remotely by means of networks of Internet of Things (IoT) medical devices equipped with sensors and actuators that collect healthcare data from patients and send them to a Cloud-based Hospital Information System (HIS) for processing. Up to now, many different proprietary software systems have been developed as stand-alone, expensive solutions, presenting interoperability, extensibility, and scalability issues. In recent years, the European Commission (EC) has promoted the wide adoption of FIWARE technology, launching 16...
The cloud paradigm facilitates cost-efficient elastic computing, allowing workloads to scale on demand. As cloud size increases, the probability that all workloads simultaneously scale up to their maximum demand diminishes. This observation allows multiplexing cloud resources among multiple workloads, greatly improving resource utilization. The ability to host virtualized workloads such that the available physical capacity is smaller than the sum of the maximal demands of the workloads is referred to as over-commit or over-subscription. Naturally, over-commit implies a risk of resource congestion. Therefore, there is a tradeoff between improving resource utilization by increasing the over-commit ratio and exposing the infrastructure provider and customers to the risk of resource congestion. In this work, we observe that while resource multiplexing naturally occurs in the cloud, the risks associated with exploiting it for higher levels of cloud utilization are not transparent to the customers. We consider workloads comprising elastic groups of Virtual Machines (VMs). We suggest that cloud providers extend the standard availability Service Level Agreement (SLA) to express the probability of successfully launching a VM (to expand a workload), complementing the current practice of offering a standard SLA only on the availability of VMs that have already been successfully launched. Using the proposed extended availability SLA, we introduce a notion of the cloud effective demand, which generalizes previously introduced notions of the effective size of a single VM and the effective bandwidth of stand-alone and multiplexed network connections. We propose an algorithmic framework that uses the cloud effective demand to estimate the total physical capacity required for SLA compliance under over-commit. We evaluate the proposed methodology using simulations based on data collected from a real private-cloud production environment.
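The capacity-estimation idea can be illustrated with a generic statistical-multiplexing calculation. The sketch below is not the paper's algorithm: it substitutes a simple normal approximation (sum of mean demands plus a quantile of the pooled standard deviation) for the cloud effective demand, and all workload figures are invented.

```python
from math import sqrt
from statistics import NormalDist


def required_capacity(groups, sla_success_prob=0.999):
    """groups: list of (mean_demand, std_demand) per elastic VM group.

    Returns the physical capacity C such that, under a normal approximation of
    the aggregate demand, P(total demand <= C) >= sla_success_prob. This plays
    the role of an effective-demand estimate for illustration only."""
    total_mean = sum(m for m, _ in groups)
    total_std = sqrt(sum(s * s for _, s in groups))
    z = NormalDist().inv_cdf(sla_success_prob)
    return total_mean + z * total_std


if __name__ == "__main__":
    # Hypothetical elastic groups: (mean demand, std deviation), e.g. in vCPUs.
    groups = [(40, 10), (25, 8), (60, 15), (10, 4)]
    peak_sum = sum(m + 3 * s for m, s in groups)  # crude sum-of-peaks proxy
    print(f"effective-demand capacity: {required_capacity(groups):.1f} vCPUs")
    print(f"sum-of-peaks provisioning: {peak_sum:.1f} vCPUs")
```

Under these made-up numbers the multiplexed estimate comes out well below the sum of per-group peaks, which is the over-commit opportunity the abstract describes.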
Proceedings of the 9th ACM International on Systems and Storage Conference - SYSTOR '16, 2016
2015 IEEE Symposium on Computers and Communication (ISCC), 2015
The cloud paradigm facilitates cost-efficient elastic computing, allowing workloads to scale on demand. As cloud size increases, the probability that all workloads simultaneously scale up to their maximum demand diminishes. This observation allows multiplexing cloud resources among multiple workloads, greatly improving resource utilization. The ability to host virtualized workloads such that the available physical capacity is smaller than the sum of the maximal demands of the workloads is referred to as over-commit or over-subscription. Naturally, over-commit implies a risk of resource congestion. Therefore, there is a tradeoff between improving resource utilization by increasing the over-commit ratio and exposing the infrastructure provider and customers to the risk of resource congestion. In this work, we observe that while resource multiplexing naturally occurs in the cloud, the risks associated with exploiting it for higher levels of cloud utilization are not transparent to the customers. We ...