Susan Vrbsky - Academia.edu
Papers by Susan Vrbsky
Computer, 2003
The Critical Analysis Reporting Environment provides an efficient tool for transportation safety engineers and policymakers to use in analyzing the categorical crash data typically obtained from police reports.
2018 IEEE 11th International Conference on Cloud Computing (CLOUD), 2018
In this paper, virtual machine performance is determined while stationary and during a live migration process. Benchmarks are used to determine the network, CPU, and disk I/O performance of the virtual machines while stationary and while migrating. To further examine virtual machine performance, three types of persistent storage are used: Solid-State Drive (SSD), Hard Disk Drive (HDD), and Diskless (network boot). Results show the degraded performance of a diskless cloud cluster, caused by the network latency of fetch requests to a networked PXE server. The diskless cloud cluster is compared with the performance of a traditional disk-full cluster consisting of SSDs and HDDs.
…address. Users will be able to access cloud resources through a simple web interface, and maintenance of the cloud will be contained within private networking resources. Users needing resources in the local cloud will rely on a NAT router, with other functionalities, to gain access to these resources. We also demonstrate how to scale, should the need arise for multiple geographically distributed clusters in the local cloud, using only one IP address per cluster.
ACM Journal of Experimental Algorithmics, 2018
Graphics Processing Units have been applied to solve NP-hard problems with no known polynomial-time solutions. An example of such a problem is the Traveling Salesman Problem (TSP). The TSP is one of the most commonly studied combinatorial optimization problems and has applications in engineering, transportation, and logistics. This article presents an improved algorithm for approximating the TSP on fully connected, symmetric graphs by utilizing the GPU. Our approach improves an existing 2-opt hill-climbing algorithm with random restarts by considering multiple updates to the current path in parallel, allowing up to k updates per iteration (k-swap). With our k-swap modification, we show a speed-up over the existing algorithm of 4.5× to 22.9× on data sets ranging from 1,400 to 33,810 nodes.
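The k-swap idea above can be pictured with a small sketch. The Python code below is a CPU-only, illustrative version of 2-opt hill climbing with random restarts that applies up to k non-overlapping improving moves per iteration; the function names, the move-selection rule, and the restart and iteration limits are assumptions for illustration, not the paper's GPU implementation.

```python
# Minimal CPU sketch of 2-opt hill climbing with up to k non-overlapping
# improving moves applied per iteration (the "k-swap" idea). Illustrative only;
# the paper evaluates candidate moves in parallel on the GPU.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_gain(tour, dist, i, j):
    # Gain from reversing tour[i+1..j]: edges (i,i+1) and (j,j+1) are replaced
    # by (i,j) and (i+1,j+1); a positive gain means a shorter tour.
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return (dist[a][b] + dist[c][d]) - (dist[a][c] + dist[b][d])

def k_swap_two_opt(dist, k=4, restarts=3, iters=500):
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for _ in range(restarts):                       # random restarts
        tour = list(range(n))
        random.shuffle(tour)
        for _ in range(iters):
            # Collect improving 2-opt moves (this is the part a GPU evaluates in parallel).
            moves = [(two_opt_gain(tour, dist, i, j), i, j)
                     for i in range(n - 1) for j in range(i + 2, n)]
            moves = [m for m in moves if m[0] > 1e-9]
            if not moves:
                break                               # local optimum reached
            # Apply up to k best moves whose affected positions do not overlap.
            moves.sort(reverse=True)
            used, applied = set(), 0
            for g, i, j in moves:
                span = {p % n for p in range(i, j + 2)}
                if used & span:
                    continue                        # conflicts with an already applied move
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                used |= span
                applied += 1
                if applied == k:
                    break
        length = tour_length(tour, dist)
        if length < best_len:
            best_tour, best_len = list(tour), length
    return best_tour, best_len
```

Here `dist` is a symmetric distance matrix (`dist[a][b]`); the nested loop over (i, j) pairs is what the paper maps onto GPU threads, while this sketch only shows the move-selection and k-swap bookkeeping.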
IEEE Systems Journal, 2014
In this paper, we discuss how to prevent users' passwords from being stolen by adversaries in online environments and at automated teller machines. We propose differentiated virtual password mechanisms in which a user has the freedom to choose a virtual password scheme ranging from weak to strong security, where a virtual password requires a small amount of human computing to secure the user's password. The tradeoff is that the stronger the scheme, the more complex it may be. The schemes include a default method (the traditional password scheme), system-recommended functions, user-specified functions, user-specified programs, and so on. A function or program implements the virtual password concept, trading complexity (a small amount of human computing) for security. We further propose several functions to serve as system-recommended functions and provide a security analysis. For user-specified functions, we adopt secret little functions, in which security is enhanced by hiding the secret functions/algorithms.
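The challenge-response flavor of a system-recommended virtual password function can be sketched briefly. The digit-wise function below is only an illustrative stand-in under assumed names; the paper's actual recommended functions and their security analysis are more involved.

```python
# Sketch of the virtual-password flow: the server issues a fresh random value,
# and the user submits f(secret, random) rather than the fixed secret, so an
# observed response cannot simply be replayed. The digit-wise function here is
# a placeholder, not the scheme proposed in the paper.
import secrets

def issue_challenge(length: int) -> str:
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def virtual_response(secret_digits: str, challenge_digits: str) -> str:
    # Digit-wise addition modulo 10: a small amount of "human computing"
    # that a user can perform mentally.
    return "".join(str((int(s) + int(c)) % 10)
                   for s, c in zip(secret_digits, challenge_digits))

secret = "493817"                                   # user's memorized secret (placeholder)
challenge = issue_challenge(len(secret))            # fresh for each login attempt
submitted = virtual_response(secret, challenge)     # computed mentally by the user
accepted = submitted == virtual_response(secret, challenge)   # server-side check
print("challenge:", challenge, "submitted:", submitted, "accepted:", accepted)
```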
2015 IEEE 8th International Conference on Cloud Computing, 2015
Live virtual machine migration allows resources on one physical server to be moved to another with little or no interruption to the processes of the guest operating system. Performing a live migration involves copying the guest virtual machine's memory state and CPU register state from the hypervisor on one server to another. In this paper, virtual machine migration is evaluated in terms of the performance of the virtual machine while being migrated, the performance of the cloud architecture while migrating the virtual machine, and the power cost of performing a live migration. The experiments on live virtual machine migration give insight into how the cloud architecture responds to and handles live migrations, and how virtual machines perform while they are in a migration phase.
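One simple way to observe guest responsiveness during a migration is to probe a service on the virtual machine and record gaps in reachability. The sketch below is an assumed measurement harness (host, port, and timing parameters are placeholders); the paper uses its own benchmark suite for network, CPU, and disk I/O.

```python
# Sketch: probe a TCP service on the migrating guest and record how long it
# was unreachable (perceived downtime). Host/port and timing parameters are
# placeholders, not the paper's benchmark configuration.
import socket
import time

def probe_downtime(host: str, port: int, duration_s: float = 60.0, interval_s: float = 0.1):
    gaps, last_ok = [], None
    end = time.time() + duration_s
    while time.time() < end:
        now = time.time()
        try:
            with socket.create_connection((host, port), timeout=0.5):
                if last_ok is not None and now - last_ok > 2 * interval_s:
                    gaps.append(now - last_ok)      # guest was unreachable this long
                last_ok = now
        except OSError:
            pass                                    # probe failed; keep trying
        time.sleep(interval_s)
    return gaps

# Example: gaps = probe_downtime("192.0.2.10", 22)  # SSH port on the guest (placeholder IP)
```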
2008 IEEE International Conference on Communications, 2008
2010 IEEE Second International Conference on Cloud Computing Technology and Science, 2010
Proceedings of the 51st ACM Southeast Conference, 2013
Management of cloud and cluster systems can be a daunting task, and it becomes increasingly complicated as the organization grows and requires more computing resources. Organizations that need a large amount of computing resources face two major deployment decisions: a native cluster architecture or a local cloud-based architecture. Regardless of the underlying architecture, the deployment process should remain universal. Many different deployment options are available. Automated deployment is desirable because these systems are typically headless and thus difficult to manage individually. The tool we have developed allows users to deploy filesystem images on computer systems within the same local network. Additionally, the tool presents the user with node-management facilities for controlling the power state of the systems, as well as information such as IP addresses, MAC addresses, and power state.
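A concrete example of one node-management primitive such a tool needs is powering machines on over the network. The Wake-on-LAN sketch below shows the standard magic-packet layout; the MAC address is a placeholder, and the paper's tool also covers image deployment and inventory, which are not shown here.

```python
# Sketch: power on a headless node by Wake-on-LAN. The magic packet is six
# 0xFF bytes followed by the target MAC address repeated 16 times, sent as a
# UDP broadcast. The MAC below is a placeholder.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    magic = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic, (broadcast, port))

# wake_on_lan("00:11:22:33:44:55")   # placeholder MAC address
```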
23rd International Conference on Distributed Computing Systems Workshops, 2003. Proceedings.
2008 International Symposium on a World of Wireless, Mobile and Multimedia Networks, 2008
The privacy of wireless communications is becoming an important issue due to the open nature of the wireless medium. Much research has addressed the anonymity of communicating parties, the location privacy of the message source and destination, and the privacy of network routing paths. However, with the advent of new radio identification and localization techniques, more advanced privacy attacks are possible. We describe a new privacy attack in which the adversary tries to infer the itineraries of the nodes in the network. To protect itinerary privacy, we design an algorithm, called the ∆-mobility camouflaging algorithm, which can be applied on top of any mobility model by changing the original motion segments into ∆-shaped camouflaging paths. Our analysis shows that the ∆-mobility camouflaging algorithm is cost-effective: in most cases it decreases the itinerary exposure probability by more than 80% at a cost of less than 3% extra travel distance. By comparing DSR routing performance under different mobility models and their counterparts with ∆-motion, we find that the ∆-mobility camouflaging algorithm does not degrade network-layer routing performance in terms of message delivery ratio, delivery latency, or routing overhead.
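The geometric core of the camouflaging idea, replacing a straight motion segment with a triangular detour, can be sketched as below. The midpoint-offset rule and the offset ratio are illustrative assumptions; the paper's ∆-mobility camouflaging algorithm defines its own construction and parameters.

```python
# Sketch: turn one straight motion segment into a Δ-shaped path by inserting a
# detour waypoint perpendicular to the segment. The offset rule is illustrative.
import math
import random

def delta_camouflage(start, end, offset_ratio=0.1):
    (x1, y1), (x2, y2) = start, end
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0.0:
        return [start, end]
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0        # midpoint of the segment
    px, py = -dy / length, dx / length               # unit normal to the segment
    sign = random.choice((-1.0, 1.0))                # random side hides the detour direction
    apex = (mx + sign * offset_ratio * length * px,
            my + sign * offset_ratio * length * py)
    return [start, apex, end]                        # Δ-shaped camouflaging path

def extra_distance(path):
    travelled = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    return travelled - math.dist(path[0], path[-1])

path = delta_camouflage((0.0, 0.0), (100.0, 0.0), offset_ratio=0.1)
print(path, "extra travel:", round(extra_distance(path), 2))   # roughly 2% extra for this offset
```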
2010 IEEE Second International Conference on Cloud Computing Technology and Science, 2010
ACM SIGMOD Record, 1997
Multilevel relations, based on the current multilevel secure (MLS) relational data models, can present a user with information that is difficult to interpret and may display an inconsistent outlook about the views of other users. Such ambiguity is due to the lack of a comprehensive method for asserting and interpreting beliefs about lower level information. In this paper we identify different beliefs that can be held by higher level users about lower level information, and we introduce the new concept of a mirage tuple. We present a mechanism for asserting beliefs about all accessible tuples, including lower level tuples. This mechanism provides every user of an MLS database with an unambiguous interpretation of all viewable information and presents a consistent account of the views at all levels below the user's level.
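A minimal data-structure sketch of the belief-assertion idea is given below: tuples carry classification levels, and a higher-level user may record a belief (for example, that a visible lower-level tuple is a mirage) about any tuple at or below their own level. The labels, belief values, and access rule are illustrative; the paper defines the precise semantics.

```python
# Sketch: multilevel tuples with per-level belief assertions. Levels, belief
# values, and the visibility rule are illustrative placeholders.
from dataclasses import dataclass, field

LEVELS = {"U": 0, "C": 1, "S": 2}          # Unclassified < Confidential < Secret

@dataclass
class MLSTuple:
    key: str
    value: str
    level: str                              # classification of this tuple
    beliefs: dict = field(default_factory=dict)   # asserting level -> belief

def visible(t: MLSTuple, user_level: str) -> bool:
    return LEVELS[t.level] <= LEVELS[user_level]

def assert_belief(t: MLSTuple, user_level: str, belief: str) -> None:
    # Beliefs may only be asserted about tuples visible at the user's level.
    if not visible(t, user_level):
        raise PermissionError("tuple not visible at this level")
    t.beliefs[user_level] = belief

relation = [MLSTuple("ship1", "in port", "U"), MLSTuple("ship1", "at sea", "S")]
assert_belief(relation[0], "S", "mirage")   # Secret user marks the cover story as a mirage tuple
print([(t.value, t.level, t.beliefs) for t in relation if visible(t, "S")])
```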
2008 IEEE International Conference on Communications, 2008
Journal of Cloud Computing: Advances, Systems and Applications, 2013
As federal funding for many public non-profit organizations (NPOs) seems to be dwindling, it is of the utmost importance that efforts focus on reducing the operating costs of needy organizations, such as public schools. Our approach to reducing organizational costs combines the benefits of a high-performance cloud architecture with low-power, thin-client devices. However, general-purpose private cloud architectures are not easily deployable by average users, or even by those with some computing knowledge. For this reason, we propose a new vertical cloud architecture focused on ease of deployment and management, as well as providing organizations with cost-efficient virtualization, storage, and other organization-specific utilities. We postulate that if organizations are provided with on-demand access to electronic resources in a cost-efficient way, then operating costs may be reduced while user experience and organizational efficiency increase. In this paper we discuss our private vertical cloud architecture, called THUNDER. Additionally, we introduce a number of methodologies that could enable needy non-profit organizations to decrease costs while providing many additional benefits for users. Specifically, this paper introduces our current implementation of THUNDER, details of the architecture, and the software system we have designed to target the needs of underfunded organizations.
2013 IEEE 33rd International Conference on Distributed Computing Systems Workshops, 2013
One of the core components in building a cloud infrastructure is the middleware, which allows for ease of resource management. Middleware solutions for IaaS cloud architectures are responsible for managing the instantiation of virtual machine images, constructing persistent storage volumes, providing node-to-node message passing for metadata retrieval, and more. Our reason for developing a new cloud middleware API is to address issues we have encountered in current cloud middleware solutions, centered on ease of deployment and ease of interfacing with the system. In this paper we present a cloud middleware API that we have developed for building a private IaaS cloud architecture, and we compare our implementation with the open-source, general-purpose IaaS cloud architectures currently available.
2013 10th International Conference on Information Technology: New Generations, 2013
High-performance and parallel computing are traditionally implemented on very large dedicated compute clusters. However, as many organizations begin to adopt service-oriented, cloud-based infrastructures, we can expect to see the development of parallel computing in the cloud. The goal of a parallel compute cluster is to divide a large job into several small jobs, execute the small jobs in parallel on many compute nodes, and then combine the results in some coherent manner. The biggest hurdle in moving this type of service to a cloud-based infrastructure is that performance will undoubtedly be affected by many factors, particularly those related to virtualization in clouds, such as memory and CPU overhead, limited resources, and other aspects of hardware virtualization. To understand how virtualization can affect parallel computing in a small private cloud, we have devised four case studies that examine the performance of Apache Hadoop in varying environments on our private cloud. Our case studies comprise a baseline, bare-metal (non-virtualized) cluster deployment of seven nodes, a seven-node virtual machine cluster, a twenty-node virtual machine cluster, and an optimized seven-node virtual machine cluster. Results show that, although small data sets result in comparable job completion times, as the data size increases the performance of Apache Hadoop is affected greatly by virtualization, even when we attempt to optimize the configuration of our cloud.
International Journal of Cloud Computing, 2014
This paper focuses on the deployment of a geographically distributed private cloud architecture that offers resources similar to those of commercial vendors. Recognising the implications of deploying such a system in an environment with limited networking resources (IP addresses), we sought a solution that works when each resource cluster in our cloud has only one public IP address. As more cloud resources are needed, we demonstrate the ability to add these resources in a grid-like fashion to the master cloud cluster. To overcome the complexities of using this architecture, this paper also introduces a graphical, web-based interface for the private cloud architecture. One of the problems of deploying an infrastructure-as-a-service cloud into a general organisation is getting users connected to their resources. The interface described in this paper gives users an easy way to access private cloud resources. Since the cloud computing clusters are hosted behind NAT routers, the interface gives users the port-forwarding information needed to access private cloud computational and persistent storage resources.
2012 International Conference on Computing, Networking and Communications (ICNC), 2012
Cloud storage services are currently becoming a very popular solution for medium-sized and startup companies. However, there is still no suitable solution for deploying transactional databases on a cloud platform. The maintenance of the ACID properties (Atomicity, Consistency, Isolation, and Durability) is the primary obstacle to implementing transactional cloud databases. The main features of cloud computing (scalability, availability, and reliability) are achieved by sacrificing consistency. While different forms of consistent states have been introduced, they do not address the needs of many database applications. In this paper we present a tree-based consistency approach, called TBC, that reduces interdependency among replica servers to minimize the response time of cloud databases and maximize the performance of those applications. Experimental results indicate that our TBC approach trades off availability and consistency against performance.
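The interdependency reduction in TBC can be pictured with a small sketch: replicas form a tree, a write is acknowledged once the primary and its direct children apply it, and the rest of the tree is updated lazily. The class and function names, and the choice of "primary plus its children" as the synchronous set, are illustrative assumptions, not the paper's implementation.

```python
# Sketch of tree-based consistency: a write is applied synchronously at the
# root and its direct children, then pushed down the remaining tree lazily.
# Names and the synchronous set are illustrative placeholders.
from collections import deque

class Replica:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.store = {}

    def apply(self, key, value):
        self.store[key] = value

def tbc_write(root: Replica, key, value) -> str:
    # Synchronous step: primary and its direct children apply the write,
    # after which the client would be acknowledged.
    root.apply(key, value)
    for child in root.children:
        child.apply(key, value)
    ack = "ack"
    # Lazy step (would run in the background): propagate level by level to the
    # remaining replicas in the tree.
    pending = deque(g for c in root.children for g in c.children)
    while pending:
        node = pending.popleft()
        node.apply(key, value)
        pending.extend(node.children)
    return ack

# Small replica tree: root -> {r1, r2}, r1 -> {r3}
root, r1, r2, r3 = Replica("root"), Replica("r1"), Replica("r2"), Replica("r3")
root.children, r1.children = [r1, r2], [r3]
print(tbc_write(root, "x", 42), [r.store for r in (root, r1, r2, r3)])
```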
ACM SIGMOD Record, 2003
There has been an abundance of research within the last couple of decades in the area of multilevel secure (MLS) databases. Recent work in this field deals with the processing of multilevel transactions, expanding the logic of MLS query languages, and applying MLS principles within the realm of E-Business. However, there is a basic flaw in the MLS logic that obstructs the handling of clearance-invariant aggregate queries and physical-entity-related queries, where some of the information in the database may be gleaned from the outside world. This flaw stands in the way of more pervasive adoption of MLS models by the developers of practical applications. This paper identifies the cause of this impediment, the dependence of cover stories on the value of a user-defined key, and proposes a practical solution.