Storage Area Networks Research Papers

We propose the creation of a wireless storage area network (SAN) and analyze its benefits. The proposed wireless SAN (WSAN) consists of a SAN switch connected to multiple wireless access points (APs) that communicate with the storage devices. This network would save space and reduce overall costs by not requiring wired connections. Wireless SANs would also provide more freedom in the placement of storage devices. However, because the number of wireless access points is smaller than the number of storage devices, user data requests can be blocked if all access points are busy. An important design goal is therefore to minimize the probability that a network access request will be blocked.
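The blocking behavior described above is the classic setting of the Erlang-B model: with more storage devices than APs, a request is lost when every AP is busy. As a hedged illustration (the paper's own traffic model may differ), the sketch below computes the Erlang-B blocking probability for a given number of APs and offered load:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability with `servers` access points and `offered_load`
    in Erlangs, via the numerically stable recurrence
    B(0) = 1;  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Example: 4 APs carrying 2 Erlangs of storage traffic.
print(f"{erlang_b(4, 2.0):.4f}")  # → 0.0952
```

Adding a fifth AP at the same load lowers the blocking probability further, which is exactly the trade-off the stated design goal quantifies.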

Storage area networks are among the most popular and well-structured storage systems for both multi-domain and enterprise-domain storage. In both domains, data are stored securely and efficiently. In this paper we present important aspects of storage area networks, along with some of the precautions needed to protect the data. The focal point of this paper is safeguarding the storage area network by applying best practices to develop storage in a more secure manner.

Since the onset of cloud computing and its inroads into infrastructure as a service, virtualization has become centrally important to abstraction and resource management. However, the additional layers of abstraction that virtualization provides come at a trade-off between performance and cost in a cloud environment where everything is billed on a pay-per-use basis. Containers, widely perceived to be the future of virtualization, were developed to address these issues. This study scrutinizes the performance of a conventional virtual machine and contrasts it with containers. We critically assess each parameter and its behavior when subjected to various stress tests, and discuss the implementations and their performance metrics to help draw conclusions about which is ideal for particular needs. After assessing the results and discussing the limitations, we conclude with prospects for future research.
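One of the simplest stress tests that can be run identically inside a VM, inside a container, and on bare metal is a timed CPU-bound loop. The harness below is a generic sketch of that methodology, not the benchmark suite the paper used; the iteration count and the 8% example slowdown are illustrative:

```python
import time

def cpu_stress(n_iters: int = 2_000_000) -> float:
    """Tight arithmetic loop; returns wall-clock seconds. Running the same
    function in each environment exposes CPU virtualization overhead."""
    start = time.perf_counter()
    acc = 0
    for i in range(n_iters):
        acc += i * i % 7
    return time.perf_counter() - start

def overhead_pct(bare: float, virt: float) -> float:
    """Relative slowdown of a virtualized run versus bare metal."""
    return 100.0 * (virt - bare) / bare

# e.g. overhead_pct(1.00, 1.08) → 8.0, an 8 % slowdown under virtualization
```

Memory, disk, and network stressors would follow the same measure-and-compare pattern.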

In network-scale storage allocation (network storage), a new concept is needed to improve the effectiveness of communication between storage servers and clients. In conventional network storage, every node communicates with the storage server through a single file image held on the server, creating a single point of failure that can leave the server node for a given service completely inaccessible.
The file-transfer process on the server can be modified into an end user-initiator-storage server mechanism with a redundant server initiator concept, as found in a Storage Area Network. This requires every initiator node that accesses the storage server to share the same file system and to coordinate with the others. To meet this requirement, a Cluster File System is installed on every initiator node that accesses the storage server. The mechanism supports High Availability Clustering: each cluster node accesses block data on the storage server, and the file system on each node is a Cluster File System that interconnects servers through fencing and locking mechanisms.
On the Storage Area Network (SAN) tested in this final project, the Cluster File System exhibits a failover delay varying between 6 and 12 seconds, which can interrupt file transfers; it delivers uniform IOPS whose values depend on the hardware specification of the server nodes; and it consumes more memory and CPU resources when smaller block sizes are used.

Storage Area Network (SAN) switches and storage arrays have been in use for a long time and have grown more fault tolerant, typically through some degree of redundancy. However, issues still occur in SAN applications and can take time to resolve. The number of storage protocols and storage interfaces has increased rapidly in the networking field, helping to avoid data center bottlenecks. This paper presents a few guidelines that may help in understanding some of the design issues involved in SANs. Problems that appear intractable at the level of the SAN infrastructure and the applications running on it can often be solved once all the parameters are understood. Fibre Channel also raises some concerns that are not easily solvable and create issues. This paper discusses some common SAN problems with a view to preventing them.

The computer laboratories at UPT STMIK AMIKOM Yogyakarta are numerous and serve a large user base, so demand for data center services keeps growing. Meeting this demand requires more computing capacity, for example by procuring new servers. That decision, however, has consequences: the organization faces new problems in managing a growing server fleet, chiefly the considerable cost involved, the largest items being server purchase and maintenance. Server functions are also used to support practical coursework in the computer laboratories, so besides cost and maintenance the organization faces a further problem: low server utilization. Cloud computing is therefore a suitable solution to implement in the STMIK AMIKOM Yogyakarta computer laboratories so that data center services can be optimized in both the number of services offered and the use of server resources.

Cloud storage moves remote data to centralized large data centers, where data integrity is not guaranteed. This paper studies the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider a third-party verifier (TPA) that checks the correctness of dynamic data stored in the cloud. The verifier should audit the owner's data efficiently without requesting a local copy of that data, which reduces TPA computation cost and supports user privacy and batch auditing. We also implement authentication-forwarding mechanisms to scale server usability: many clients connect to a server, and authentication credentials are forwarded to the next level of clients by granting the first few clients permission from the original owner to act as owners for subsequent clients on a first-come, first-served basis. This provides server scalability and greater security.

Cloud computing is the dominant paradigm in distributed computing. The most popular open source cloud solutions support different types of storage subsystems because of the differing needs of the deployed services (in terms of performance, flexibility, and cost-effectiveness). In this paper, we investigate the supported standard and open source storage types and create a classification. We point out that Internet Small Computer System Interface (iSCSI) based block-level storage can currently be used for I/O intensive services. However, the ATA-over-Ethernet (AoE) protocol uses fewer layers and operates at a lower level, which makes it more lightweight and faster than iSCSI. We therefore propose an architecture for AoE-based storage support in the OpenNebula cloud. The novel storage solution was implemented, and the performance evaluation shows that the I/O throughput of the AoE-based storage is better (by 32.5-61.5%) than the prior iSCSI-based storage, and the new solution needs less CPU time (41.37%) to provide the same services.
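Figures like the 32.5-61.5% throughput gain are relative improvements over the iSCSI baseline. A minimal sketch of that calculation follows; the 80 and 106 MB/s samples are hypothetical illustrations, not measurements from the paper:

```python
def improvement_pct(baseline: float, new: float) -> float:
    """Relative improvement of `new` over `baseline`, in percent."""
    return 100.0 * (new - baseline) / baseline

# Hypothetical throughput samples (MB/s): iSCSI vs. AoE on the same backend.
iscsi, aoe = 80.0, 106.0
print(f"AoE is {improvement_pct(iscsi, aoe):.1f}% faster")  # → AoE is 32.5% faster
```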

Abstract—Most organizations are becoming increasingly reliant on IT products and services to manage their daily operations. The total cost of ownership (TCO), which includes the hardware and software purchase cost, management cost, etc., has significantly increased and ...

A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of NASA. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.

In the current social-networking world, organizations are demanding high-speed security for client data such as images, audio and video files, and even huge databases. A lack of flexibility in connecting and sharing remote devices and servers also reduces the performance of the services an organization provides. To overcome these issues, the Storage Area Network (SAN) was introduced: a dedicated storage network that carries I/O traffic only between servers and storage devices. This paper describes the real-world need for a SAN by implementing one using Openfiler; the SAN's performance is analyzed across seven cases with the Iometer tool, and its importance for clustered databases is discussed.

As businesses have grown, so has the need to deploy I/T applications rapidly to support the expanding business processes. Often, this growth was achieved in an unplanned way: each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased. In many cases this has led to what is often referred to as "server sprawl", resulting in low server utilization and high system management costs. An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization. In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance. We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical servers. The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels.

This paper presents an overview of the storage-aware routing protocol proposed for the MobilityFirst network architecture being developed as part of the NSF Future Internet Architecture (FIA) project. The storage-aware routing method described here is a generalized form of delay-tolerant networking (DTN) intended to work seamlessly across both wired and wireless networks. An adaptive storage-aware routing algorithm (ASTAR), which uses a combination of short- and long-term path quality metrics along with available buffer storage to make store-vs-forward decisions at routers, is described. Selected experimental trace-driven validation results are given for an example hybrid network incorporating wired backbones and heterogeneous radio access technologies. The proposed storage-aware routing algorithm is shown to provide significant capacity and performance gains for the mobile/wireless usage scenarios considered. It is further demonstrated that the ASTAR adaptation algorithm provides a unified mechanism for adjusting store/forward decisions in response to variations in path quality, available router storage, and traffic levels, achieving up to 20% capacity improvement over baseline non-adaptive schemes.
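The store-vs-forward decision such a router makes can be caricatured as a threshold rule over a blended path-quality score plus a buffer-space check. The weights, threshold, and field names below are hypothetical placeholders, not the published ASTAR algorithm:

```python
from dataclasses import dataclass

@dataclass
class LinkState:
    short_term_quality: float   # recent path-quality estimate, normalized 0..1
    long_term_quality: float    # historical average quality, 0..1
    free_buffer_frac: float     # fraction of router storage still free, 0..1

def store_or_forward(link: LinkState,
                     w_short: float = 0.6,
                     forward_threshold: float = 0.5) -> str:
    """Toy rule in the spirit of storage-aware routing: forward when a
    blended quality score clears a threshold; store when the path is poor
    and buffer space remains; otherwise push the packet on anyway."""
    quality = (w_short * link.short_term_quality
               + (1 - w_short) * link.long_term_quality)
    if quality >= forward_threshold:
        return "forward"
    if link.free_buffer_frac > 0.1:
        return "store"
    return "forward"  # buffer nearly full: cannot afford to hold the packet

print(store_or_forward(LinkState(0.2, 0.3, 0.8)))  # poor path, space left → store
```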

This paper introduces a method for implementing the NFS protocol for transferring larger blocks/network packets to NAS (Network Attached Storage) from highly available clients. NAS storage scalability is habitually limited by the capacity of the appliance. Adding an appliance is reasonably easy, but accessing the pooled contents as a single entity is not. In this approach, NAS supports file sharing between multiple peer clients running highly available cluster services, and NAS clients can potentially access file shares from anywhere in the organization. Because file access is typically low-volume and less sensitive to response times, predictable performance and distance are less of a concern in NAS implementations.

Preface: The BeaST is a new FreeBSD-based, dual-headed, reliable storage system concept. Recently we implemented both ZFS and an in-memory cache in our architecture, and after this latest improvement the BeaST system has become quite complex compared to its predecessors.

Abstract—Storage Area Networks (SANs) connect storage devices to servers over fast network interconnects. We consider the problem of optimal SAN configuration with the goal of retaining enough headroom to meet service level agreements (SLAs) during unexpected peaks. First, we give an algorithm for assigning storage devices to applications running on the SAN's hosts. This algorithm tries to balance the workload as evenly as possible over all storage devices. Our second algorithm takes these assignments and computes the interconnections (data paths) necessary to achieve the desired configuration while respecting the redundancy (safety) requirements in the SLAs. Again, this algorithm tries to balance the workload over all connections and devices. Thus, our network configurations respect all SLAs and provide flexibility for future changes by avoiding bottlenecks on storage devices or switches. We also discuss integrating our solution with the open source SAN management software Aperi.
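The first algorithm's goal, spreading application workload as evenly as possible over storage devices, resembles the classic longest-processing-time greedy heuristic. The sketch below is that generic heuristic, not the authors' exact algorithm; the application names and loads are invented:

```python
import heapq

def assign_devices(app_loads: dict[str, float],
                   devices: list[str]) -> dict[str, str]:
    """Greedy balancing: take applications heaviest-first and assign each
    to the currently least-loaded storage device (min-heap of totals)."""
    heap = [(0.0, d) for d in devices]
    heapq.heapify(heap)
    assignment = {}
    for app, load in sorted(app_loads.items(), key=lambda kv: -kv[1]):
        total, dev = heapq.heappop(heap)
        assignment[app] = dev
        heapq.heappush(heap, (total + load, dev))
    return assignment

# Hypothetical workloads (arbitrary I/O units) over two devices.
plan = assign_devices({"db": 10, "mail": 7, "web": 4, "log": 3}, ["d1", "d2"])
print(plan)  # heaviest app lands on d1, the rest fill in to balance totals
```

The real algorithm additionally has to respect SLA redundancy constraints, which this sketch ignores.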

Policy-based management provides the ability to dynamically re-configure DiffServ networks such that desired quality of service (QoS) goals are achieved. This includes network provisioning decisions, performing admission control, and adapting bandwidth allocation dynamically. QoS management aims to satisfy the service level agreements (SLAs) contracted by the provider and therefore QoS policies are derived from SLA specifications and the provider's business goals. This policy refinement is usually ...

Management of a switch fabric security configuration, a core component of Storage Area Networks, is complex and error prone. As a consequence, misconfiguration of and/or a poor understanding of a switch fabric may unnecessarily expose an enterprise to known threats. A formal model of a switch security configuration is presented. This model is reasoned over to help manage complex switch fabric security configurations.
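At its simplest, a formal model of fabric zoning reduces to set membership: a host may reach a device only if some zone contains both. The toy check below captures that core semantics; it is a deliberate simplification of the paper's formal model, and the zone contents are invented:

```python
def accessible(host: str, device: str, zones: list[set[str]]) -> bool:
    """A host can reach a device iff some zone contains both members --
    the basic semantics of zoning in a switch fabric."""
    return any(host in z and device in z for z in zones)

# Hypothetical fabric configuration: two zones over two hosts and two LUNs.
zones = [{"hostA", "lun1"}, {"hostA", "hostB", "lun2"}]
print(accessible("hostB", "lun1", zones))  # → False: no shared zone
```

Reasoning over such a model lets a tool flag misconfigurations, e.g. a host that can unexpectedly reach a device through an overlooked zone.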

The growing complexity of mass storage systems at major data centers is placing stress on system administrators to keep performance at optimal levels. As storage requirements grow, so do the number of routine tasks the administrator must perform and the time it takes to execute them. The solution proposed to ease this burden is the Mass Storage System Administrator Autonomic Assistant (MSSAAA). The MSSAAA is a collection of agents that perform some of the more common tasks while the administrators handle higher-level issues. Using the principles of autonomic computing, the MSSAAA is governed by a centralized set of policies that the administrator reviews on a regular basis and can adjust as necessary. The goal is to develop an autonomic assistant that substantially reduces the time it takes to address specific problems in the system. Using tools such as IBM's Generic Log Adapter, Resource Model Builder, and Autonomic Management Engine, the MSSAAA has been able to (i) quickly determine when tape errors occur and correct them, (ii) monitor network file system mounts for poor performance and report it, and (iii) correct network file system handle problems through continuous monitoring. Preliminary savings analyses show that the assistant saves the system administrator at least 185 hours per year, and over six thousand dollars in related costs. The results show how efficiently and effectively the MSSAAA handled its assigned tasks, and how it has eased the daily burden of storage system administrators.

The usage of network-based applications is increasing as network speeds increase, and streaming applications (e.g. BBC iPlayer, YouTube) running over network infrastructure are becoming commonplace. These applications access data sequentially. However, as processor speeds and the amount of available memory increase, the rate at which streaming applications access data is now faster than the rate at which blocks can be fetched consecutively from network storage. In addition to sequential access, the system also needs to satisfy demand misses promptly in order for applications to continue their execution. This thesis proposes a design to provide quality of service (QoS) for streaming applications (sequential accesses) and demand misses, such that streaming applications can run without jitter (once they are started) and demand misses can be satisfied in reasonable time using network storage. To implement the proposed design in real time, the thesis presents an analytical model to estimate the average time taken to service a demand miss. Further, it defines and explores the operational space in which the proposed QoS can be provided. Using database techniques, this region is then encapsulated in an autonomous algorithm which is verified by simulation. Finally, a prototype Experimental File System (EFS) is designed and implemented to test the algorithm on a real test-bed.
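The flavor of such an analytical model can be conveyed with a deliberately crude estimate: a demand miss waits behind whatever prefetch traffic is already queued for the streams, then pays its own fetch. This back-of-envelope formula is illustrative only and is not the thesis's model:

```python
def avg_demand_miss_time(fetch_ms: float, inflight_prefetches: float) -> float:
    """Crude estimate of demand-miss service time: on average the miss
    arrives halfway through the block currently being fetched, waits for
    the prefetches queued ahead of it, then pays its own fetch."""
    return (0.5 + inflight_prefetches) * fetch_ms + fetch_ms

# e.g. 10 ms per block fetch, 2 prefetches queued → 35 ms expected service time
print(avg_demand_miss_time(10.0, 2))
```

The real model would also account for variance in fetch times, which is what makes defining the feasible QoS region non-trivial.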

In this paper, we present a novel real-time image restoration approach using a truncated constrained least-squares (TCLS) filter and a spatially adaptive noise smoothing (SANS) algorithm based on an alpha map for the extended depth of field (EDoF) system in an image signal processing (ISP) chain. The proposed TCLS filter and the alpha-map-based SANS algorithm can be implemented in the Bayer domain using a general finite impulse response (FIR) structure. The TCLS filter coefficients are determined a priori according to the given point spread function (PSF) by optical lens simulation, and deconvolution is performed in the Bayer domain instead of the RGB domain. The SANS algorithm can successfully remove noise in flat regions without affecting the sharply restored details. Based on an extended set of experimental results, the proposed algorithm is shown to restore an accurately focused image in real time and is suitable for commercial low-cost, high-quality imaging devices such as digital cameras and camcorders.
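The role of the alpha map, selecting the sharply restored output in detail regions and the smoothed output in flat regions, amounts to a per-pixel convex blend. The sketch below shows only that blending step (with made-up arrays), not the TCLS or SANS filters themselves:

```python
import numpy as np

def alpha_blend(restored: np.ndarray, smoothed: np.ndarray,
                alpha: np.ndarray) -> np.ndarray:
    """Per-pixel mix: alpha → 1 keeps the sharply restored image (edges,
    texture); alpha → 0 keeps the noise-smoothed image (flat regions)."""
    return alpha * restored + (1.0 - alpha) * smoothed

# Toy 2x2 example: a mostly-flat patch (alpha 0.25) leans on the smoothed image.
restored = np.ones((2, 2))
smoothed = np.zeros((2, 2))
out = alpha_blend(restored, smoothed, np.full((2, 2), 0.25))
print(out)  # every pixel → 0.25
```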

Today, access control security for storage area networks (zoning and masking) is implemented by mechanisms that are inherently insecure, and are tied to the physical network components. However, what we want to secure is at a higher logical level independent of the transport network; raising security to a logical level simplifies management, provides a more natural fit to a virtualized infrastructure, and enables a finer grained access control. In this paper, we describe the problems with existing access control security solutions, and present our approach which leverages the OSD (Object-based Storage Device) security model to provide a logical, cryptographically secured, in-band access control for today’s existing devices. We then show how this model can easily be integrated into existing systems and demonstrate that this in-band security mechanism has negligible performance impact while simplifying management, providing a clean match to compute virtualization and enabling fine gra...
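The OSD security model the authors build on issues cryptographically signed capabilities that a storage device can verify in-band without consulting the security manager on every request. A minimal sketch of that pattern using an HMAC follows; the key, message layout, and function names are all invented for illustration and are not the OSD wire format:

```python
import hmac
import hashlib

SECRET = b"shared-device-key"   # illustrative shared secret, not from the paper

def make_capability(client: str, object_id: str, rights: str) -> bytes:
    """The manager signs (client, object, rights); the device can later
    recompute and check the same MAC locally -- the gist of the scheme."""
    msg = f"{client}|{object_id}|{rights}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def device_allows(client: str, object_id: str, rights: str,
                  tag: bytes) -> bool:
    """In-band check at the storage device: no call back to the manager."""
    expected = make_capability(client, object_id, rights)
    return hmac.compare_digest(expected, tag)

cap = make_capability("hostA", "obj-42", "read")
print(device_allows("hostA", "obj-42", "read", cap))   # valid capability
print(device_allows("hostA", "obj-42", "write", cap))  # rights mismatch
```

Because verification is a single MAC computation, the mechanism adds negligible per-request overhead, consistent with the paper's performance claim.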

and the functionality and data that reside in the storage subsystem. Recent technological trends, such as shared SAN or NAS storage and virtualization, have the potential to break this tight association between functionality and machines. We describe the design and implementation of ENCOMPASS -- an image management system centered around a shared storage repository of "master system images", each representing different functionality. The functionality is provisioned by "cloning" master images, associating the resulting "clone images" with specified physical and/or virtual resources ("machines"), customizing the clone images for the specific environment and circumstances, and automatically performing the necessary operations to activate the clones. "Machines" -- physical or virtual -- are merely computational resources that do not have any permanent association with functionality. ENCOMPASS supports the complete lifecycle of a system image, including reallocation and re-targeting of resources, maintenance, updates, etc. It separates image creation from image management from resource allocation policies -- an emerging trend that is manifested in particular by the proliferation of turn-key "virtual appliances".

The TCP connection endpoint migration allows arbitrary server-side connection endpoint assignments to server nodes in cluster-based servers. The mechanism is client-transparent and supports back-end level request dispatching. It has been implemented in the Linux kernel and can be used as part of a policy-based software architecture for request distribution. We show that the TCP connection endpoint migration can be successfully used for request distribution in cluster-based Web servers, both for persistent and non-persistent HTTP connections. We present locality-aware policies using TCP connection migration that outperform Round Robin by factors as high as 2.79 in terms of the average response time for certain classes of requests.
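
A locality-aware policy of the kind measured above routes each request to the back end most likely to have the document cached, falling back to round robin for cold documents. The model below is our own user-space caricature of that policy, assuming migration itself is free; it is not the kernel mechanism and all names are hypothetical.

```python
import itertools

class Dispatcher:
    """Front end that picks a back-end node per request URL."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.served = {n: set() for n in nodes}  # URLs each node has cached
        self.rr = itertools.cycle(nodes)

    def dispatch(self, url):
        for n in self.nodes:          # locality first: reuse a warm cache
            if url in self.served[n]:
                return n
        n = next(self.rr)             # cold URL: fall back to round robin
        self.served[n].add(url)
        return n

d = Dispatcher(["backend1", "backend2"])
first = d.dispatch("/index.html")
assert d.dispatch("/index.html") == first  # repeats hit the same back end
```

Round Robin would alternate nodes and miss in both caches half the time; steering repeats to one node is what produces the response-time gains the abstract reports.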

Security has been considered the key concern in adopting cloud computing. This can be ascribed to customers' lack of control over their confidential data once in the cloud and to the absence of trust in cloud providers. Unfortunately, research efforts in the area of cloud security have not yet succeeded in giving cloud customers the required "peace of mind". Therefore, in an attempt to identify the main loopholes in cloud security, this paper questions the fundamental principles of security approaches in the cloud. The paper highlights two main drawbacks: the first is due to a potential incompatibility among security measures, while the second is due to the focus on securing the infrastructure rather than the data itself. The paper therefore advocates the need for integrating information-centric security approaches, and presents a novel security framework that adopts both infrastructure- and information-centric security services.

Optical DWDM technology has delivered immense scalability across long-haul and metro core networks. However, since many client services still operate at slower speeds, there remains a large "granularity gap" between these services and multi-gigabit wavelengths. Hence, various sub-rate grooming technologies have been developed to efficiently multiplex smaller demands onto wavelengths and improve utilization. The most notable such offering is "next-generation" SONET/SDH (NGS), which is now gaining significant market traction. A key feature of NGS is inverse multiplexing, which can resolve demands into smaller "subconnections." However, most studies on sub-rate grooming have not fully leveraged this vital addition. This paper presents a detailed initial look at inverse multiplexing in SONET/SDH-DWDM networks. A novel grooming algorithm is presented and its performance is studied in detail. Overall, results indicate notable improvements from the inverse multiplexing capability.
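
To see why inverse multiplexing helps grooming, consider a demand larger than the spare capacity of any single wavelength: resolving it into subconnections lets it ride several wavelengths at once. The greedy split below is our own illustration, not the paper's algorithm, and the capacity units are arbitrary.

```python
def inverse_multiplex(demand, spare):
    """Split `demand` into subconnections over wavelengths with the
    given spare capacities; return None if total capacity is short."""
    allocation = {}
    for wavelength, capacity in spare.items():
        if demand == 0:
            break
        take = min(capacity, demand)   # greedily fill this wavelength
        if take:
            allocation[wavelength] = take
            demand -= take
    return allocation if demand == 0 else None

# A 7-unit demand fits nowhere whole, but fits as three subconnections.
print(inverse_multiplex(7, {"w1": 3, "w2": 2, "w3": 4}))
# {'w1': 3, 'w2': 2, 'w3': 2}
```

Without inverse multiplexing the same 7-unit demand would be blocked, since no single wavelength has 7 spare units; this is the utilization gain the abstract points to.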

Cluster federations are attractive for executing applications like large scale code coupling. However, faults may appear frequently in such architectures. Thus, checkpointing long-running applications is desirable to avoid restarting them from the beginning in the event of a node failure. To take into account the constraints of a cluster federation architecture, a hybrid checkpointing protocol is proposed. It uses global coordinated checkpointing inside clusters but only quasi-synchronous checkpointing techniques between clusters. The proposed protocol has been evaluated by simulation and fits well for applications that can be divided into modules with many communications within modules but few between them.
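
The hybrid split can be sketched in a few lines: nodes inside a cluster checkpoint together under a coordinator, while inter-cluster messages merely piggyback the sender's checkpoint epoch so a lagging cluster can catch up without global blocking. This is our own minimal model of that idea under simplifying assumptions (no in-flight messages, no rollback), not the paper's protocol.

```python
class Node:
    def __init__(self):
        self.state, self.ckpt = 0, None

class Cluster:
    def __init__(self, size):
        self.nodes = [Node() for _ in range(size)]
        self.epoch = 0

    def coordinated_checkpoint(self):
        for n in self.nodes:            # all nodes checkpoint together
            n.ckpt = n.state
        self.epoch += 1

    def send(self, payload):
        return (self.epoch, payload)    # piggyback epoch between clusters

    def receive(self, msg):
        epoch, payload = msg
        if epoch > self.epoch:          # sender is ahead: checkpoint first,
            self.coordinated_checkpoint()  # quasi-synchronous catch-up
        return payload

a, b = Cluster(2), Cluster(2)
a.coordinated_checkpoint()              # cluster a moves to epoch 1
b.receive(a.send("data"))               # cluster b catches up lazily
assert b.epoch == 1
```

Coordination cost is paid only on fast intra-cluster links, while the slow inter-cluster links carry just an integer epoch, matching the communication profile (dense inside modules, sparse between them) the abstract targets.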

Dynamic network reconfiguration is defined as the process of changing from one routing function to another while the network remains up and running. The main challenge is in avoiding deadlock anomalies while keeping restrictions on packet injection and forwarding minimal. Current approaches either require virtual channels in the network or they work only for a limited set of routing algorithms and/or fault patterns. In this paper, we present a methodology for devising deadlock free and dynamic transitions between old and new routing functions that is consistent with newly proposed theory [1]. The methodology is independent of topology, can be applied to any deadlock-free routing function, and puts no restrictions on the routing function changes that can be supported. Furthermore, it does not require any virtual channels to guarantee deadlock freedom. This research is motivated by current trends toward using increasingly larger Internet and transaction processing servers based on clusters of PCs that have very high availability and dependability requirements, as well as other local, system, and storage area network-based computing systems.

Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or arbitrary code fragments [30, 29, 31]. Therefore, programmers require support in finding relevant functions and determining how those functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of

We present a novel algorithm, called IPASS, for root cause analysis of performance problems in Storage Area Networks (SANs). The algorithm uses configuration information available in a typical SAN to construct I/O paths that connect consumers and providers of the storage resources. When a performance problem is reported for a storage consumer in the SAN, IPASS uses the configuration information in an on-line manner to construct an I/O path for this consumer. As the path construction advances, IPASS performs an informed search for the root cause of the problem. The underlying rationale is that if the performance problem registered at the storage consumer is indeed related to the SAN itself, the root causes of the problem are more likely to be found on the relevant I/O paths within the SAN. We evaluate the performance of IPASS analytically and empirically, comparing it to known informed and uninformed search algorithms. Our simulations suggest that IPASS scales 7 to 10 times better than the reference algorithms. Although our primary target domain is SANs, IPASS is a generic algorithm. Therefore, we believe that IPASS can be efficiently used as a building block for performance management solutions in other contexts as well.
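
The rationale above amounts to: expand the consumer's I/O path through the configuration graph and examine components on that path before anything else in the SAN. The walk below is our own simplified rendering of that idea (the topology, health checks, and function name are invented for illustration; it is not the IPASS algorithm itself).

```python
# Hypothetical SAN configuration: host -> HBA -> switch -> controller -> disk.
topology = {
    "host1": ["hba1"], "hba1": ["switch1"],
    "switch1": ["ctrl1"], "ctrl1": ["disk1"], "disk1": [],
}
# Stand-in for on-line performance checks of each component.
health = {"host1": True, "hba1": True,
          "switch1": False, "ctrl1": True, "disk1": True}

def find_root_cause(consumer):
    """Walk the consumer's I/O path outward, testing components as the
    path is constructed, and return the first unhealthy one."""
    frontier, visited = [consumer], set()
    while frontier:
        component = frontier.pop(0)
        if component in visited:
            continue
        visited.add(component)
        if not health[component]:     # informed check along the path
            return component
        frontier.extend(topology[component])
    return None

print(find_root_cause("host1"))  # switch1
```

The scaling advantage comes from never visiting components off the consumer's path: a misbehaving device elsewhere in a large fabric is simply not examined.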

Dependability evaluation is useful and important in designing computer systems, while at the same time it is also difficult. The difficulty becomes significant when the model combines hardware, software and their interactions, due to the difference in failure behaviour between hardware and software. An approach is proposed to simplify the integrated evaluation of dependability, using a structured, top-down technique based on decomposing the system function into several levels. The sub-functions are then modelled using stochastic activity networks in terms of hardware, software and interaction; this in turn decomposes the system into components. The complete model is composed of the subnets of the sub-functions.

Future missions to the moon, Mars, or other planetary surfaces will use planetary rovers for exploration or other tasks. Operation of these rovers as unmanned robotic vehicles with some form of remote or semi-autonomous control is desirable to reduce the cost and increase the capability and safety of many types of missions. However, the long time delays and relatively low bandwidths associated with radio communications between planets preclude a total “telepresence” approach to controlling the vehicle. A program to develop planetary rover technology has been initiated at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration (NASA). Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed and initial testing has been completed. These testbed systems, the associated navigation techniques currently used and planned for implementation, and long-term mission strategies employing them are described.

A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of NASA. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to the exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.

Capturing attacker behavior in a security test plan allows the systematic, repeated assessment of a system's defenses against attacks. To address the lack of security experts capable of developing effective black box security test plans, we have empirically developed an initial set of six black box security test patterns. These patterns capture the expertise involved in creating a black box security test plan in the same way that software design patterns capture design expertise. Security test patterns can enable software testers lacking security expertise (in this paper, "novices") to develop a test plan the way experts could. The goal of this paper is to evaluate the ability of novices to effectively generate black box security tests by accessing security expertise contained within security test patterns. We conducted a user study of 47 student novices, who used our six initial patterns to develop black box security test plans for six requirements from a publicly available specification for electronic health records systems. We created an oracle for the security test plan by forming a panel of researchers who manually completed the same task as the novices. We found that novices will generate a similar black box test plan to the oracle when aided by the six black box security test patterns.

This paper focuses on a switching architecture designed for Storage Area Network (SAN) applications, with a crossbar switching fabric and an aggregate bandwidth of hundreds of Gbps. We describe the architecture and adopt an abstract model of the flow-controlled, credit-based packet transfer around the switching fabric. The major effects of the credit-based flow control on performance are investigated under different system parameters. 0-7803-8924-7/05/$20.00 (C) 2005 IEEE
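
In credit-based flow control, a sender holds one credit per receive-buffer slot at the far end; it may transmit only while it has credits, and each credit returns when the receiver drains a packet, so packets are stalled rather than dropped. The toy model below captures just that invariant; it is our own sketch, not the paper's abstract model, and the class name is invented.

```python
class CreditLink:
    """One hop of a credit-based, lossless link."""
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # one credit per receive buffer slot
        self.queue = []               # packets sitting in the receive buffer

    def send(self, pkt):
        if self.credits == 0:
            return False              # back-pressure: sender must wait
        self.credits -= 1
        self.queue.append(pkt)
        return True

    def drain(self):
        pkt = self.queue.pop(0)
        self.credits += 1             # credit flows back to the sender
        return pkt

link = CreditLink(buffer_slots=2)
assert link.send("p1") and link.send("p2")
assert not link.send("p3")            # blocked, never dropped: lossless
link.drain()
assert link.send("p3")                # freed slot re-enables the sender
```

The performance effects the paper studies follow directly from this loop: with too few buffer slots (credits) for the link's round-trip time, the sender idles waiting for credit returns even when the fabric has capacity.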

Efficient support of multicast traffic in Storage Area Networks (SANs) enables applications such as remote data replication and distributed multimedia systems, in which a server must concurrently access multiple storage devices or, conversely, multiple servers must access data on a single device. In this paper we extend an innovative switching architecture, proposed in a previous paper, to support multicast traffic. We describe the most important aspects, focusing in particular on the mechanisms that allow lossless behavior to be achieved. We then use simulation to analyze system performance and the impact of such mechanisms under various traffic patterns. Although the work is inspired by a specific switch architecture, the results have a more general flavor and highlight interesting trends in flow-controlled architectures.

This paper presents a solution for optimal business continuity, with a storage architecture for enterprise applications that ensures negligible data loss and quick recovery. The solution makes use of IP SANs, which handle data management without burdening the application server, as well as replication techniques to replicate data to a remote disaster recovery site. We present a design using the open source database Postgres to demonstrate optimal business continuity, along with a theoretical treatment of the same.

This paper presents a novel data mirroring method for storage area networks (SANs) in a metropolitan-area WDM (wavelength division multiplexing) ring network scenario. We describe our network architecture, the protocol, the network traffic and our SAN mirroring method. The WDM ring network with SAN mirroring is then analyzed using simulation results for average node throughput, queuing delay and packet dropping probability.

Storage Area Networks (SANs) connect groups of storage devices to servers over fast interconnects using protocols like Fibre Channel or iSCSI, so that storage resources can be assigned to servers in a flexible and scalable way. An important challenge is controlling the complexity of the SAN configuration resulting from the high scalability of the network and the diversity and interconnectivity of the devices. Policy-based validation has been proposed earlier as a solution to this configuration problem. We propose a light-weight, SQL-based solution that uses existing well-known technologies to implement such a validation system. Our approach is based on a relational database which stores configuration data extracted from the system via a WBEM standard interface. In contrast to other approaches, we use SQL to define our policy rules as executable checks on these configuration data. Each rule is embedded in a test case, defined by an XML schema, which combines each check with an ...
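
The mechanism described, configuration rows in a relational database plus policy rules as executable SQL checks, can be sketched with an in-memory database. The table layout and the example rule below are our own assumptions for illustration; the paper's actual schema is populated via WBEM.

```python
import sqlite3

# Stand-in for the configuration database filled from a WBEM interface.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE port (switch TEXT, port INTEGER, speed_gbps INTEGER)")
db.executemany("INSERT INTO port VALUES (?, ?, ?)",
               [("sw1", 1, 4), ("sw1", 2, 4), ("sw2", 1, 2)])

# Hypothetical policy rule, expressed as SQL: every port must run at 4 Gbps.
# Rows returned by the check are the policy violations.
violations = db.execute(
    "SELECT switch, port FROM port WHERE speed_gbps <> 4").fetchall()
print(violations)  # [('sw2', 1)]
```

Expressing each rule as a query that returns its violations keeps the validator light-weight: adding a policy means adding a SQL statement, with no new checking code.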