Storage Area Network Research Papers
iSCSI is a new IETF standard protocol that makes it possible for a SCSI initiator (client) on one machine to exchange SCSI commands and data with a SCSI target (server) on another machine connected via a TCP/IP network. When an initiator establishes a connection to a target, a large number of standard parameters can be negotiated in order to customize many aspects of that connection. Both the initiator and target implementations can configure other parameters outside the iSCSI standard that also customize their interactions with the SCSI environment. This paper reports the major results from an extensive study of the effect of these parameters on iSCSI performance. It also develops a linear relationship between a number of them. We suggest settings that offer the best performance in the situations tested, with the belief that these settings offer the best available general guidance for configuring iSCSI until results from testing in more complex situations become ...
- Storage Area Network
We propose the creation of a wireless storage area network (SAN) and analyze its benefits. The proposed wireless SAN (WSAN) consists of a SAN switch that is connected to multiple wireless access points (APs) that communicate with the storage devices. This network would save space and reduce overall costs by not requiring wired connections. Wireless SANs would also provide more freedom in the placement of storage devices. However, because the number of wireless access points is less than the number of storage devices, it is possible for user data requests to be blocked if all access points are busy. An important design goal is therefore to minimize the probability that a network access request will be blocked.
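The design goal stated above, minimizing the probability that a request finds every access point busy, is classically modeled by the Erlang B formula (this assumes Poisson arrivals and no queueing of blocked requests; the paper's own analysis may use a different model). A minimal sketch of how more APs reduce blocking:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for `servers` access points carrying
    `offered_load` Erlangs, via the numerically stable Erlang B recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

# Adding access points sharply reduces the chance a request is blocked.
for aps in (2, 4, 8):
    print(f"{aps} APs at 2.0 Erlangs -> blocking {erlang_b(aps, 2.0):.4f}")
```

The recursion avoids the factorials of the closed-form expression, so it stays accurate even for large numbers of access points.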
This paper discusses the concept of a virtual digital forensic laboratory, which incorporates networked examination and storage machines, secure communications, multi-factor authentication, role-based access control, and case management and digital asset management systems. Laboratory activities such as the examination, storage and presentation of digital evidence can be geographically distributed and accessed over a network by users with the appropriate credentials. The advantages of such a facility include reduced costs through shared resources and the availability of advanced expertise for specialized cases.
Storage-area networks are a popular and efficient way of building large storage systems, both in an enterprise environment and for multi-domain storage service providers. In both environments the network and the storage have to be configured to ensure that the data is maintained securely and can be delivered efficiently. In this paper, we describe a model of mandatory security for SAN services that incorporates the notion of risk as a measure of the robustness of the SAN's configuration and that formally defines a vulnerability common in systems with mandatory security, i.e., cascaded threats. Our abstract SAN model is flexible enough to reflect the data requirements, tractable for the administrator, and can be implemented as part of an automatic configuration system. The implementation is given as part of a prototype written in OPL.
One of the main problems in the high performance computing area is to find the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior before the real implementation of such applications seems to be an interesting alternative and can help to identify better directions for the implementation strategies. In this work, the Stochastic Automata Network (SAN) formalism is adopted to model and evaluate the performance of parallel applications. The methodology used is based on the construction of generic SAN models to describe classical parallel programming patterns, like Master/Slave, Pipeline and Divide and Conquer. Those models are adapted to represent cases of a real application through the definition of input parameter values. Finally, we present a comparison between the results of the SAN models and a real application, aiming at verifying the accuracy of the adopted technique.
Performance, reliability and scalability in data access are key issues in the context of HEP data processing and analysis applications. In this paper we present the results of a large scale performance measurement performed at the INFN-CNAF Tier-1, employing some storage solutions presently available for HEP computing, namely CASTOR, GPFS, Scalla/Xrootd and dCache. The storage infrastructure was based on Fibre Channel systems organized in a Storage Area Network, providing 260 TB of total disk space, and 24 disk servers connected to the computing farm (280 worker nodes) via Gigabit LAN. We also describe the deployment of a StoRM SRM instance at CNAF, configured to manage a GPFS file system, presenting and discussing its performance. Corresponding author: Luca.dellAgnello@cnaf.infn.it.
- by Antonia Ghiselli and 5 others
- Engineering, Physical sciences, Fibre Channel, Data Access
Storage Area Networks (SANs) connect storage devices to servers over fast network interconnects. We consider the problem of optimal SAN configuration with the goal of retaining a safety margin for meeting service level agreements (SLAs) during unexpected load peaks. First, we give an algorithm for assigning storage devices to applications running on the SAN's hosts. This algorithm tries to balance the workload as evenly as possible over all storage devices. Our second algorithm takes these assignments and computes the interconnections (data paths) that are necessary to achieve the desired configuration while respecting redundancy (safety) requirements in the SLAs. Again, this algorithm tries to balance the workload of all connections and devices. Thus, our network configurations respect all SLAs and provide flexibility for future changes by avoiding bottlenecks on storage devices or switches. We also discuss integrating our solution with the open source SAN management software Aperi.
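The paper's first algorithm is not given in the abstract; as a rough illustration of workload-balanced assignment, a greedy longest-processing-time style heuristic (function name and inputs here are hypothetical, not the authors' method) might look like:

```python
import heapq

def balance_assign(app_loads: dict[str, float], devices: list[str]) -> dict[str, str]:
    """Place each application (heaviest first) on the currently least-loaded
    storage device: a classic LPT-style load-balancing heuristic."""
    heap = [(0.0, d) for d in devices]          # (accumulated load, device)
    heapq.heapify(heap)
    assignment = {}
    for app, load in sorted(app_loads.items(), key=lambda kv: -kv[1]):
        dev_load, dev = heapq.heappop(heap)     # least-loaded device so far
        assignment[app] = dev
        heapq.heappush(heap, (dev_load + load, dev))
    return assignment

demo = balance_assign({"db": 8.0, "mail": 5.0, "web": 4.0, "log": 3.0},
                      ["d1", "d2"])
print(demo)
```

Sorting heaviest-first keeps the final device loads close together; the real algorithm additionally has to respect the SLA redundancy constraints the abstract mentions.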
Policy-based management provides the ability to dynamically re-configure DiffServ networks such that desired quality of service (QoS) goals are achieved. This includes network provisioning decisions, performing admission control, and adapting bandwidth allocation dynamically. QoS management aims to satisfy the service level agreements (SLAs) contracted by the provider and therefore QoS policies are derived from SLA specifications and the provider's business goals. This policy refinement is usually ...
The novel framework presented in this paper provides a comprehensive methodology and statistical approach for benchmarking two Storage Area Network (SAN) products. This framework has the ability to assess the complexity of the SAN products' benchmarking characteristics. This approach enables us to explore the complex interactions and relationships between the SAN products in the same environment. There are many evaluation techniques available for testing, but their results and evaluations may lack maturity and statistical soundness. This novel unified framework provides a cost-effective way to quantify the benchmarking and interaction level of two SAN products using a single Design of Experiments (DOE) analysis. This approach provides customers with a comparative analysis of SAN products with a high degree of accuracy and good predictability in product improvement, product marketing and product selection.
Management of a switch fabric security configuration, a core component of Storage Area Networks, is complex and error prone. As a consequence, misconfiguration of and/or a poor understanding of a switch fabric may unnecessarily expose an enterprise to known threats. A formal model of a switch security configuration is presented. This model is reasoned over to help manage complex switch fabric security configurations.
Today, access control security for storage area networks (zoning and masking) is implemented by mechanisms that are inherently insecure, and are tied to the physical network components. However, what we want to secure is at a higher logical level independent of the transport network; raising security to a logical level simplifies management, provides a more natural fit to a virtualized infrastructure, and enables finer-grained access control. In this paper, we describe the problems with existing access control security solutions, and present our approach which leverages the OSD (Object-based Storage Device) security model to provide a logical, cryptographically secured, in-band access control for today's existing devices. We then show how this model can easily be integrated into existing systems and demonstrate that this in-band security mechanism has negligible performance impact while simplifying management, providing a clean match to compute virtualization and enabling fine-grained access control.
In the prospect of employing 10 Gigabit Ethernet as networking technology for online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements on the performance of 10 Gigabit Ethernet, using the network interface cards mounted on the PCI-Express bus of commodity PCs both as transmitters and receivers. In real operating conditions, the achievable maximum transfer rate through a network link is not only limited by the capacity of the link itself, but also by that of the memory and peripheral buses and by the ability of the CPUs and of the Operating System to handle packet processing and interrupts raised by the network interface cards in due time. Besides the TCP and UDP maximum data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of the packet size, either using standard or "jumbo" Ethernet frames. In addition, we also performed the same measurements by simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, hence emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
One of the main problems in the high performance computing area is the difficulty of defining the best strategy to parallelize an application. In this context, the use of analytical methods to evaluate the performance behavior of such applications seems to be an interesting alternative and can help to identify the best implementation strategies. In this work, the Stochastic Automata Network formalism is adopted to model and evaluate the performance of parallel applications developed specially for cluster-of-workstations platforms. The methodology used is based on the construction of generic models to describe classical parallel implementation schemes, like Master/Slave, Parallel Phases, Pipeline and Divide and Conquer. Those models are adapted to represent cases of real applications through the definition of input parameter values. Finally, aiming to verify the accuracy of the adopted technique, some comparisons with real application implementation results are presented.
Storage Area Network (SAN) switches and storage arrays have been in use for a long time, and they have become more fault tolerant, typically through some degree of redundancy. However, issues can still occur in Storage Area Network applications and take time to resolve. The number of storage protocols and storage interfaces has increased rapidly in the networking field, which helps avoid data-center bottlenecks. This paper focuses on a few guidelines that may help in understanding some of the design issues involved in SANs. Problems that appear intractable in a SAN infrastructure, or in applications running on a SAN, can be solved once all the parameters are understood. Fibre Channel also raises some concerns that are not easily solvable and can create issues. This paper discusses some of the common problems related to SANs and how to prevent them.
Dynamic network reconfiguration is defined as the process of changing from one routing function to another while the network remains up and running. The main challenge is in avoiding deadlock anomalies while keeping restrictions on packet injection and forwarding minimal. Current approaches either require virtual channels in the network or they work only for a limited set of routing algorithms and/or fault patterns. In this paper, we present a methodology for devising deadlock free and dynamic transitions between old and new routing functions that is consistent with newly proposed theory [1]. The methodology is independent of topology, can be applied to any deadlock-free routing function, and puts no restrictions on the routing function changes that can be supported. Furthermore, it does not require any virtual channels to guarantee deadlock freedom. This research is motivated by current trends toward using increasingly larger Internet and transaction processing servers based on clusters of PCs that have very high availability and dependability requirements, as well as other local, system, and storage area network-based computing systems.
Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or arbitrary code fragments [30, 29, 31]. Therefore, programmers require support in finding relevant functions and determining how those functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of
Storage area networks, remote backup storage systems, and similar information systems frequently modify stored data with updates from new versions. In these systems, it is desirable for the data to not only be compressed but to also be easily modified during updates. A malleable coding scheme considers both compression efficiency and ease of alteration, promoting some form of reuse or recycling of codewords.
Keywords: utility computing, resource assignment, storage area networks, mixed integer programming
We present a novel algorithm, called IPASS, for root cause analysis of performance problems in Storage Area Networks (SANs). The algorithm uses configuration information available in a typical SAN to construct I/O paths that connect consumers and providers of the storage resources. When a performance problem is reported for a storage consumer in the SAN, IPASS uses the configuration information in an on-line manner to construct an I/O path for this consumer. As the path construction advances, IPASS performs an informed search for the root cause of the problem. The underlying rationale is that if the performance problem registered at the storage consumer is indeed related to the SAN itself, the root causes of the problem are more likely to be found on the relevant I/O paths within the SAN. We evaluate the performance of IPASS analytically and empirically, comparing it to known informed and uninformed search algorithms. Our simulations suggest that IPASS scales 7 to 10 times better than the reference algorithms. Although our primary target domain is SANs, IPASS is a generic algorithm. Therefore, we believe that IPASS can be efficiently used as a building block for performance management solutions in other contexts as well.
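IPASS itself is not reproduced in the abstract; the general idea of an informed (best-first) search along constructed I/O paths can be sketched as follows, with a hypothetical per-component suspicion score guiding expansion (names and the scoring scheme are illustrative, not the paper's):

```python
import heapq

def find_root_cause(graph, suspicion, is_faulty, start):
    """Best-first walk of the SAN component graph from a storage consumer,
    always expanding the most suspicious reachable component first, until
    a component that tests as faulty is found (or the graph is exhausted)."""
    frontier = [(-suspicion.get(start, 0.0), start)]   # max-heap via negation
    seen = {start}
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_faulty(node):
            return node
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-suspicion.get(nxt, 0.0), nxt))
    return None

# Toy SAN: host -> HBA -> one of two switches -> array.
san = {"host": ["hba"], "hba": ["sw1", "sw2"], "sw1": ["array"], "sw2": ["array"]}
suspicion = {"hba": 0.2, "sw1": 0.9, "sw2": 0.1, "array": 0.3}
print(find_root_cause(san, suspicion, lambda n: n == "sw1", "host"))  # sw1
```

Restricting the search to components on the consumer's I/O path, as IPASS does, is what keeps the number of probes small compared to an uninformed sweep of the whole fabric.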
Many people might not be aware of the technical background of storing digital data, but few can be unaware of the way data storage requirements are exploding. From the early '70s, with the introduction of the first standalone minicomputers, through the enterprise-communication-software boom during the '80s and the data analysis frenzy in the '90s, to our current dependency on email and the Internet, we're building mountains of information on both a global and an organizational basis.
•The objective of this project was to design a new storage solution for a fire brigade station
•This was designed to use a hybrid approach of both NAS and Cloud
•Described how to identify and prevent possible security threats, be they internal or external
•Provided a detailed account of backups, including RAID, Disaster Recovery and Archiving
This paper examines how VI-based interconnects can be used to improve I/O path performance between a database server and the storage subsystem. We design and implement a software layer, DSA, that is layered between the application and VI. DSA takes advantage of specific VI features and deals with many of its shortcomings. We provide and evaluate one kernel-level and two user-level implementations of DSA. These implementations trade transparency and generality for performance at different degrees, and unlike research prototypes are designed to be suitable for real-world deployment. We present detailed measurements using a commercial database management system with both micro-benchmarks and industrial database workloads on a mid-size, 4 CPU, and a large, 32 CPU, database server.
The LHCb experiment uses a single, high performance storage system to serve all kinds of storage needs: home directories, shared areas, raw data storage and buffer storage for event reconstruction. All these applications are concurrent and require careful optimisation. In particular, for accessing the raw data in read and write mode, a custom lightweight non-POSIX-compliant file system has been developed. File serving is achieved by running several redundant file servers in an active-active configuration with high availability capabilities and good performance. In this paper we describe the design and current architecture of this storage system. We discuss implementation issues and problems we had to overcome during the hitherto 18-month run-in period. Based on our experience we also discuss the relative advantages and disadvantages of such a system over a system composed of several smaller storage systems. We also present performance measurements.
This paper focuses on a switching architecture designed for Storage Area Network (SAN) applications, with a crossbar switching fabric and an aggregate bandwidth of hundreds of Gbps. We describe the architecture and adopt an abstract model of the flow-controlled, credit-based, packet transfer around the switching fabric. The major effects on performance of the credit-based flow control are investigated under different system parameters. (0-7803-8924-7/05 © 2005 IEEE)
Efficient support of multicast traffic in Storage Area Networks (SANs) enables applications such as remote data replication and distributed multimedia systems, in which a server must access multiple storage devices concurrently or, conversely, multiple servers must access data on a single device. In this paper we extend an innovative switching architecture, proposed in a previous paper, to support multicast traffic. We describe the most important aspects, focusing in particular on the mechanisms that make it possible to achieve lossless behavior. We then use simulation to analyze system performance and the impact of such mechanisms under various traffic patterns. Although the work is inspired by a specific switch architecture, the results have a more general flavor and highlight interesting trends in flow-controlled architectures.
InfiniBand Architecture is a newly established general-purpose interconnect standard applicable to local area, system area and storage area networking and I/O. Networks based on this standard should be capable of tolerating topological changes due to resource failures, link/switch ...
Storage Area Networks (SANs) provide the scalability required by the IT servers. InfiniBand (IBA) interconnect is very likely to become the de facto standard for SANs as well as for NOWs. The routing algorithm is a key design issue in irregular networks. Moreover, as several virtual lanes can be used and different network issues can be considered, the performance of the routing algorithms may be affected. In this paper we evaluate three existing routing algorithms (up*/down*, DFS, and smart-routing) suitable for being applied to IBA. Evaluation has been performed by simulation under different synthetic traffic patterns and I/O traces. Simulation results show that the smart-routing algorithm achieves the highest performance.
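Of the three routing algorithms evaluated, up*/down* has a particularly simple legality rule: a path may climb toward the root switch any number of times, but once it takes a "down" hop it must keep going down, which is what guarantees deadlock freedom. A small checker illustrating just that rule (node names and levels are made up; level 0 is the root):

```python
def valid_updown_path(levels, path):
    """Up*/down* legality check: no 'up' hop (toward a lower level number)
    is allowed after any 'down' hop has been taken."""
    gone_down = False
    for a, b in zip(path, path[1:]):
        going_up = levels[b] < levels[a]
        if going_up and gone_down:
            return False          # illegal down-then-up turn
        if not going_up:
            gone_down = True
    return True

levels = {"root": 0, "sw1": 1, "sw2": 1, "h1": 2, "h2": 2}
print(valid_updown_path(levels, ["h1", "sw1", "root", "sw2", "h2"]))  # True
print(valid_updown_path(levels, ["h1", "sw1", "h2", "sw2"]))          # False
```

The restriction is also why up*/down* tends to concentrate traffic near the root, which is exactly the weakness that DFS-based and smart-routing schemes try to mitigate.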
In production grids, high performance disk storage solutions using parallel file systems are becoming increasingly important to provide the reliability and high speed I/O operations needed by HEP analysis farms. Today, Storage Area Network solutions are commonly deployed at LHC centers, and parallel file systems such as GPFS and Lustre allow for reliable, high-speed native POSIX I/O operations in a parallel fashion.
- by Antonia Ghiselli and 1 other
- High performance, High Speed, Storage Area Network, Test Bed
Modal transition systems (MTS) is a formalism which extends the classical notion of labelled transition systems by introducing transitions of two types: must transitions that have to be present in any implementation of the MTS and may transitions that are allowed but not required.
Black hole attacks are a serious threat to communication in tactical MANETs. In this work we present TOGBAD, a new centralised approach that uses topology graphs to identify nodes attempting to create a black hole. We use well-established techniques to gain knowledge about the network topology and use this knowledge to perform plausibility checks of the routing information propagated by the nodes in the network. We consider a node generating fake routing information to be malicious and therefore trigger an alarm if the plausibility check fails. Furthermore, we present promising first simulation results. With our new approach, it is possible to detect the attempt to create a black hole before the actual impact occurs.
This work explores performance issues of system-level interactions by means of performance modeling. We focus on I/O performance in a storage area network (SAN), namely, the performance of I/O interactions of host servers and storage subsystems via the SAN fabric. We present a component-based simulation performance model, which supports a rich variety of both existing and future storage subsystems, allows ...
- by Nava Aizikowitz and 1 other
- Operating System, Performance Model, Storage Area Network
This paper presents a novel data mirroring method for storage area networks (SANs) in a metropolitan area WDM (wavelength division multiplexing) ring network scenario. We describe our network architecture, the protocol, the network traffic and our SAN mirroring method. The WDM ring network with SAN mirroring is then analyzed using simulation results for average node throughput, queuing delay and packet dropping probability.
— Storage Area Networks (SANs) connect groups of storage devices to servers over fast interconnects using protocols like Fibre Channel or iSCSI, so that storage resources can be assigned to servers in a flexible and scalable way. An important challenge is controlling the complexity of the SAN configuration that results from the high scalability of the network and the diversity and interconnectivity of the devices. Policy-based validation has been proposed earlier as a solution to this configuration problem. We propose a lightweight, SQL-based solution that uses existing well-known technologies to implement such a validation system. Our approach is based on a relational database which stores configuration data extracted from the system via a WBEM standard interface. In contrast to other approaches, we use SQL to define our policy rules as executable checks on these configuration data. Each rule is embedded in a test case, defined by an XML schema, which combines each check with an ...
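A minimal sketch of this style of validation, assuming a hypothetical `hba` table and an illustrative firmware policy rule (neither is taken from the paper): configuration data lands in a relational table, and the policy rule is an executable SQL query whose result set lists the violations.

```python
# Hypothetical sketch: configuration data (as extracted e.g. via WBEM)
# loaded into a relational table, with a policy rule as an SQL check.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hba (host TEXT, firmware TEXT)")
conn.executemany("INSERT INTO hba VALUES (?, ?)",
                 [("host1", "3.2"), ("host2", "2.9")])

# Illustrative policy rule: every HBA must run firmware >= 3.0.
# The query returns exactly the configuration elements that violate it.
violations = conn.execute(
    "SELECT host FROM hba WHERE CAST(firmware AS REAL) < 3.0"
).fetchall()
print(violations)  # [('host2',)]
```

The appeal of the approach is that each rule is declarative and directly executable against the configuration snapshot, with no rule engine beyond the database itself.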
The amount of intelligent packet processing in an Ethernet switch continues to grow in order to support embedded applications such as network security, load balancing and quality of service assurance. This increased packet processing contributes to greater per-packet latency through the switch. In addition, there is a growing interest in using Ethernet switches in low-latency environments such as high-performance clusters, storage area networks and real-time media distribution. In this paper we propose Packet Prediction for Speculative Cut-through Switching (PPSCS), a novel approach to reducing the latency of modern Ethernet switches without sacrificing the feature-rich, policy-based forwarding enabled by deep packet inspection. PPSCS exploits the temporal nature of network communications to predict the flow classification of incoming packets and begin the speculative forwarding of packets before complex lookup operations are complete. Simulation studies using actual network traces indicate that correct prediction rates of up to 97% are achievable using only a small amount of prediction circuitry per port. These studies also indicate that PPSCS can reduce the latency of traditional store-and-forward switches by nearly a factor of 8, and reduce the latency of cut-through switches by a factor of 3.
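The prediction idea can be sketched in software, assuming a simple last-value cache keyed by flow (the paper's actual per-port prediction circuitry is hardware and is not reproduced here): the switch speculates that a packet from a previously seen flow gets the same forwarding decision as the last packet of that flow, and the full lookup later confirms or corrects the speculation.

```python
# Hypothetical software model of flow-based forwarding prediction.
class FlowPredictor:
    def __init__(self):
        self.cache = {}          # flow key -> last observed egress port

    def predict(self, flow_key):
        """Speculative decision, or None if the flow has not been seen."""
        return self.cache.get(flow_key)

    def confirm(self, flow_key, actual_port):
        """Full lookup result arrives; report whether speculation was correct
        and update the cache for the next packet of this flow."""
        correct = self.cache.get(flow_key) == actual_port
        self.cache[flow_key] = actual_port
        return correct

p = FlowPredictor()
p.confirm(("10.0.0.1", "10.0.0.2", 3260), 4)      # first packet: learn port 4
print(p.predict(("10.0.0.1", "10.0.0.2", 3260)))  # 4 (speculate same port)
```

Because flows are temporally bursty, such a cache is right most of the time, which is what makes speculative cut-through forwarding profitable.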
This paper explores the effect of the current generation of hardware support for IP storage area networks on application performance. To this end, the paper presents a comprehensive analysis of three competing approaches to building an IP storage area network that differ in their level of hardware support: software, TOE (TCP Offload Engine) and HBA (Host Bus Adapter). The software approach is based on the unmodified TCP/IP stacks that are part of a standard operating system distribution. For the two hardware-based approaches (TOE, HBA), we experimented with a range of adapters and chose a representative adapter for the current generation of each of the hardware approaches. The micro-benchmark analysis reveals that while hardware support does reduce CPU utilization for large block sizes, it can itself become a performance bottleneck that hurts throughput and latency with small block sizes. Furthermore, the macro-benchmark analysis demonstrates that while the curren...
With increasing demands on storage devices in the modern communication environment, the storage area network (SAN) has evolved to provide a direct connection that allows these storage devices to be accessed efficiently. To optimize the performance of a SAN, a three-stage hybrid electronic/optical switching node architecture based on the concept of an MPLS label switching mechanism, aimed at serving as a
We present two methods for weighted consistent hashing, also known as weighted distributed hash tables. The first method, called the Linear Method, combines the standard consistent hashing introduced by Karger et al. [9] with a linear weighted distance measure. By using node copies and different partitions of the hash space, the balance of this scheme approximates the fair weight relationship with high probability. The second method, called the Logarithmic Method, uses a logarithmic weighted distance between the peers and the data to find the corresponding node. For distributing one data element it provides perfect weighted balance. To provide this distribution for many data elements we use partitions to achieve a fair balance with high probability. These methods provide small fragmentation, which means that the hash space is divided into at most O(n log n) intervals. Furthermore, there is an efficient data structure that assigns data elements to the nodes in expected time O(log n). If small fragmentation is not an issue, one can replace the use of partitions by a method we call double hash functions. This method needs O(n) time for assigning elements to a node, yet it can be used directly for Storage Area Networks, where the number of nodes is small compared to the number of participating nodes in Peer-to-Peer networks.
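As a rough illustration of what weighted hashing has to achieve, the sketch below implements weighted rendezvous (highest-random-weight) hashing, a well-known related scheme, not the paper's Linear or Logarithmic Method: each data element is deterministically assigned to a node with probability proportional to the node's weight.

```python
# Weighted rendezvous hashing sketch (illustrative, not the paper's method).
import hashlib
import math

def _unit_hash(key, node):
    """Deterministic hash of (node, key) mapped into the open interval (0, 1)."""
    h = hashlib.sha256(f"{node}:{key}".encode()).hexdigest()
    return (int(h, 16) + 1) / (2**256 + 1)

def weighted_node(key, weights):
    """Pick the node maximising -weight / log(hash); each node wins
    with probability proportional to its weight."""
    return max(weights,
               key=lambda n: -weights[n] / math.log(_unit_hash(key, n)))

# A node with weight 2.0 should receive roughly twice as many blocks.
nodes = {"disk1": 1.0, "disk2": 2.0, "disk3": 1.0}
assignments = [weighted_node(f"block-{i}", nodes) for i in range(4000)]
```

Like the methods in the paper, this assignment is stable: changing one node's weight only moves data to or from that node, never between two unaffected nodes.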
This paper presents a novel lossless data compression device that extends the enterprise network to branch offices by integrating multiple communication technologies. The presented device incorporates Gigabit Ethernet and STM1/STM4/STM16 interfaces for WAN connectivity, Fibre Channel interfaces for storage area networks, and a 10G Ethernet interface for enterprise network connectivity. The device features a novel architecture that implements the LZ77 lossless data compression algorithm in hardware. The high-throughput data compression architecture enables the interfacing of these diverse high-speed communication technologies while preserving channel bandwidth to accommodate multiple applications. The device finds applications in WAN bandwidth optimization, healthcare, media and broadcasting, and storage area networks (SANs).
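As a software illustration of the LZ77 algorithm that the device implements in hardware (a toy encoder/decoder, not a model of the paper's architecture), LZ77 replaces repeated substrings with (offset, length, next-character) back-references into a sliding window:

```python
# Toy LZ77 codec: (offset, length, next_char) triples over a sliding window.
def lz77_encode(data, window=255):
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for off in range(max(0, i - window), i):
            length = 0
            # Extend the match; stop one short of the end so a literal remains.
            while (i + length < len(data) - 1 and
                   data[off + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - off, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    buf = []
    for off, length, nxt in triples:
        for _ in range(length):
            buf.append(buf[-off])   # copy from the window (may self-overlap)
        buf.append(nxt)
    return "".join(buf)

msg = "abcabcabcd"
assert lz77_decode(lz77_encode(msg)) == msg  # lossless round trip
```

A hardware implementation replaces the quadratic match search with parallel comparators over the window, which is what makes line-rate compression feasible.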
To meet ever higher demands on communications, carrier telecom networks must be upgraded to NGN (Next Generation Network) [1,3,6]. NGN is a layered and converged network with open and standardized interfaces between each layer, which supports different multi-media ...
Policy-based management provides the ability to dynamically re-configure DiffServ networks such that desired Quality of Service (QoS) goals are achieved. This includes network provisioning decisions, performing admission control, and adapting bandwidth allocation dynamically. QoS management aims to satisfy the Service Level Agreements (SLAs) contracted by the provider and therefore QoS policies are derived from SLA specifications and the provider's business goals. This policy refinement is usually ...