
A Generalized FAST TCP scheme

Computer Communications, 2008

FAST TCP has been shown to be promising in terms of system stability, throughput and fairness. However, it requires buffering which increases linearly with the number of flows bottlenecked at a link. This paper proposes a new TCP algorithm that extends FAST TCP to achieve (α, n)-proportional fairness in steady state, yielding buffer requirements which grow only as the nth power of the number of flows. We call the new algorithm Generalized FAST TCP. We prove stability for the case of a single bottleneck link with homogeneous sources in the absence of feedback delay. Simulation results verify that the new scheme is stable in the presence of feedback delay, and that its buffering requirements can be made to scale significantly better than standard FAST TCP.
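The linear buffer growth described above follows from the standard FAST TCP window update, in which each flow holds a fixed number of packets (alpha) in the bottleneck queue at equilibrium. A minimal sketch of that update (function and parameter names are ours, not the paper's code):

```python
def fast_tcp_update(w, base_rtt, rtt, alpha, gamma=0.5):
    """One FAST TCP window update, run once per RTT.
    At equilibrium w = (base_rtt / rtt) * w + alpha, so each flow
    parks exactly `alpha` packets in the bottleneck queue; N flows
    therefore require roughly N * alpha packets of buffering."""
    return (1 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha)

# Iterate with a fixed RTT to watch the window settle at its fixed point.
w, base_rtt, rtt, alpha = 10.0, 0.100, 0.125, 50
for _ in range(200):
    w = fast_tcp_update(w, base_rtt, rtt, alpha)
queued = w * (rtt - base_rtt) / rtt   # packets this flow keeps queued
```

With these numbers the window converges to 250 packets, of which exactly alpha = 50 sit in the queue, which is why buffering scales linearly with the flow count under standard FAST.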

Transmission Control Protocol

Elsevier eBooks, 2009

In this paper we propose a sender side modification to TCP to accommodate small network buffers. We exploit the fact that the manner in which network buffers are provisioned is intimately related to the manner in which TCP operates. However, rather than designing buffers to accommodate the TCP AIMD algorithm, as is the traditional approach in network design, we suggest simple modifications to the AIMD algorithm to accommodate buffers of any size in the network. We demonstrate that networks with small buffers can be designed that transport TCP traffic in an efficient manner while retaining fairness and friendliness with standard TCP traffic.
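The coupling between AIMD and buffer provisioning that the abstract exploits can be illustrated with the classic single-flow buffer-sizing rule: if TCP backs off its window by a factor beta on loss, the bottleneck needs roughly (1 - beta)/beta of a bandwidth-delay product of buffering to stay busy. A sketch under that single-flow assumption (the function name is ours):

```python
def required_buffer(bdp_pkts, beta):
    """Buffer (in packets) needed to keep a link busy when a single
    TCP flow multiplies its window by `beta` on each loss.
    Standard TCP (beta = 0.5) needs one full bandwidth-delay product;
    a gentler backoff tolerates a much smaller buffer."""
    return bdp_pkts * (1.0 - beta) / beta

print(required_buffer(100, 0.5))   # 100.0 -- the classic one-BDP rule
print(required_buffer(100, 0.9))   # ~11.1 -- small buffers suffice
```

This is the direction the paper takes: rather than sizing the buffer for beta = 0.5, adjust the AIMD backoff to fit whatever buffer the network provides.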

Traditional TCP Implementations Are Tuned to Work Well over Wired Networks

In a wired network, packet loss occurs mainly due to network congestion. In a wireless link, by contrast, packet losses are caused mainly by bit errors resulting from noise, interference, and various kinds of fading. TCP performance in these environments is affected by three path characteristics not normally present in wired environments: a high bandwidth-delay product, packet losses due to corruption, and bandwidth asymmetry. Wireless TCP has no way to tell whether a packet loss is caused by congestion or by bit errors; it assumes the loss is due to congestion, turns on its congestion control algorithms to slow down the amount of data it transmits, and adopts its retransmission policy. Invoking congestion control for bit errors on a wireless channel reduces TCP throughput drastically. We propose an empirical architecture to recover these bit errors dynamically at the Data Link Layer, before a frame enters the buffer, which reduces the number of ret...

Socket Buffer AutoSizing for High-Performance Data Transfers

Journal of Grid Computing, 2003

It is often claimed that TCP is not a suitable transport protocol for data-intensive Grid applications in high-performance networks. We argue that this is not necessarily the case. Without changing the TCP protocol, congestion control, or implementation, we show that an appropriately tuned TCP bulk transfer can saturate the available bandwidth of a network path. The proposed technique, called SOBAS, is based on automatic socket buffer sizing at the application layer. In non-congested paths, SOBAS limits the socket buffer size based on direct measurements of the received throughput and of the corresponding round-trip time. The key idea is that the send window should be limited, after the transfer has saturated the available bandwidth in the path, so that the transfer does not cause buffer overflows (‘self-induced losses’). A difference with other socket buffer sizing schemes is that SOBAS does not require prior knowledge of the path characteristics, and it can be performed while the transfer is in progress. Experimental results in several high bandwidth-delay product paths show that SOBAS consistently provides a significant throughput increase (20% to 80%) compared to TCP transfers that use the maximum possible socket buffer size. We expect that SOBAS will be most useful for applications such as GridFTP in non-congested wide-area networks.
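The core SOBAS idea, capping the socket buffer near the measured throughput times RTT, can be sketched as follows (parameter names and defaults are ours; the real system measures throughput and round-trip time continuously while the transfer is in progress):

```python
def sobas_buffer_limit(recv_throughput_bps, rtt_s, mss_bytes=1448):
    """Cap the socket buffer near the measured bandwidth-delay product.
    Once the transfer saturates the path, a larger send window only
    fills router queues and risks self-induced losses."""
    bdp_bytes = int(recv_throughput_bps / 8 * rtt_s)
    return max(2 * mss_bytes, bdp_bytes)   # never below two segments

# e.g. 100 Mbit/s of measured throughput over a 50 ms round trip:
limit = sobas_buffer_limit(100e6, 0.050)   # 625000 bytes
```

An application would apply the result via `setsockopt(SOL_SOCKET, SO_SNDBUF, limit)`; unlike schemes that need the path's capacity in advance, both inputs here come from runtime measurement.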

An Effective Approach to Alleviating the Challenges of TCP

Recently, the TCP incast problem in data center networks has attracted wide attention from both industry and academia, and many attempts have been made to address it through experiments and simulations. This paper analyzes the TCP incast problem in data centers by focusing on the relationship between TCP throughput and the TCP congestion window size. The root cause of the TCP incast problem is explored, and the essence of current methods for mitigating TCP incast is explained. The soundness of our analysis is verified by simulations. The analysis and the simulation results provide significant implications for the TCP incast problem. Based on these implications, an effective approach named IDTCP (Incast Decrease TCP) is proposed to mitigate the TCP incast problem. Analysis and simulation results verify that our approach effectively mitigates the TCP incast problem and noticeably improves TCP throughput.
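One way to see the window/throughput relationship behind incast: with N synchronized senders sharing one bottleneck, the switch buffer overflows unless each congestion window stays small. This back-of-the-envelope helper (ours, not IDTCP itself) shows why large N is hard, since a window cannot shrink below one segment:

```python
def max_window_per_flow(buffer_pkts, bdp_pkts, n_flows):
    """Largest per-flow congestion window (in packets) that N synchronized
    senders can use without overflowing a shared bottleneck whose capacity
    is one bandwidth-delay product plus the switch buffer. The floor of
    one segment is exactly why incast persists at high fan-in: the
    aggregate burst cannot be reduced below N packets."""
    return max(1, (bdp_pkts + buffer_pkts) // n_flows)

print(max_window_per_flow(64, 16, 40))    # 2 packets per flow
print(max_window_per_flow(64, 16, 100))   # 1 -- already at the floor
```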

An Improvement of TCP Downstream Between Heterogeneous Terminals in an Infrastructure Network

2007

We measured the performance of data transmission between a desktop PC and a PDA in an infrastructure network based on IEEE 802.11x wireless LAN. Assuming that a PDA is mainly used for downloading data from its stationary server, i.e., a desktop PC, the PC and the PDA act as a fast sender and a slow receiver, respectively, due to substantial differences in their computational capabilities. With data transmission between these heterogeneous terminals, the downstream transmission time is up to 20% longer than the upstream transmission time. To mitigate this, we present two distinct approaches. First, increasing the size of the PDA's receive buffer makes the TCP congestion window size more stable; an approximate 32% increase in throughput can be obtained by increasing the buffer from 512 bytes to 32768 bytes. Second, a pre-determined delay should be inserted between packets transmitted at the sender. Assigning an inter-packet delay of 5 ms during downstream achieves a ...
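The second approach, a fixed inter-packet delay at the sender, can be sketched in a few lines (names and defaults are ours; the paper's measured sweet spot was a 5 ms gap):

```python
import time

def paced_send(sock, data, chunk=1460, gap_s=0.005):
    """Send `data` in MSS-sized chunks with a fixed inter-packet gap,
    giving a slow receiver time to drain its buffer between packets.
    `sock` is any object with a sendall() method (e.g. a TCP socket)."""
    for i in range(0, len(data), chunk):
        sock.sendall(data[i:i + chunk])
        time.sleep(gap_s)
```

The trade-off is explicit: pacing caps the instantaneous rate at roughly chunk/gap_s bytes per second, but avoids the receiver-side drops that force the fast sender into retransmissions.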

TCP throughput and buffer management

Proceedings Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000) (Cat. No. PR00607), 2000

There have been many debates about the feasibility of providing guaranteed Quality of Service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of the traffic to obtain the best performance for mission critical and time sensitive applications. This work explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of services are considered where TCP connections are assigned to these classes and mapped to two underlying queues with round robin scheduling and shared memory. In addition to improving the throughput, cell losses are minimized for the class of service (queue) with the higher priority.
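The biased two-queue strategy can be sketched as a shared-memory buffer that evicts low-priority packets when a high-priority arrival finds the buffer full, with round-robin service between the queues (class and method names are ours; this is a simplified model of the scheme, not the paper's implementation):

```python
from collections import deque

class TwoClassBuffer:
    """Shared-memory buffer feeding two queues. When full, a high-priority
    arrival pushes out a queued low-priority packet, biasing losses toward
    the low class for as long as congestion persists."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = {0: deque(), 1: deque()}   # 0 = high priority, 1 = low

    def enqueue(self, prio, pkt):
        if len(self.q[0]) + len(self.q[1]) >= self.capacity:
            if prio == 0 and self.q[1]:
                self.q[1].pop()    # evict the newest low-priority packet
            else:
                return False       # drop the arrival
        self.q[prio].append(pkt)
        return True

    def dequeue_rr(self, turn):
        """Round-robin service between the two queues."""
        for p in (turn % 2, (turn + 1) % 2):
            if self.q[p]:
                return self.q[p].popleft()
        return None
```

The bias shows up only under congestion: while the shared buffer has room, both classes are admitted, matching the paper's observation that unfairness is confined to overload periods.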

Cascaded TCP: Applying pipelining to TCP for efficient communication over wide-area networks

2013 IEEE Global Communications Conference (GLOBECOM), 2013

The bandwidth utilization of traditional TCP protocols (e.g., TCP New Reno) suffers over high-latency, high-bandwidth links due to the inherent characteristics of TCP congestion control. Conventional methods of improving throughput cannot be applied per se to streaming applications. The challenge is exacerbated by "big data" applications such as the Long Wavelength Array, whose data is generated at a rate of up to 4 terabytes per hour. To improve bandwidth utilization, we introduce layer-4 relay(s) that enable the pipelining of TCP connections. That is, a traditional end-to-end connection is split into independent streams, each with shorter latency, that are then concatenated (or cascaded) together to form an equivalent end-to-end TCP connection. This addresses the root cause by decreasing the latency over which the congestion-control protocol operates. To understand when relays are beneficial, we present an analytical model together with empirical data and its analysis to validate our argument and to characterize the impact of latency and available bandwidth on throughput. We also provide insight into how relays may be set up to achieve better bandwidth utilization.
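A rough way to see why splitting the path helps: by the simplified Mathis et al. throughput model, TCP rate scales as MSS / (RTT * sqrt(p)), so a chain of equal-latency relays raises the per-hop rate in proportion to the number of hops. This sketch assumes uniform hops and uniform loss, which the paper's analytical model treats more carefully:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_p):
    """Simplified Mathis et al. model: TCP rate ~ MSS / (RTT * sqrt(p))."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss_p))

def cascaded_throughput_bps(mss_bytes, rtt_s, loss_p, hops):
    """A pipeline of `hops` equal relays runs at the rate of its slowest
    hop; each hop sees only RTT/hops of latency, so the bottleneck rate
    rises `hops`-fold relative to the end-to-end connection."""
    return mathis_throughput_bps(mss_bytes, rtt_s / hops, loss_p)

direct = mathis_throughput_bps(1460, 0.200, 0.01)      # one 200 ms path
relayed = cascaded_throughput_bps(1460, 0.200, 0.01, 4)  # four 50 ms hops
```

The model also exposes the limit of the technique: relays shrink the RTT term only, so they help exactly when latency, not loss or link capacity, is the dominant constraint.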