Assessing the service quality of an Internet path through end-to-end measurement
Related papers
Predicting and bypassing end-to-end internet service degradations
IEEE Journal on Selected Areas in Communications, 2003
We study the patterns and predictability of Internet end-to-end service degradations, where a degradation is a significant deviation of the round-trip time (RTT) between a client and a server. We use simultaneous RTT measurements collected from several locations to a large, representative set of Web sites and study the duration and extent of degradations. We combine these measurements with BGP cluster information to learn the location of the cause.
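The abstract does not give the paper's exact degradation criterion; as a minimal illustration of the idea, the sketch below flags RTT samples that deviate sharply from a rolling baseline. The window size and the 2x-median threshold are assumptions, not the authors' definition.

```python
# Hypothetical sketch: flag RTT "degradations" as samples that deviate
# significantly from a rolling baseline. The 2x-median threshold and
# 30-sample window are illustrative assumptions.
from collections import deque
from statistics import median

def find_degradations(rtts, window=30, factor=2.0):
    """Return indices of RTT samples exceeding `factor` times the
    median of the preceding `window` samples."""
    baseline = deque(maxlen=window)
    flagged = []
    for i, rtt in enumerate(rtts):
        if len(baseline) == window and rtt > factor * median(baseline):
            flagged.append(i)
        baseline.append(rtt)
    return flagged

# Example: a stable 50 ms path with a transient spike around sample 40.
trace = [50.0] * 40 + [180.0, 175.0, 160.0] + [50.0] * 40
print(find_degradations(trace))  # -> [40, 41, 42]
```

Using the median (rather than the mean) as the baseline keeps a short burst of bad samples from dragging the baseline up and masking the rest of the degradation.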
Locating disruptions on an Internet path through end-to-end measurements
2013 IEEE Symposium on Computers and Communications (ISCC), 2013
In backbone networks carrying heavy traffic loads, unwanted and unusual end-to-end delay changes can happen, though possibly rarely. In order to understand and manage the network, and potentially avoid such abrupt changes, it is crucial and challenging to locate where in the network the cause of such delays lies, so that corresponding actions may be taken. To tackle this challenge, the present paper proposes a simple and novel approach that relies only on end-to-end measurements, unlike approaches in the literature that often require a distributed and possibly complicated monitoring/measurement infrastructure. The key idea is to use compressed sensing theory to estimate the delay on each hop between the two nodes where end-to-end delay measurement is conducted, and to infer the critical hops that contribute to the abrupt delay increases. To demonstrate its effectiveness, the proposed approach is applied to a real network. The results are encouraging, showing that the approach is able to locate the hops that contribute most to abrupt increases in the end-to-end delay.
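The compressed-sensing idea can be sketched in miniature. This is a toy illustration, not the paper's algorithm: a hand-built path/hop incidence matrix, a single "critical hop", and Orthogonal Matching Pursuit standing in for whatever sparse solver the authors actually use.

```python
# Toy sketch of the compressed-sensing idea: recover a sparse vector of
# per-hop delay increases x from a few end-to-end sums y = A @ x, where
# each row of A marks which hops a measured path traverses.
import numpy as np

def omp(A, y, sparsity):
    """Greedy Orthogonal Matching Pursuit: pick the column most
    correlated with the residual, then least-squares fit on the
    chosen support."""
    residual, support = y.astype(float), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Hand-built incidence matrix: 5 measured paths over 4 hops.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 1, 1, 0]], dtype=float)
x_true = np.array([0.0, 0.0, 30.0, 0.0])  # hop 2 adds 30 ms
y = A @ x_true                            # observed end-to-end increases
x_hat = omp(A, y, sparsity=1)
print(int(np.argmax(x_hat)))  # -> 2  (the recovered critical hop)
```

Because the delay increase is sparse (only one hop changed), five path measurements suffice to pin down which of the four hops is responsible, which is exactly the leverage compressed sensing provides.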
Rigorous Statistical Analysis of Internet Loss Measurements
IEEE/ACM Transactions on Networking, 2000
Loss measurements are widely used in today's networks. There are existing standards and commercial products to perform these measurements. The missing element is a rigorous statistical methodology for their analysis. Indeed, most existing tools ignore the correlation between packet losses and severely underestimate the errors in the measured loss ratios.
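The correlation point can be illustrated numerically. The sketch below is not the paper's methodology: it uses a moving-block bootstrap, a generic technique that preserves short-range correlation, to estimate the standard error of a measured loss ratio on a bursty trace. The burst pattern and block size are invented for the example.

```python
# Illustrative sketch: packet losses are often bursty, so treating them
# as i.i.d. understates the variance of the measured loss ratio. A
# moving-block bootstrap resamples contiguous blocks, preserving
# short-range correlation when estimating the standard error.
import random

def block_bootstrap_se(losses, block=50, reps=1000, seed=1):
    """Standard error of the mean loss ratio via moving-block bootstrap.
    `losses` is a 0/1 sequence (1 = packet lost)."""
    rng = random.Random(seed)
    n = len(losses)
    means = []
    for _ in range(reps):
        sample = []
        while len(sample) < n:
            s = rng.randrange(n - block)
            sample.extend(losses[s:s + block])
        means.append(sum(sample[:n]) / n)
    mu = sum(means) / reps
    var = sum((m - mu) ** 2 for m in means) / (reps - 1)
    return var ** 0.5

# Bursty loss trace: losses arrive in runs of 5, not independently.
trace = ([0] * 95 + [1] * 5) * 20   # 5% loss ratio over 2000 packets
print(block_bootstrap_se(trace))     # noticeably larger than the
# i.i.d. estimate sqrt(p*(1-p)/n) ≈ 0.0049
```

The gap between the bootstrap estimate and the i.i.d. formula is the underestimation the abstract warns about: a tool assuming independent losses would report error bars roughly half as wide as they should be on this trace.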
Measurement Based Modeling of Quality of Service in the Internet: A Methodological Approach
2001
This paper introduces a new methodology for analyzing and interpreting QoS values collected by active measurement and associated with an a priori constructive model. The originality of this solution is that we start with observed performance (or QoS) measures and derive the inputs that have led to these observations. This process is illustrated in the context of modeling the loss observed on an Internet path. It provides a powerful solution to the complex problem of QoS estimation and network modeling.
End-to-end quality of service seen by applications: A statistical learning approach
Computer Networks, 2010
The focus of this work is the estimation of quality of service (QoS) parameters seen by an application. Our proposal is based on end-to-end active measurements and statistical learning tools. We propose a methodology where the system is trained during short periods with application flows and probe packet bursts. We learn the relation between the QoS parameters seen by the application and the state of the network path, which is inferred from the interarrival times of the probe packet bursts. We obtain a continuous, non-intrusive QoS monitoring methodology. We propose two different estimators of the network state and analyze them using the Nadaraya-Watson estimator and Support Vector Machines (SVM) for regression. We compare these approaches and show results obtained by simulation and by measurements in operational networks.
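Of the two regression tools named, the Nadaraya-Watson estimator is simple enough to sketch directly. The training pairs, the single scalar feature, and the Gaussian bandwidth below are all illustrative assumptions, not the paper's actual data or feature set.

```python
# Minimal Nadaraya-Watson sketch in the spirit of the paper: learn a
# mapping from a network-state feature (here, a probe interarrival
# statistic) to a QoS value seen by the application.
import math

def nadaraya_watson(x_train, y_train, x, bandwidth=0.5):
    """Kernel-weighted average: sum_i K((x - x_i)/h) y_i / sum_i K(...),
    with a Gaussian kernel K."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2 / 2) for xi in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy training set: larger probe interarrival -> worse (higher) loss.
interarrival = [1.0, 2.0, 3.0, 4.0, 5.0]
loss_pct     = [0.1, 0.5, 1.0, 2.5, 6.0]
print(nadaraya_watson(interarrival, loss_pct, 3.0))  # ≈ 1.11
```

The estimator needs no parametric model of the path: the prediction at a new network state is just a smoothness-weighted average of the QoS values observed at similar states during training, which fits the train-then-monitor workflow the abstract describes.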
PlanetSeer: Internet Path Failure Monitoring and Characterization in Wide-Area Services
2004
Detecting network path anomalies generally requires examining large volumes of traffic data to find misbehavior. We observe that wide-area services, such as peer-to-peer systems and content distribution networks, exhibit large traffic volumes spread over large numbers of geographically dispersed endpoints. This makes them ideal candidates for observing wide-area network behavior. Specifically, we can combine passive monitoring of wide-area traffic to detect anomalous network behavior with active probes from multiple nodes to quantify and characterize the scope of these anomalies.
End-to-end service quality measurement using source-routed probes
25th Annual IEEE Conference on Computer …, 2006
The need to monitor real-time network services has prompted service providers to use new measurement technologies, such as service-specific probes. Service-specific probes are active probes that closely mimic the service traffic, so that they receive the same treatment from the network as the actual service traffic. These probes are end-to-end, and their deployment depends on solutions that address questions such as minimizing probe traffic while still obtaining maximum coverage of all the links in the network. In this paper, we provide a polynomial-time probe-path computation algorithm, as well as a ¾-approximate solution for merging probe paths when the number of probes exceeds a required bound. Our algorithms are evaluated using ISP topologies generated via Rocketfuel. We find that for most topologies, it is possible to cover a large fraction of the edges using only a small fraction of the nodes as terminals. Our work also suggests that the deployment strategy for active probes depends on cost issues, such as probe installation, probe setup, and maintenance costs.
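The paper's own polynomial-time and ¾-approximate algorithms are not reproduced in the abstract; as a generic illustration of the underlying coverage problem, the sketch below uses the standard greedy set-cover heuristic, repeatedly picking the candidate probe path that covers the most still-uncovered links. The tiny topology is invented.

```python
# Generic greedy heuristic for the probe-path coverage problem:
# choose probe paths until every link (edge) is traversed at least once.
def greedy_path_cover(paths, edges):
    """paths: dict name -> set of edges traversed; returns chosen names."""
    uncovered = set(edges)
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(paths[p] & uncovered))
        if not paths[best] & uncovered:
            break  # remaining edges unreachable by any candidate path
        chosen.append(best)
        uncovered -= paths[best]
    return chosen

# Tiny example: five links a-e, three candidate probe paths.
paths = {
    "p1": {"a", "b", "c"},
    "p2": {"c", "d"},
    "p3": {"d", "e"},
}
print(greedy_path_cover(paths, {"a", "b", "c", "d", "e"}))  # -> ['p1', 'p3']
```

Note how the greedy choice skips "p2" entirely: two probe paths suffice to cover all five links, which mirrors the abstract's observation that a small set of terminals can cover most edges.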
Internet Access Quality Monitor
Proceedings of the Fourth International Conference on Web Information Systems and Technologies, 2008
Assessing the perceived Quality of Service (QoS) of broadband Internet accesses from the end-user standpoint is important not only to monitor performance, but also to help end-users quantify the effective quality offered by their Internet accesses and compare it with the quality parameters specified in the Service Level Specification (SLS) of the connectivity service contracted from their ISPs (Internet Service Providers). Other key benefits are making end-users aware of the geographic quality distribution of Internet accesses, pinpointing congested links or areas, and enabling them to make price-performance trade-offs and subscribe to better ISPs. In this paper, we present the design principles of a practical and comprehensible Internet Access Quality Monitoring (IAQM) system and some aspects regarding its deployment and security. IAQM aims at supporting accurate, large-scale assessment of the performance of broadband Internet accesses and of end-user satisfaction. A collection of tests is planned to measure several QoS metrics of an Internet access, such as (but not limited to) download and upload rates, latency, jitter, and DNS (Domain Name System) lookup times.
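Two of the listed metrics can be computed from the same probe trace. The sketch below is illustrative only: the IAQM system's actual formulas are not given in the abstract, and jitter is taken here as the mean absolute difference between consecutive latency samples, which is one common definition among several.

```python
# Illustrative sketch of two metrics such a monitor might compute from a
# list of per-probe latencies (ms): mean latency and jitter, the latter
# defined here as the mean absolute difference of consecutive samples
# (an assumption; the paper's exact definition is not stated).
def latency_and_jitter(samples):
    mean_latency = sum(samples) / len(samples)
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    jitter = sum(diffs) / len(diffs)
    return mean_latency, jitter

lat, jit = latency_and_jitter([20.0, 22.0, 21.0, 25.0, 20.0])
print(lat, jit)  # -> 21.6 3.0
```

Separating the two matters for the SLS comparison the abstract mentions: a path can have a low mean latency yet high jitter, and interactive applications are typically more sensitive to the latter.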