Automated detection of unexpected communication network performance changes

Detection of Network Faults and Performance Problems

2001

Baselining of normal network operation for automatic detection of anomalies is addressed. A model for network traffic is presented in which the studied variables are modeled as a finite mixture. Based on stochastic approximation of the maximum likelihood function, we propose a baseline of normal network operation as the asymptotic distribution of the difference between successive estimates of the model parameters. The baseline multivariate random variable is shown to be stationary, with mean zero under normal operation. Performance problems are characterized by sudden jumps in the mean. Detection is formulated as an online change-point problem, where the task is to process residuals and raise alarms as soon as anomalies occur. An analytical expression for the false alarm rate allows us to choose the threshold automatically. Extensive experimental results on a real network showed that the monitoring agent is able to detect even slight changes in the characteristics of the network, and adapt t...
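The abstract frames detection as an online change-point problem over zero-mean residuals. As a minimal sketch of that idea (not the paper's estimator, and not its analytically derived threshold), a two-sided CUSUM over a residual stream looks roughly like this; the drift k and threshold h below are illustrative values only:

```python
import numpy as np

def cusum_alarms(residuals, k=0.5, h=5.0):
    """Two-sided CUSUM over a residual stream.

    Residuals are assumed to have mean zero under normal operation;
    a sustained shift in the mean drives one of the cumulative sums
    past the threshold h and raises an alarm. k (drift) and h
    (threshold) are illustrative, not derived from a false-alarm rate.
    """
    g_pos, g_neg = 0.0, 0.0
    alarms = []
    for t, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - k)
        g_neg = max(0.0, g_neg - r - k)
        if g_pos > h or g_neg > h:
            alarms.append(t)
            g_pos, g_neg = 0.0, 0.0  # restart after an alarm
    return alarms

# Example: zero-mean noise with a level shift at t = 300
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 200)])
print(cusum_alarms(x))
```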

Comparison of Anomaly Detection Techniques Applied to Different Problems in the Telecom Industry

2021

Nowadays, with the growth of digital transformation in companies, a huge amount of data is generated every second as a result of various processes. Often this data contains important information which, when properly analyzed, can help a company gain a competitive advantage. One data processing task common to many different applications is the detection of anomalies, that is, data points or groups of data points that stand out from most of the others. Since it is not feasible to have an operator constantly analyzing the data to find anomalous values, due to the generally large volumes of data, the focus of this dissertation is the exploration of a Data Mining area called anomaly detection. In this dissertation we first develop anomaly detection software in Python that applies 10 different anomaly detection algorithms, after automatically optimizing their parameters, to an arbitrary dataset. Before applying these algorithms, the software also performs data scaling and imputation of missing values. It outputs the performance metrics of each algorithm, the values of the optimized parameters, and graphics for visualizing the results, generated using t-SNE. This software was then applied to three case studies to compare the performance of different anomaly detection approaches on real-world datasets. These datasets have an increasing level of difficulty, in terms of the amount of missing data and the uncertainty of the ground truth regarding the anomalies. In the first case study, we detected fraudulent bank transactions using a public dataset. In the second, we identified clients of a telecommunications company who were likely to miss their payment, leading to contract termination, using a dataset from that company. In the third case, we detected low quality of internet service, again using a large dataset with real measurements from a telecommunications company. Finally, we implemented a state-of-the-art neural network model, especially applicable to identifying anomalies in time-series data, optimized its parameters, and applied it to the problem of low quality of service.
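As a rough illustration of the kind of pipeline this dissertation describes (imputation, scaling, a detector, and a t-SNE view of the results), the sketch below uses scikit-learn with IsolationForest standing in for one of the ten compared algorithms; the synthetic data and the parameter values are assumptions, not taken from the work:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest
from sklearn.manifold import TSNE

# Synthetic data with missing values, standing in for an arbitrary dataset
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
X[rng.random(X.shape) < 0.05] = np.nan  # about 5% missing values

# Imputation and scaling, as the described software does before detection
X_clean = SimpleImputer(strategy="median").fit_transform(X)
X_scaled = StandardScaler().fit_transform(X_clean)

# One of many possible detectors; the dissertation compares ten of them
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X_scaled)  # -1 = anomaly, 1 = normal

# 2-D embedding for visual inspection of the detector's output
embedding = TSNE(n_components=2, random_state=0).fit_transform(X_scaled)
print(f"{(labels == -1).sum()} points flagged as anomalous,",
      f"embedding shape {embedding.shape}")
```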

Statistical detection of enterprise network problems

1999

The detection of network fault scenarios was achieved using an appropriate subset of Management Information Base (MIB) variables. Anomalous changes in the behavior of the MIB variables were detected using a sequential Generalized Likelihood Ratio (GLR) test. This information was then temporally correlated using a duration filter to provide node-level alarms which correlated with observed network faults and performance problems. The algorithm was implemented on data obtained from two different network nodes.
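A hedged sketch of the two ingredients named in this abstract: a GLR test for a mean shift in a single MIB variable within a sliding window, and a duration filter that temporally correlates raw alarms. Gaussian samples with known variance are assumed, and the window formulation and parameter values are illustrative rather than the paper's:

```python
import numpy as np

def glr_mean_shift(window, sigma=1.0):
    """GLR statistic for a single mean shift inside a window.

    Assumes i.i.d. Gaussian samples with known variance sigma^2 and
    mean zero before the change; maximizes the log-likelihood ratio
    over all candidate change points j.
    """
    w = np.asarray(window, dtype=float)
    best = 0.0
    for j in range(1, len(w)):
        tail = w[j:]
        shift = tail.mean()
        # log-likelihood ratio of "mean = shift after j" vs "mean = 0"
        best = max(best, len(tail) * shift**2 / (2 * sigma**2))
    return best

def duration_filter(alarm_flags, min_duration=3):
    """Keep only alarms that persist for min_duration consecutive
    windows, mimicking the temporal correlation step in the abstract."""
    out, run = [], 0
    for t, flag in enumerate(alarm_flags):
        run = run + 1 if flag else 0
        if run >= min_duration:
            out.append(t)
    return out

# Example: a window containing a mean shift near its end
rng = np.random.default_rng(5)
w = np.concatenate([rng.normal(0, 1, 40), rng.normal(1.5, 1, 10)])
print(glr_mean_shift(w))
```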

Statistical analysis of measurements for telecommunication-network troubleshooting

IEEE Transactions on Instrumentation and Measurement, 2003

This paper presents a method for assessing network failures by means of a suitable statistical post-processing analysis of measurement data collected by nonintrusive devices. The proposed solution can be easily implemented by already available monitoring instruments for network performance surveillance and automatic detection of anomalies. The effectiveness of the method has been proven on real-life telecommunication systems, and results are reported.

The Role of Network Monitoring and Analysis in Ensuring Optimal Network Performance

Nwakeze Osita Miracle, 2024

Network monitoring and analysis are significant components of successfully functioning networks because they offer real-time data and information about network performance. This literature review gives an overview of how network monitoring tools have developed, tracing the shift from simple availability and connectivity tests to the use of machine learning and artificial intelligence. To support the key areas of concern, an analytical overview of performance indicators such as uptime, response time, packet loss, and bandwidth is provided, with live examples of their importance in a network. Present-day approaches are discussed, with a focus on the most popular service solutions such as SolarWinds, Nagios, and PRTG Network Monitor, examined through case studies illustrating their efficient usage in various contexts. The paper also examines the latest methods for improving the effective monitoring of the network, the security aspects, and the problems arising from growing demands in scalability, integration, and varied forms of cyber threats. Possible advances, specifically automation and AI solutions, are proposed as crucial for meeting these challenges. The review highlights the necessity of strong monitoring processes for managing a complex network and addressing emergent technology needs in the future.

A Decision-Theoretic Approach to Detect Anomalies in Internet Paths

2010

Many algorithms have been proposed in the last decade to detect traffic anomalies in enterprise networks. However, most of these algorithms cannot detect anomalies that occur beyond enterprise boundaries. Anomaly monitoring and detection on end-to-end Internet paths, although important for network operations, is challenging due to lack of access and control over intermediate network devices. In this paper, we propose an algorithm that detects anomalies or significant events on an end-to-end Internet path by monitoring the path's available bandwidth. We first evaluate existing algorithms on a comprehensive dataset of more than a million bandwidth measurements spanning three years. We show that existing algorithms do not incorporate the typical behavior of a path in the anomaly detection process and consequently incur accuracy degradations. We therefore propose to filter noisy bandwidth measurements to extract a typical or baseline statistical distribution of a path's bandwidth. This baseline model is in turn leveraged in a generic decision-theoretic framework to provide timely detection of significant path events. We show that the proposed detector provides highly accurate performance and easily surpasses the accuracy of existing techniques.
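The core idea in this abstract, filtering noisy bandwidth measurements into a baseline distribution and then deciding whether new measurements are consistent with it, could be sketched as follows; the Gaussian baseline, the quantile filtering, and the tail-probability decision rule are simplifying assumptions standing in for the paper's decision-theoretic framework, and the trace is synthetic:

```python
import numpy as np
from scipy import stats

def fit_baseline(bandwidth, lower_q=5, upper_q=95):
    """Filter measurements to the central quantile range and fit a
    Gaussian baseline; the paper's filtering and model may differ."""
    lo, hi = np.percentile(bandwidth, [lower_q, upper_q])
    core = bandwidth[(bandwidth >= lo) & (bandwidth <= hi)]
    return core.mean(), core.std(ddof=1)

def flag_events(bandwidth, mu, sigma, alpha=1e-3):
    """Flag measurements whose two-sided tail probability under the
    baseline falls below alpha (a simple decision-rule stand-in)."""
    z = np.abs(bandwidth - mu) / sigma
    threshold = stats.norm.ppf(1 - alpha / 2)
    return np.where(z > threshold)[0]

# Hypothetical path bandwidth trace with a capacity drop at index 800
rng = np.random.default_rng(2)
trace = np.concatenate([rng.normal(90, 5, 800), rng.normal(60, 5, 200)])
mu, sigma = fit_baseline(trace[:800])   # baseline from the quiet period
print(flag_events(trace, mu, sigma)[:10])
```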

Detection of Anomalies in the Computer Network Behaviour

European Journal of Engineering and Formal Sciences, 2020

The goal of anomaly-based intrusion detection is to build a system which monitors computer network behaviour and generates alerts if either a known attack or an anomaly is detected. An anomaly-based intrusion detection system detects intrusions based on a reference model which identifies normal behaviour of the computer network and flags deviations from it as anomalies. Basic challenges in anomaly-based detection are the difficulty of identifying ‘normal’ network behaviour and the complexity of the dataset needed to train the intrusion detection system. Supervised machine learning can be used to train binary classifiers to recognize the notion of normality. In this paper we present an algorithm for feature selection and instance normalization which reduces the Kyoto 2006+ dataset in order to increase accuracy and decrease the time for training, testing and validating intrusion detection systems based on five models: k-Nearest Neighbour (k-NN), weighted k-NN (wk-NN), Support Vector Machine (SVM), Decisio...
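To make the classification setup concrete, here is a minimal sketch of training three of the named classifier families (k-NN, weighted k-NN, SVM) on a min-max-normalized feature matrix; synthetic data stands in for the reduced Kyoto 2006+ dataset, and the parameter values are illustrative rather than the paper's optimized choices:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a reduced, labelled intrusion dataset
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)

X = MinMaxScaler().fit_transform(X)  # instance normalization step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "weighted k-NN": KNeighborsClassifier(n_neighbors=5,
                                          weights="distance"),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```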

Passive, automatic detection of network server performance anomalies in large networks

Network management in a large organization often involves, whether explicitly or implicitly, the responsibility for ensuring the availability and responsiveness of resources attached to the network, such as servers and printers. Users often think of the services they rely on, such as web sites and email, as part of the network. Although tools exist for ensuring the availability of the servers running these services, ensuring their performance is a more difficult problem. In this dissertation, I introduce a novel approach to managing the performance of servers within a large network broadly and cheaply. I continuously monitor the border link of an enterprise network, building for each inbound connection an abstract model of the application-level dialog contained therein, without affecting the operation of the server in any way. The model includes, for each request/response exchange, a measurement of the server response time, which is the fundamental unit of performance I use...
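As a small illustration of the core measurement, per-exchange server response time derived from passively observed request/response timestamps, the sketch below compares each server's latest response time against its own baseline; the exchanges, field layout, and the 5x threshold are hypothetical, not the dissertation's method:

```python
from statistics import median

# Hypothetical passively observed exchanges: (server, request_ts, response_ts)
exchanges = [
    ("web01", 10.00, 10.12), ("web01", 11.00, 11.09),
    ("web01", 12.00, 13.40),              # one slow response
    ("mail01", 10.50, 10.55), ("mail01", 11.50, 11.58),
]

# Server response time is the gap between request and response timestamps
by_server = {}
for server, req_ts, resp_ts in exchanges:
    by_server.setdefault(server, []).append(resp_ts - req_ts)

# Flag servers whose latest response time far exceeds their own median
for server, times in by_server.items():
    baseline = median(times[:-1]) if len(times) > 1 else times[0]
    if times[-1] > 5 * baseline:
        print(f"{server}: possible performance anomaly "
              f"(latest {times[-1]:.2f}s vs baseline {baseline:.2f}s)")
```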

Modeling and Simulation of Internet Network Fault Diagnostics and Optimization using Neural Network

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/modeling-and-simulation-of-internet-network-fault-diagnostics-and-optimization-using-neural-network
https://www.ijert.org/research/modeling-and-simulation-of-internet-network-fault-diagnostics-and-optimization-using-neural-network-IJERTV2IS90726.pdf

The fault diagnostics system constructs a neural network model of the performance of each subsystem in a normal operating mode and in each of a plurality of different possible failure modes. This work explores the implementation of a neural network (NN) for analysis and decision-making within the realm of IP network management (IPNM). The majority of NN models examined in this thesis use RTA and Loss as inputs, but Time is also included for some of the models. We start by addressing RQ1 and RQ2 and build a NN model that serves as a benchmark on which to base the rest of the work. This initial model is then examined to determine whether outliers exist and whether their removal improves the benchmark model. To determine where the network may have been experiencing trouble when classifying the sample event data, a translated model configuration is used. Unlike the previous NN model, which had a single output, this model has three outputs, one for each state the network is tasked with determining.
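A hedged sketch of a small classifier with three outputs (one per network state) over RTA-style and loss-style features, using scikit-learn's MLPClassifier as a stand-in for the paper's NN model; the synthetic features, the three-state labelling rule, and the network size are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in features: round-trip average (RTA), loss, time of day
rng = np.random.default_rng(4)
rta = rng.normal(50, 15, 600)                     # milliseconds
loss = np.clip(rng.normal(1, 2, 600), 0, None)    # percent
hour = rng.integers(0, 24, 600)

# Hypothetical three network states: 0 = normal, 1 = degraded, 2 = faulty
state = np.where(loss > 5, 2, np.where(rta > 70, 1, 0))

X = np.column_stack([rta, loss, hour])
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X, state)

# Predict the state for three example measurement vectors
print(clf.predict([[55, 0.5, 12], [90, 0.5, 12], [60, 8.0, 12]]))
```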