A similarity study of I/O traces via string kernels
Related papers
A Novel String Representation and Kernel Function for the Comparison of I/O Access Patterns
2017
Parallel I/O access patterns act as fingerprints of a parallel program. To extract meaningful information from these patterns, they must be represented appropriately. Because string objects can be easily compared using kernel methods, this paper proposes a conversion to a weighted string representation, together with a novel string kernel function called the Kast Spectrum Kernel. The similarity matrices, obtained by applying this kernel over a set of examples from a real application, were analyzed using Kernel Principal Component Analysis (Kernel PCA) and Hierarchical Clustering. The evaluation showed that 2 of the 4 I/O access pattern groups were completely identified, while the other 2 formed a single cluster due to the intrinsic similarity of their members. The proposed strategy shows promise for other similarity problems involving tree-like structured data.
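The classic, unweighted k-spectrum kernel that the Kast Spectrum Kernel builds on can be sketched in a few lines. This is only an illustrative baseline, not the paper's weighted variant; the symbol strings below are hypothetical encodings of I/O access types:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum kernel: inner product of the k-mer count vectors of
    two strings. The paper's Kast Spectrum Kernel operates on weighted
    string representations; this is the plain, unweighted sketch."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs if m in ct)

# Hypothetical I/O-pattern strings (e.g. R = read, W = write):
a = "RRWWRRWW"
b = "RRWWRRWR"
print(spectrum_kernel(a, a, k=2))  # -> 13 (self-similarity)
print(spectrum_kernel(a, b, k=2))  # -> 12 (cross-similarity)
```

A full similarity matrix for Kernel PCA or clustering is then just this function evaluated over all pairs of trace strings.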
Discovery of application workloads from network file traces
Proceedings of the …, 2010
First, understanding what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that mimic realistic application behavior closely. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop.
Self-similarity in file systems
ACM SIGMETRICS Performance Evaluation Review, 1998
We demonstrate that high-level file system events exhibit self-similar behaviour, but only for short-term time scales of approximately under a day. We do so through the analysis of four sets of traces that span time scales of milliseconds through months, and that differ in the trace collection method, the file systems being traced, and the chronological times of the tracing. Two sets of detailed, short-term file system trace data are analyzed; both are shown to have self-similar-like behaviour, with consistent Hurst parameters (a measure of self-similarity) for all file system traffic as well as individual classes of file system events. Long-term file system trace data is then analyzed, and we discover that the traces' high variability and self-similar behaviour does not persist across time scales of days, weeks, and months. Using the short-term trace data, we show that sources of file system traffic exhibit ON/OFF source behaviour, characterized by highly variable-length bursts of activity followed by similarly variable-length periods of inactivity. This ON/OFF behaviour is used to motivate a simple technique for synthesizing a stream of events that exhibits the same self-similar short-term behaviour as was observed in the file system traces.
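The Hurst parameter can be estimated, for example, with the aggregated-variance method: for a self-similar series, Var(X^(m)) scales as m^(2H−2), so the slope of log-variance versus log block size yields H. The sketch below is my own illustration under that standard method, not the paper's analysis code:

```python
import math
import random

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16)):
    """Estimate the Hurst parameter H via the aggregated-variance method.
    The series is averaged over blocks of size m; for self-similar data,
    Var(X^(m)) ~ m^(2H-2), so the log-log slope gives 2H - 2."""
    xs, ys = [], []
    for m in block_sizes:
        n = len(series) // m
        agg = [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]
        mean = sum(agg) / n
        var = sum((a - mean) ** 2 for a in agg) / n
        xs.append(math.log(m))
        ys.append(math.log(var))
    # least-squares slope of log(var) against log(m)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return 1 + slope / 2

random.seed(0)
iid = [random.random() for _ in range(4096)]
print(round(hurst_aggregated_variance(iid), 2))  # near 0.5 for uncorrelated noise
```

Self-similar traffic, such as the short-term file system event streams in the paper, would instead yield H clearly above 0.5.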
The Case for Efficient File Access Pattern Modeling
Most modern I/O systems treat each file access independently. However, events in a computer system are driven by programs. Thus, accesses to files occur in consistent patterns and are by no means independent. The result is that modern I/O systems ignore useful information. Using traces of file system activity, we show that file accesses are strongly correlated with preceding accesses. In fact, a simple last-successor model (one that predicts each file access will be followed by the same file that followed it the last time it was accessed) successfully predicted the next file 72% of the time. We compare two previously proposed file access prediction models against this baseline and find a stark contrast in accuracy along with high overheads in state space. We then enhance one of these models to address its space requirements. This new model improves on the accuracy of the last-successor model by an additional 10%, while working within a state space that is within a constant factor (relative to the number of files) of the last-successor model. While this work was motivated by the use of file relationships for I/O prefetching, information regarding the likelihood of file access patterns has several other uses, such as disk layout and file clustering for disconnected operation.
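The last-successor baseline is simple enough to sketch directly; the trace below is a hypothetical example, not data from the paper:

```python
def last_successor_predictions(accesses):
    """Last-successor model: predict that file f will be followed by
    whatever file followed f the last time it was accessed. Returns the
    hit rate over the accesses for which a prediction could be made."""
    successor = {}
    hits = predictions = 0
    for prev, cur in zip(accesses, accesses[1:]):
        if prev in successor:
            predictions += 1
            if successor[prev] == cur:
                hits += 1
        successor[prev] = cur  # remember the newest successor of prev
    return hits / predictions if predictions else 0.0

# Hypothetical trace with a stable a -> b -> c cycle:
trace = ["a", "b", "c", "a", "b", "c", "a", "b"]
print(last_successor_predictions(trace))  # -> 1.0
```

The enhanced models in the paper keep richer per-file state than this single successor entry, trading space for the reported extra 10% accuracy.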
An In-Depth I/O Pattern Analysis in HPC Systems
2021
High-performance computing (HPC) systems consist of thousands of compute nodes, storage systems and high-speed networks, providing a multi-layer I/O stack of high complexity. By adjusting the diverse configuration settings that HPC systems provide, the I/O performance of applications can be improved. However, it is challenging to identify the optimal configuration settings without a thorough knowledge of the system, as each of the different I/O characteristics of applications can be an important factor in parameter decisions. In this paper, we use multiple machine learning approaches to perform an in-depth analysis of the I/O behaviors of HPC applications and to search for the optimal configuration settings for jobs sharing similar I/O characteristics. Overall, our results show that the proposed machine-learning-based prediction models predict, with high accuracy, the I/O performance that jobs on the HPC systems obtain under different configuration parameters, improving the R-squared score by up to 0.07.
I/O acceleration with pattern detection
Proceedings of the 22nd international symposium on High-performance parallel and distributed computing, 2013
The I/O bottleneck in high-performance computing is becoming worse as application data continues to grow. In this work, we explore how patterns of I/O within these applications can significantly affect the effectiveness of the underlying storage systems and how these same patterns can be utilized to improve many aspects of the I/O stack and mitigate the I/O bottleneck. We offer three main contributions in this paper. First, we develop and evaluate algorithms by which I/O patterns can be efficiently discovered and described. Second, we implement one such algorithm to reduce the metadata quantity in a virtual parallel file system by up to several orders of magnitude, thereby increasing the performance of writes and reads by up to 40 and 480 percent respectively. Third, we build a prototype file system with pattern-aware prefetching and evaluate it to show a 46 percent reduction in I/O latency. Finally, we believe that efficient pattern discovery and description, coupled with the observed predictability of complex patterns within many high-performance applications, offers significant potential to enable many additional I/O optimizations.
Discovering Structure in Unstructured I/O
2014
Abstract—Checkpointing is the predominant storage driver in today’s petascale supercomputers and is expected to remain as such in tomorrow’s exascale supercomputers. Users typically prefer to checkpoint into a shared file, yet parallel file systems often perform poorly for shared file writing. A powerful technique to address this problem is to transparently transform shared file writing into many exclusively written files, as is done in ADIOS and PLFS. Unfortunately, the metadata to reconstruct the fragments into the original file grows with the number of writers. As such, the current approach cannot scale to exaflop supercomputers due to the large overhead of creating and reassembling the metadata. In this paper, we develop and evaluate algorithms by which patterns in the PLFS metadata can be discovered and then used to replace the current metadata. Our evaluation shows that these patterns reduce the size of the metadata by several orders of magnitude, increase the performance of writes by...
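The core idea shared by the two pattern papers above, collapsing a regular run of write records into one compact descriptor, can be illustrated with a simple stride detector. This is an assumption-laden sketch of the general technique, not the PLFS metadata algorithms themselves:

```python
def compress_strided(records):
    """Pattern-based metadata compression sketch: a run of writes with a
    constant offset stride and equal length collapses into one tuple
    (start_offset, stride, count, length) instead of per-write records."""
    if not records:
        return []
    out = []
    start, length = records[0]
    stride, count = None, 1
    for off, ln in records[1:]:
        prev_off = start + (count - 1) * (stride or 0)
        step = off - prev_off
        if ln == length and (stride is None or step == stride):
            stride = step  # extend the current strided run
            count += 1
        else:
            out.append((start, stride or 0, count, length))
            start, length, stride, count = off, ln, None, 1
    out.append((start, stride or 0, count, length))
    return out

# 1000 strided writes of 4 KiB every 16 KiB collapse into one record:
writes = [(i * 16384, 4096) for i in range(1000)]
print(compress_strided(writes))  # -> [(0, 16384, 1000, 4096)]
```

Replacing per-write index entries with such descriptors is what makes the orders-of-magnitude metadata reductions reported above plausible: the descriptor size is independent of the number of writes in the run.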
IOscope: A Flexible I/O Tracer for Workloads' I/O Pattern Characterization
Storage systems are getting complex to handle HPC and Big Data requirements. This complexity calls for in-depth evaluations to ensure the absence of issues in all system layers. However, current performance evaluation activity is performed around high-level metrics for simplicity reasons, making it impossible to catch potential I/O issues in lower layers of the Linux I/O stack. In this paper, we introduce the IOscope tracer for uncovering the I/O patterns of storage systems' workloads. It performs filtering-based profiling over fine-grained criteria inside the Linux kernel. IOscope has near-zero overhead and verified behaviour inside the kernel thanks to relying on the extended Berkeley Packet Filter (eBPF) technology. We demonstrate the capabilities of IOscope to discover pattern-related issues through a performance study on MongoDB and Cassandra. Results show that clustered MongoDB suffers from a noisy I/O pattern regardless of the underlying storage (HDDs or SSDs). Hence, IOscope supports a better troubleshooting process and contributes to an in-depth understanding of I/O performance.
Program-Counter-Based Pattern Classification in Buffer Caching
2004
Program-counter-based (PC-based) prediction techniques have been shown to be highly effective and are widely used in computer architecture design. In this paper, we explore the opportunity and viability of applying PC-based prediction to operating systems design, in particular, to optimize buffer caching. We propose a Program-Counter-based Classification (PCC) technique for use in pattern-based buffer caching that allows the operating system to correlate I/O operations with the program context in which they are issued, via the program counters of the call instructions that trigger the I/O requests. This correlation allows the operating system to classify I/O access patterns on a per-PC basis, which achieves significantly better accuracy than previous per-file or per-application classification techniques. PCC also performs classification more quickly, as each per-PC pattern only needs to be learned once. We evaluate PCC via trace-driven simulations and an implementation in Linux, and compa...
An Execution Fingerprint Dictionary for HPC Application Recognition
2021 IEEE International Conference on Cluster Computing (CLUSTER), 2021
Applications running on HPC systems waste time and energy if they: (a) use resources inefficiently, (b) deviate from allocation purpose (e.g. cryptocurrency mining), or (c) encounter errors and failures. It is important to know which applications are running on the system, how they use the system, and whether they have been executed before. To recognize known applications during execution on a noisy system, we draw inspiration from the way Shazam recognizes known songs playing in a crowded bar. Our contribution is an Execution Fingerprint Dictionary (EFD) that stores execution fingerprints of system metrics (keys) linked to application and input size information (values) as key-value pairs for application recognition. Related work often relies on extensive system monitoring (many system metrics collected over large time windows) and employs machine learning methods to identify applications. Our solution only uses the first 2 minutes and a single system metric to achieve F-scores above 95 percent, providing comparable results to related work but with a fraction of the necessary data and a straightforward mechanism of recognition.
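The dictionary idea can be sketched as a plain key-value lookup: quantize a short window of one system metric into a coarse symbol tuple and use it as the key. The fingerprint construction and the entries below are hypothetical stand-ins; the paper's exact Shazam-style fingerprinting is not reproduced here:

```python
def make_fingerprint(metric_window, n_bins=8):
    """Hypothetical fingerprint: min-max normalize a short window of one
    system metric and quantize each sample into n_bins coarse levels,
    yielding a hashable key that tolerates small amplitude noise."""
    lo, hi = min(metric_window), max(metric_window)
    span = (hi - lo) or 1.0
    return tuple(min(n_bins - 1, int((v - lo) / span * n_bins))
                 for v in metric_window)

# Execution Fingerprint Dictionary: fingerprint -> (application, input size)
efd = {}
known = [1.0, 1.2, 3.5, 3.6, 1.1, 0.9]              # metric from a known run
efd[make_fingerprint(known)] = ("lammps", "small")  # hypothetical entry

observed = [1.0, 1.2, 3.5, 3.6, 1.1, 0.9]           # metric from a new run
print(efd.get(make_fingerprint(observed), "unknown"))  # -> ('lammps', 'small')
```

In the paper's setting the window covers only the first 2 minutes of a job's execution and a single metric, which is what keeps the recognition mechanism so lightweight.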