A Framework for Testing Concurrent Programs
Related papers
CalFuzzer: An Extensible Active Testing Framework for Concurrent Programs
2009
Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.
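The scheduler-control step of the second phase can be pictured with a small sketch (the class, method, and map below are hypothetical, not CalFuzzer's actual API): when a thread reaches a program location that the predictive phase flagged as one side of a potential race, it pauses briefly so that the other thread involved in the report can reach its conflicting access, which makes the predicted bug materialize with high probability.

```java
// Minimal sketch of an active-testing rendezvous, assuming the predictive
// phase emits flagged source locations. Names are illustrative only.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ActiveScheduler {

    // rendezvous points keyed by the source locations flagged by the
    // predictive phase (hypothetical representation of its race reports)
    private static final Map<String, Semaphore> pending = new ConcurrentHashMap<>();

    /** Called (via instrumentation) just before an access the analysis flagged. */
    public static void beforeFlaggedAccess(String location) {
        Semaphore partner = pending.putIfAbsent(location, new Semaphore(0));
        if (partner == null) {
            // first thread to arrive: wait a bounded time for the racing partner
            try {
                pending.get(location).tryAcquire(1, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        } else {
            // partner is already waiting: wake it so both conflicting accesses
            // execute back to back, turning the predicted race into a real one
            partner.release();
        }
    }
}
```

If the partner never arrives, the timeout lets the controlled run degrade into an ordinary execution instead of hanging, which is one reason this style of testing keeps its overhead low.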
ConcJUnit: Unit Testing for Concurrent Programs
In test-driven development, tests are written for each program unit before the code is written, ensuring that the code has a comprehensive unit testing harness. Unfortunately, unit testing is much less effective for concurrent programs than for conventional sequential programs, partly because extant unit testing frameworks provide little help in addressing the challenges of testing concurrent code. In this paper, we present ConcJUnit, an extension of the popular unit testing framework JUnit that simplifies the task of writing tests for concurrent programs by handling uncaught exceptions and failed assertions in all threads, and by detecting child threads that are not forced to terminate before the main thread ends.
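To see the gap ConcJUnit closes, consider a plain JUnit 4 test (class and test names below are illustrative) in which an assertion fails inside a child thread: the AssertionError terminates only that thread, never reaches the runner, and the test is reported as passing.

```java
// Demonstrates the problem with plain JUnit 4: a failed assertion in a
// child thread does not fail the test method that spawned it.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ChildThreadFailureTest {

    @Test
    public void failureInChildThreadGoesUnnoticed() throws InterruptedException {
        Thread child = new Thread(() -> {
            // The AssertionError thrown here dies with the child thread and
            // never reaches the JUnit runner in the main thread.
            assertEquals(42, 41);
        });
        child.start();
        child.join();   // the test method returns normally, so JUnit reports green
    }
}
```

A runner in the spirit of ConcJUnit instead captures uncaught exceptions and failed assertions from every thread and also reports child threads that the test never waits for before it ends.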
Framework for testing multi-threaded Java programs
Concurrency and Computation: Practice and Experience, 2003
Finding bugs due to race conditions in multi-threaded programs is difficult, mainly because there are many possible interleavings, any of which may contain a fault. In this work we present a methodology for testing multi-threaded programs that has minimal impact on the user and is likely to find interleaving bugs. Our method reruns existing tests in order to detect synchronization faults. We find that a single test executed a number of times in a controlled environment may be as effective in finding synchronization faults as many different tests. This saves a great deal of resources, since tests are expensive to write and maintain. We observe that simply rerunning tests, without ensuring in some way that the interleaving will change, yields almost no benefit. We implement the methodology in our test generation tool, ConTest. ConTest combines the replay algorithm, which is essential for debugging, with our interleaving test generation heuristics. ConTest also contains an instrumentation engine, a coverage analyzer, and a race detector (not finished yet) that enhance its bug detection capabilities. The greatest advantage of ConTest, besides finding bugs of course, is its minimal effect on the user. When ConTest is integrated into the test harness, the user may not even be aware that it is being used.
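The core observation, that reruns help only when the interleaving actually changes, can be reproduced with a self-contained sketch (not ConTest's mechanism; all names are illustrative) that widens a lost-update race window with injected sleeps.

```java
// Reruns the same racy "test body" many times, once without and once with
// noise seeded between the read and the write of a shared counter.
import java.util.Random;

public class RerunWithNoiseDemo {

    static int counter;                              // shared, intentionally unsynchronized
    static final Random rnd = new Random();

    // hypothetical noise hook seeded at the shared access
    static void maybeNoise(boolean noisy) {
        if (noisy && rnd.nextBoolean()) {
            try {
                Thread.sleep(2);                     // widen the race window
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    static int runOnce(boolean noisy) throws InterruptedException {
        counter = 0;
        Runnable inc = () -> {
            int tmp = counter;                       // read the shared variable
            maybeNoise(noisy);
            counter = tmp + 1;                       // write back (lost update if interleaved)
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join();  b.join();
        return counter;                              // 2 unless an update was lost
    }

    public static void main(String[] args) throws InterruptedException {
        int quiet = 0, loud = 0;
        for (int run = 0; run < 500; run++) {
            if (runOnce(false) != 2) quiet++;
            if (runOnce(true)  != 2) loud++;
        }
        System.out.println("lost updates without noise: " + quiet + ", with noise: " + loud);
    }
}
```

Without the injected sleep the two short threads typically run one after the other, so rerunning changes little; with it, lost updates usually show up within a handful of runs.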
Test-First Java Concurrency for the Classroom
Concurrent programming is becoming more important due to the growing dominance of multi-core processors and the prevalence of graphical user interfaces (GUIs). To prepare students for the concurrent future, instructors have begun to address concurrency earlier in their curricula. Unfortunately, test-driven development, which enables students and practitioners to quickly develop reliable single-threaded programs, is not as effective in the domain of concurrent programming. This paper describes how ConcJUnit can simplify the task of writing unit tests for multi-threaded programs, and provides examples that can be used to introduce students to concurrent programming.
Groundwork for the Development of Testing Plans for Concurrent Software
2010
While multi-threading has become commonplace in many application domains (e.g., embedded systems, digital signal processing (DSP), networks, IP services, and graphics), multi-threaded code often requires complex coordination of threads. As a result, multi-threaded implementations are prone to subtle bugs that are difficult and time-consuming to locate. Moreover, current testing techniques that address multi-threading are generally costly while their effectiveness is unknown. The development of cost-effective testing plans requires an in-depth study of the nature, frequency, and cost of concurrency errors in the context of real-world applications. The full paper will lay the groundwork for such a study, with the purpose of informing the creation of a parametric cost model for testing multi-threaded software. The current version of the paper provides motivation for the study, an outline of the full paper, and a bibliography of related papers.
Multithreaded Java program test generation
IBM Systems Journal, 2000
We describe ConTest, a tool for detecting synchronization faults in multithreaded Java™ programs. The program under test is seeded with a sleep(), yield(), or priority() primitive at shared memory accesses and synchronization events. At run time, ConTest makes random or coverage-based decisions as to whether the seeded primitive is to be executed. Thus, the probability of finding concurrent faults is increased. A replay algorithm facilitates debugging by saving the order of shared memory accesses and synchronization events.
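A rough picture of what such a seeded decision point might look like at run time is sketched below; the helper name and probabilities are invented for illustration and are not ConTest's real interface.

```java
// Illustrative runtime helper: the instrumented program calls contestPoint()
// at shared-memory accesses and synchronization events, and the helper
// randomly decides whether to execute one of the seeded primitives.
import java.util.Random;

public final class SeededNoise {

    private static final Random rnd = new Random();

    /** Instrumentation calls this at shared-memory accesses and sync events. */
    public static void contestPoint(String location) {
        if (rnd.nextInt(10) != 0) {
            return;                                   // usually do nothing
        }
        // a coverage-based variant would bias this choice using `location`
        switch (rnd.nextInt(3)) {
            case 0:
                Thread.yield();                       // hint a context switch
                break;
            case 1:
                try {
                    Thread.sleep(1 + rnd.nextInt(5)); // short delay
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                break;
            default:
                // occasionally demote the current thread's priority
                Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
                break;
        }
    }
}
```

Because the helper only adds delays, yields, and priority hints, every interleaving it provokes is one the JVM could have scheduled anyway.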
Concurrent software testing in practice: a catalog of tools
Proceedings of the 6th International Workshop on Automating Test Case Design, Selection and Evaluation - A-TEST 2015, 2015
The testing of concurrent programs is very complex due to the non-determinism present in those programs. They must be subjected to a systematic testing process that assists in the identification of defects and guarantees quality. Although testing tools have been proposed to support concurrent program testing, to the best of our knowledge, no study that gathers these testing tools into a catalog for testers is available in the literature. This paper proposes a new classification for a set of testing tools for concurrent programs, with respect to attributes such as the testing technique supported, the programming language, and the development paradigm. The purpose is to provide a useful categorization guide that helps testing practitioners and researchers select testing tools for concurrent programs. A systematic mapping was conducted to identify studies on testing tools for concurrent programs. As a main result, we provide a catalog of 116 testing tools, appropriately selected and classified, among which the following techniques were identified: functional testing, structural testing, mutation testing, model-based testing, data race and deadlock detection, deterministic testing, and symbolic execution. The programming languages with the most support were Java and C/C++. Although a large number of tools have been categorized, most of them are academic and only a few are available on a commercial scale. The classification proposed here contributes to the state of the art of testing tools for concurrent programs and also provides information for the exchange of knowledge between academia and industry.
A Concurrency Testing Tool and Its Plug-Ins for Dynamic Analysis and Runtime Healing
Lecture Notes in Computer Science, 2009
This report presents a tool for concurrency testing (abbreviated as ConTest) and some of its extensions. The extensions (called plug-ins in this report) are implemented through the listener architecture of ConTest. Two plug-ins for runtime detection of common concurrency bugs are presented: the first (Eraser+) detects data races, while the second (AtomRace) detects not only data races but also more general bugs caused by violations of atomicity assumptions. A third plug-in presented in this report is designed for runtime healing: it hides bugs that made it into the field, so that when problems are detected they can be circumvented. Several experiments demonstrate the capabilities of these plug-ins.

ConTest [3] is an advanced tool for testing, debugging, and measuring coverage of concurrent Java programs. Its main goal is to expose concurrency-related bugs in parallel and distributed programs using random noise injection. ConTest instruments the bytecode, either off-line or at runtime during class load, and injects calls to ConTest runtime functions at selected places. These functions sometimes try to cause a thread switch or a delay (generally referred to as noise). The selected places are those whose relative order among the threads can impact the result, such as entrances to and exits from synchronised blocks, accesses to shared variables, and calls to various synchronisation primitives. Context switches and delays are attempted by calling methods such as yield() or sleep(). The decisions are random, so different interleavings are attempted on each run, which increases the probability that a concurrency bug will manifest. Heuristics are used to try to reveal typical bugs. No false alarms are reported, because all interleavings that occur with ConTest are legal as far as the JVM rules are concerned. ConTest itself does not know that an error occurred; this is left to the user or the test framework to discern, exactly as they do without ConTest.
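As a rough illustration of the kind of analysis such a plug-in can layer on top of ConTest's access events, the sketch below implements the classic lockset (Eraser-style) check in a few lines; it is a simplification under assumed listener hooks, not the Eraser+ code.

```java
// Simplified lockset check: for each shared variable, intersect the set of
// locks held at every access; an empty intersection means no single lock
// protects the variable consistently, so a potential data race is reported.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LocksetChecker {

    // candidate locks believed to protect each shared variable
    private final Map<String, Set<String>> candidates = new HashMap<>();

    /** Called by a listener on every access to a shared variable. */
    public synchronized void onAccess(String variable, Set<String> locksHeld) {
        Set<String> candidate = candidates.get(variable);
        if (candidate == null) {
            // first access: start with all locks currently held
            candidates.put(variable, new HashSet<>(locksHeld));
            return;
        }
        candidate.retainAll(locksHeld);         // keep only locks held every time
        if (candidate.isEmpty()) {
            System.err.println("potential data race on " + variable);
        }
    }
}
```

AtomRace, as described above, goes further than this kind of check by also catching violations of the programmer's atomicity assumptions, not just unprotected accesses.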
Testing concurrent programs to achieve high synchronization coverage
… on Software Testing and …, 2012
The effectiveness of software testing is often assessed by measuring coverage of some aspect of the software, such as its code. There is much research aimed at increasing code coverage of sequential software. However, there has been little research on increasing coverage for concurrent software. This paper presents a new technique that aims to achieve high coverage of concurrent programs by generating thread schedules that target as-yet-uncovered coverage requirements. Our technique first estimates synchronization-pair coverage requirements, and then generates thread schedules that are likely to cover the requirements not yet covered. This paper also presents a description of a prototype tool that we implemented in Java, and the results of a set of studies we performed using the tool on several open-source programs. The results show that, for our subject programs, our technique achieves higher coverage faster than random testing techniques, and that the estimation-based heuristic contributes substantially to the technique's effectiveness.
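One way to picture a synchronization-pair coverage requirement is the sketch below (illustrative only, with assumed listener hooks, not the paper's tool): for each lock, remember the source site of the last release, and when a different thread next acquires that lock, record the ordered pair of sites as covered.

```java
// Records covered synchronization pairs (release site -> acquire site) per lock,
// counting a pair only when the acquiring thread differs from the releasing one.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SyncPairCoverage {

    private static final class LastRelease {
        final String site;
        final long threadId;
        LastRelease(String site, long threadId) { this.site = site; this.threadId = threadId; }
    }

    private final Map<Object, LastRelease> lastRelease = new HashMap<>();
    private final Set<String> coveredPairs = new HashSet<>();

    /** Listener hook: a thread released `lock` at source location `site`. */
    public synchronized void onRelease(Object lock, String site) {
        lastRelease.put(lock, new LastRelease(site, Thread.currentThread().getId()));
    }

    /** Listener hook: a thread acquired `lock` at source location `site`. */
    public synchronized void onAcquire(Object lock, String site) {
        LastRelease prev = lastRelease.get(lock);
        if (prev != null && prev.threadId != Thread.currentThread().getId()) {
            coveredPairs.add(prev.site + " -> " + site);   // one covered sync pair
        }
    }

    public synchronized int coveredCount() {
        return coveredPairs.size();
    }
}
```

The technique described in the abstract then steers the scheduler toward pairs that remain uncovered; the sketch only shows how a covered pair could be recorded.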