Testing and Validating the Quality of Specifications
Related papers
Model Checking Conformance with Scenario-Based Specifications
Lecture Notes in Computer Science, 2003
Specifications that describe typical scenarios of operations have become common for software applications, using, for example, use-cases of UML. For a system to conform with such a specification, every execution sequence must be equivalent to one in which the specified scenarios occur sequentially, where we consider computations to be equivalent if they only differ in that independent operations may occur in a different order. A general framework is presented to check the conformance of systems with such specifications using model checking. Given a model and additional information including a description of the scenarios and of the operations' independence, an augmented model using a transducer and temporal logic assertions for it are automatically defined and model checked. In the augmentation, a small window with part of the history of operations is added to the state variables. New transitions are defined that exchange the order of independent operations, and that identify and remove completed scenarios. If the model checker proves all the generated assertions, every computation is equivalent to some sequence of the specified scenarios. A new technique is presented that allows proving equivalence with a small fixed-size window in the presence of unbounded out-of-order of operations from unrelated scenarios. This key technique is based on the prediction of events, and the use of anti-events to guarantee that predicted events will actually occur. A prototype implementation based on Cadence SMV is described.
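To make the notion of equivalence concrete: two executions are equivalent when one can be turned into the other by swapping adjacent independent operations, which is exactly equality of their Foata-style normal forms. A minimal sketch of that check (the operation names and independence relation below are illustrative, not taken from the paper):

```python
def foata_normal_form(word, independent):
    """Canonical form of a word under commutation of adjacent independent
    letters: a sequence of 'blocks' of pairwise-independent letters.
    Two words are equivalent iff their normal forms coincide."""
    def dep(x, y):
        # A letter always depends on itself; otherwise consult the relation.
        return x == y or frozenset((x, y)) not in independent

    blocks, rest = [], list(word)
    while rest:
        block, deferred = [], []
        for x in rest:
            # x may join the current front block only if it depends on no
            # letter that occurred before it (already in the block or deferred).
            if any(dep(x, y) for y in block + deferred):
                deferred.append(x)
            else:
                block.append(x)
        blocks.append(sorted(block))  # sort for a deterministic representation
        rest = deferred
    return blocks

def equivalent(w1, w2, independent):
    return foata_normal_form(w1, independent) == foata_normal_form(w2, independent)
```

Here `independent` holds unordered pairs, e.g. `{frozenset(("a", "b"))}` states that operations a and b commute; everything not listed is treated as dependent.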
Generating test data from state-based specifications
Software Testing, Verification and Reliability, 2003
Although the majority of software testing in industry is conducted at the system level, most formal research has focused on the unit level. As a result, most system-level testing techniques are only described informally. This paper presents formal testing criteria for system level testing that are based on formal specifications of the software. Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide in a form that can be automatically manipulated. This paper presents general criteria for generating test inputs from state-based specifications. The criteria include techniques for generating tests at several levels of abstraction for specifications (transition predicates, transitions, pairs of transitions and sequences of transitions). These techniques provide coverage criteria that are based on the specifications and are made up of several parts, including test prefixes that contain inputs necessary to put the software into the appropriate state for the test values. The test generation process includes several steps for transforming specifications to tests. These criteria have been applied to a case study to compare their ability to detect seeded faults.
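The "test prefix" idea can be illustrated for transition-pair coverage: a shortest input prefix drives the machine into the state where a pair of consecutive transitions begins, and the two transition inputs follow. A sketch over a toy ATM-like state machine (the machine itself is invented for illustration, not from the paper):

```python
from collections import deque

# Hypothetical state machine: state -> {input: next_state}.
FSM = {
    "Idle":    {"insert_card": "WaitPin"},
    "WaitPin": {"pin_ok": "Menu", "pin_bad": "Idle"},
    "Menu":    {"eject": "Idle"},
}
START = "Idle"

def path_to(state):
    """Shortest input prefix driving the machine from START to `state` (BFS)."""
    frontier, seen = deque([(START, [])]), {START}
    while frontier:
        s, prefix = frontier.popleft()
        if s == state:
            return prefix
        for inp, t in FSM.get(s, {}).items():
            if t not in seen:
                seen.add(t)
                frontier.append((t, prefix + [inp]))
    return None  # unreachable state

def transition_pair_tests():
    """One test per pair of consecutive transitions: a prefix reaching the
    pair's start state, followed by the two transition inputs."""
    tests = []
    for s, outs in FSM.items():
        for inp1, mid in outs.items():
            for inp2 in FSM.get(mid, {}):
                tests.append(path_to(s) + [inp1, inp2])
    return tests
```

For this machine the criterion yields five test sequences, each beginning with the prefix inputs needed to reach the pair under test.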
Validating Specifications for Model-Based Testing
In model-based testing the behavior of a system under test is compared automatically with the behavior of a model. A significant fraction of issues found in testing appear to be caused by mistakes in the model. To ensure that the model prescribes the desired behavior, it must be validated by a human. In this work we describe a tool, esmViz, to support this validation. Models are given in a pure, lazy functional programming language. esmViz provides an interactive simulator of the model, as well as diagrams of the observed behavior. The tool is built on the iTask toolkit, which results in an extremely concise GUI definition. Experiments show that esmViz helps to gain understanding of a model and to detect and remedy errors.
Abstracting formal specifications to generate software tests via model checking
1999
A recent method combines model checkers with specification-based mutation analysis to generate test cases from formal software specifications. However, high-level software specifications usually must be reduced to make analysis with a model checker feasible. We propose a new reduction, parts of which can be applied mechanically, to soundly reduce some large, even infinite, state machines to manageable pieces. Our work differs from other work in that we use the reduction for generating test sets, as opposed to the typical goal of analyzing for properties. Consequently, we have different criteria, and we prove a different soundness rule. Informally, the rule is that counterexamples from the model checker are test cases for the original specification. The reduction changes both the state machine and the temporal logic constraints in the model checking specification to avoid generating unsound test cases. We give an example of the reduction and test generation.
Our approach is to define a reduction from a given specification to a smaller one that is more likely to be tractable for a model checker. We tailor the reduction for test generation, as opposed to the usual goal of analysis. A broad span of research, from early work on algebraic specifications [13] to more recent work such as [21], addresses the problem of relating tests to formal specifications. In particular, counterexamples from model checkers are potentially useful test cases. In addition to our use of the Symbolic Model Verifier (SMV) model checker [19] to generate mutation-adequate tests [2], Callahan, Schneider, and Easterbrook use the Simple PROMELA Interpreter (SPIN) model checker [16] to generate tests that cover each block in a certain partitioning of the input domain [8].
Gargantini and Heitmeyer use both SPIN and SMV to generate branch-adequate tests from Software Cost Reduction (SCR) requirements specifications [14]. The model checking approach to formal methods has received considerable attention in the literature, and readily available tools such as SMV and SPIN are capable of handling the state spaces associated with realistic problems [11]. Although model checking began as a method for verifying hardware designs, there is growing evidence that model checking can be applied with considerable automation to specifications for relatively large software systems, such as the Traffic Alert and Collision Avoidance System (TCAS) II [9]. The increasing usefulness of model checkers for software systems makes them attractive targets for use in aspects of software development other than pure analysis, which is their primary role today. Model checking has been successfully applied to a wide variety of practical problems, including hardware design, protocol analysis, operating systems, reactive system analysis, fault tolerance, and security.
Using model checking to generate tests from requirements specifications
1999
Recently, many formal methods, such as the SCR (Software Cost Reduction) requirements method, have been proposed for improving the quality of software specifications. Although improved specifications are valuable, the ultimate objective of software development is to produce software that satisfies its requirements. To evaluate the correctness of a software implementation, one can apply black-box testing to determine whether the implementation, given a sequence of system inputs, produces the correct system outputs.
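The black-box check described here reduces to executing the implementation on an input sequence and comparing its outputs with those the requirements prescribe. A minimal sketch, with an invented toggle-switch requirement standing in for an SCR-style specification:

```python
def spec_outputs(inputs):
    """Requirements model (invented example): the output is the state of a
    toggle that flips on every 'press' input."""
    state, outputs = "off", []
    for event in inputs:
        if event == "press":
            state = "on" if state == "off" else "off"
        outputs.append(state)
    return outputs

def conforms(implementation, inputs):
    """Black-box verdict: feed the input sequence to the implementation and
    compare its output sequence against the specification's."""
    return implementation(inputs) == spec_outputs(inputs)
```

An implementation stuck at "on" fails the check on any sequence containing an even number of presses, while the specification itself (used as its own implementation) trivially passes.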
Model Checking Large Software Specifications
IEEE Transactions on Software Engineering, 1998
In this paper we present our results and experiences of using symbolic model checking to study the specification of an aircraft collision avoidance system. Symbolic model checking has been highly successful when applied to hardware systems. We are interested in the question of whether or not model checking techniques can be applied to large software specifications.
Using a formal specification and a model checker to monitor and direct simulation
Proceedings of the Design Automation Conference (DAC), 2003
We describe a technique for verifying that a hardware design correctly implements a protocol-level formal specification. Simulation steps are translated to protocol state transitions using a refinement map and then verified against the specification using a model checker. On the specification state space, the model checker collects coverage information and identifies states violating certain properties. It then generates protocol-level traces to these coverage gaps and error states. This technique was applied to the multiprocessing hardware of the Alpha 21364 microprocessor and its cache coherence protocol. We were able to generate an error trace that exercised a bug in the implementation which had not been discovered before a prototype was built.
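The flow can be approximated in a few lines: a refinement map projects each simulation state onto the protocol state space, the projected trace is checked against the specification's transition relation, and the transitions exercised (and not exercised) are recorded as coverage. A sketch using an invented three-state cache-line protocol as the specification; the signal name and encoding are assumptions, not from the paper:

```python
# Protocol-level specification: the set of legal state transitions
# (invented I/S/M cache-line protocol for illustration).
SPEC = {("I", "S"), ("I", "M"), ("S", "M"), ("S", "I"), ("M", "I")}

def refine(sim_state):
    """Hypothetical refinement map: project a simulation state (a dict of
    RTL signal values) onto a protocol-level cache-line state."""
    return {0: "I", 1: "S", 2: "M"}[sim_state["line_mode"]]

def check_trace(sim_trace):
    """Translate simulation steps to spec transitions; return the transitions
    covered, the coverage gaps, and any transitions violating the spec."""
    covered, violations = set(), []
    states = [refine(s) for s in sim_trace]
    for a, b in zip(states, states[1:]):
        if a == b:
            continue  # stuttering simulation step: no protocol transition
        if (a, b) in SPEC:
            covered.add((a, b))
        else:
            violations.append((a, b))
    return covered, SPEC - covered, violations
```

The coverage gaps returned here are exactly the states of affairs for which the paper's technique would ask the model checker to generate directing traces.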
Validating specifications of dynamic systems using automated reasoning techniques
1995
In this paper, we propose a new approach to validating formal specifications of observable behavior of discrete dynamic systems. By observable behavior we mean system behavior as observed by users or other systems in the environment of the system. Validation of a formal specification of an informal domain tries to answer the question whether the specification actually describes the intended domain. This differs from the verification problem, which deals with the correspondence between formal objects, e.g. between a formal specification of a system and an implementation of it. We consider formal specifications of object-oriented dynamic systems that are subject to static and dynamic integrity constraints. To validate that such a specification expresses the intended behavior, we propose to use a tool that can answer reachability queries. In a reachability query we ask whether the system can evolve from one state into another without violating the integrity constraints. If the query is answered positively, the system should exhibit an example path between the states; if the answer is negative, the system should explain why this is so. An example path produced by the tool can be used to produce scenarios for presentations of system behavior, but can also be used as a basis for acceptance testing. In this paper, we discuss the use of planning and theorem-proving techniques to answer such queries, and illustrate the use of reachability queries in the context of information system development.
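A reachability query of this kind can be sketched as a search over constrained states: explore events breadth-first, prune any successor that violates an integrity constraint, and return the event path as the example scenario. The bank-account system below is an invented stand-in for the paper's object-oriented specifications:

```python
from collections import deque

# Hypothetical dynamic system: state = (balance, is_open) of a bank account.
def constraint_ok(state):
    """Static integrity constraint: the balance may never go negative."""
    balance, is_open = state
    return balance >= 0

def successors(state):
    """Events and resulting states; dynamic constraints (e.g. the account
    must be open to transact) are encoded as guards on the events."""
    balance, is_open = state
    if not is_open:
        yield ("open", (balance, True))
        return
    yield ("deposit_10", (balance + 10, True))
    if balance >= 10:
        yield ("withdraw_10", (balance - 10, True))
    if balance == 0:
        yield ("close", (0, False))

def reachable(start, goal, limit=1000):
    """Answer a reachability query: return a path of events from start to
    goal through constraint-satisfying states only, or None if none is
    found within the exploration limit."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier and len(seen) < limit:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for event, nxt in successors(state):
            if constraint_ok(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [event]))
    return None
```

The returned event path is precisely the kind of example scenario the paper proposes to use for behavior presentations and acceptance testing; a `None` answer corresponds to a negative query (here without the explanation component the paper calls for).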