Automated generation of test cases using a performability model

Quantitative Software Reliability Modeling from Testing to Operation

2000

We first describe how several existing software reliability growth models based on nonhomogeneous Poisson processes (NHPPs) can be derived from a unified theory for NHPP models. Under this general framework, we can verify existing NHPP models and derive new ones; the approach covers a number of known models under different conditions. Based on this framework, we show a method of estimating and computing software reliability growth during the operational phase, which describes the transition from the testing phase to the operational phase. That is, we propose a method of predicting the fault detection rate to reflect changes in the user's operational environment. The proposed method offers a quantitative analysis of software failure behavior in field operation and provides useful feedback to the development process.
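As a hedged illustration only (the paper's unified framework is more general), the sketch below uses the well-known Goel-Okumoto NHPP model with an assumed two-phase fault detection rate to show how a change of rate at release time affects conditional reliability; all parameter values and the two-phase construction are invented for the example, not taken from the paper.

```python
import math

# Goel-Okumoto mean value function: m(t) = a * (1 - e^{-b t}),
# with a = expected total faults and b = fault detection rate.
def mean_failures(t, a, b):
    return a * (1.0 - math.exp(-b * t))

# Two-phase variant (assumed, simple construction): detection rate b_test up
# to release time T, then b_oper applied to the faults still remaining at T.
def mean_failures_two_phase(t, a, b_test, b_oper, T):
    if t <= T:
        return mean_failures(t, a, b_test)
    remaining = a - mean_failures(T, a, b_test)
    return mean_failures(T, a, b_test) + remaining * (1.0 - math.exp(-b_oper * (t - T)))

# Conditional reliability: probability of no failure in (t, t + x].
def conditional_reliability(x, t, mvf, **params):
    return math.exp(-(mvf(t + x, **params) - mvf(t, **params)))

# Assumed parameters: 100 expected faults, testing-phase rate 0.05, and an
# operational rate scaled down to reflect lighter field usage after release.
params = dict(a=100.0, b_test=0.05, b_oper=0.015, T=60.0)
print(conditional_reliability(10.0, 60.0, mean_failures_two_phase, **params))
```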

Automated performance and dependability evaluation using model checking

2002

Markov chains (and their extensions with rewards) have been widely used to determine performance, dependability and performability characteristics of computer communication systems, such as throughput, delay, mean time to failure, or the probability of accumulating at least a certain amount of reward in a given time. Due to the rapidly increasing size and complexity of systems, Markov chains and Markov reward models are difficult and cumbersome to specify by hand at the state-space level.
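For context, the sketch below shows what such a model looks like at the state-space level: a three-state Markov reward model of a two-server system, with the steady-state expected reward rate and the mean time to failure computed numerically. The states, rates, and rewards are assumed for illustration and are not taken from the paper or its tooling.

```python
import numpy as np

# States: 0 = both servers up, 1 = one up, 2 = both down. Assumed rates:
lam, mu = 0.01, 1.0          # per-server failure rate, repair rate

# Generator matrix Q (each row sums to zero).
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,           mu,  -mu],
])
reward = np.array([2.0, 1.0, 0.0])   # reward rate = number of working servers

# Steady-state distribution: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady state:", pi)
print("expected reward rate:", pi @ reward)

# Mean time to failure from state 0: treat state 2 as absorbing and solve
# Q_T @ t = -1 on the transient states {0, 1}.
Q_T = Q[:2, :2]
mttf = np.linalg.solve(Q_T, -np.ones(2))
print("MTTF from state 0:", mttf[0])
```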

RELAI testing: a technique to assess and improve software reliability

Testing software to assess or improve reliability presents several practical challenges. Conventional operational testing is a fundamental strategy that simulates the real usage of the system in order to expose the failures with the highest occurrence probability. However, practitioners find it unsuitable for assessing or delivering high reliability levels, and they regard the adoption of a "real" usage profile estimate as impractical, since it is a source of non-quantifiable uncertainty. Debug testing techniques aim to expose as many failures as possible, regardless of their impact on runtime reliability. These strategies are used either to assess or to improve reliability, but cannot do both in the same testing session. This article proposes Reliability Assessment and Improvement (RELAI) testing, a new technique designed to improve delivered reliability through an adaptive testing scheme while providing, at the same time, a continuous assessment of the reliability attained through testing and fault removal. The technique also quantifies the impact of partial knowledge of the operational profile. RELAI is evaluated on four software applications and compared, in separate experiments, with techniques conceived either for reliability improvement or for reliability assessment, demonstrating substantial improvements in both cases.
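To make the baseline concrete, the sketch below illustrates conventional operational testing with a simple Nelson-style reliability estimate, i.e. the kind of assessment that adaptive schemes such as RELAI build on; it is not the RELAI procedure itself, and the operational profile, partitions, and failure probabilities are all invented for the example.

```python
import random

# Assumed operational profile: probability that a demand falls in each partition.
profile = {"login": 0.5, "search": 0.3, "checkout": 0.2}

def run_test(partition):
    """Stand-in for executing a test case; returns True if the test fails."""
    assumed_failure_prob = {"login": 0.001, "search": 0.01, "checkout": 0.05}
    return random.random() < assumed_failure_prob[partition]

def operational_testing(n_tests, seed=0):
    random.seed(seed)
    partitions = list(profile)
    weights = [profile[p] for p in partitions]
    failures = 0
    for _ in range(n_tests):
        p = random.choices(partitions, weights=weights)[0]  # sample per profile
        if run_test(p):
            failures += 1
    # Nelson estimator: reliability on demand ~ 1 - observed failure frequency.
    return 1.0 - failures / n_tests

print("estimated reliability on demand:", operational_testing(10_000))
```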