A testability-dependent maintainability-prediction technique

1992, Annual Reliability and Maintainability Symposium


Abstract

Existing maintainability-prediction techniques for electronic systems do not directly take into account some important measures of testability. This paper outlines a new mean-time-to-repair (MTTR) prediction technique which is a modification of MIL-HDBK-472 procedure V.
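The MTTR figure in MIL-HDBK-472-style predictions is conventionally a failure-rate-weighted mean of module repair times. A minimal sketch of that weighted mean, assuming this standard form (the module data below are hypothetical):

```python
def predict_mttr(modules):
    """Failure-rate-weighted mean time to repair.

    modules: list of (failure_rate, repair_time_hours) tuples.
    MTTR = sum(lambda_i * M_i) / sum(lambda_i)
    """
    total_rate = sum(lam for lam, _ in modules)
    if total_rate == 0:
        raise ValueError("total failure rate must be positive")
    return sum(lam * m for lam, m in modules) / total_rate

# Hypothetical module data: (failures per 1e6 hours, repair hours)
modules = [(120.0, 0.5), (45.0, 2.0), (10.0, 6.0)]
mttr = predict_mttr(modules)  # 1.2 hours
```

A testability modification of the kind the paper describes would adjust the per-module repair times before taking this weighted mean.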

Testability Estimation Framework

International Journal of Computer Applications, 2010

Testability has always been an elusive concept, and its correct measurement or evaluation is a difficult exercise. Most studies measure testability, or more precisely the attributes that affect it, at the source-code level. Although testability measurement at the source-code level is a good indicator for effort estimation, it delivers its information late in the development process: a decision to change the design to improve testability after coding has started may be very expensive and error-prone, whereas estimating testability early in the development process may greatly reduce the overall cost. This paper provides a roadmap for industry personnel and researchers to assess, and preferably quantify, software testability in the design phase. A prescriptive framework is proposed to integrate testability within the development life cycle; it may be used to benchmark software products according to their testability.

A Brief Review of Software Reliability Prediction Models

Software plays an important role in every field of human activity today, from medical diagnosis to remote control of spacecraft, so it is important for software to provide failure-free performance whenever needed. The information technology industry has witnessed rapid growth in the recent past, and competition among firms has increased with it. Software organizations in developing countries such as India can no longer survive on cost advantage alone; they need to deliver reliable, high-quality software on time. A great deal of research has been carried out on software quality management and reliability estimation. The objective of this paper is to provide a brief review of the major research contributions in the field of software reliability and to identify future research areas in software reliability estimation and prediction. Keywords: software reliability growth models, nonhomogeneous Poisson process models, S-shaped models, imperfect debugging.
I. INTRODUCTION. Many organizations utilize information technology (IT) to improve productivity and to enhance operational efficiency and responsiveness [1]. As a result, the IT industry has witnessed tremendous growth in the past few decades, and as the number of IT companies increased, so did the competition among them. Delivering reliable, high-quality software on time and within budget is a challenge for many organizations [2], [3]. Companies often compromise on software testing and release software with residual defects, which makes the software unreliable. Software reliability is defined as the probability of failure-free operation of a software system for a specified time in a specified environment [4]. Failure of software during operation can lead to customer dissatisfaction and loss of market share, and the failure of software used in a medical device or an air traffic control system can have a disastrous effect on individuals as well as society. Hence it is imperative for software firms to ensure their product is sufficiently reliable before release. This paper briefly reviews the important developments in the field of software reliability and identifies future research areas. The remainder of this article is arranged as follows: Section II describes the literature-review methodology, Section III gives the literature-review analysis, and Section IV discusses the conclusions.
II. LITERATURE REVIEW METHODOLOGY. Many articles have been presented at conferences, published in journals, and written as books in the last few decades on software reliability estimation and prediction. The aim of this paper is to provide a brief review of the important research on software reliability models. The process started with a search for relevant published articles; the scope of the review is limited to published books, journal papers, and important conference proceedings. The databases searched were IEEE Xplore, ScienceDirect, Google Scholar, and ResearchGate. Two hundred and nine papers were identified for review. After reading the abstracts, ninety-seven papers were shortlisted; another twenty-nine were later dropped because their content was not directly related to the focus of the review. Finally, sixty-eight papers are included in the review. The details are given in Fig. 1.

Advanced models for software reliability prediction

2011 Proceedings - Annual Reliability and Maintainability Symposium, 2011

This article describes the advanced parametric models for assessment and prediction of software reliability, based on statistics of bugs at the initial stage of testing. The parametric model approach, commonly associated with reliability issues, deals with the evaluation of the amount of bugs in the code. Computed parameter values inserted into the model allow to estimate: (a) number of bugs remaining in the product, and (b) time required to detect the remaining bugs. Many models are developed for similar purpose: Duane Reliability Growth Model, Goel Model, Weibull Model, Classical S-shaped Model, Ohba S-shaped Model, etc. Taking into account some detailed, but practical, aspects of the software testing process, a few Advanced Models were developed and usefully implemented by the authors. The proposed models are sensitive to the situations typical for the early stages of Software development. As a result, one deals with the essentially non-linear, multimodal goal function to define the optimal value as the estimation of the unknown control parameter. To support the optimization of such complex models, the Cross-Entropy Global Optimization Method is proposed. Some authentic numerical examples are considered to demonstrate the efficiency of the proposed models.
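The Goel model mentioned above is usually taken to be the Goel-Okumoto NHPP, whose mean value function is m(t) = a(1 - e^(-bt)). Assuming that form, the two estimates the abstract lists, remaining bugs and time to detect them, follow directly (the parameter values below are illustrative, not taken from the paper):

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto expected cumulative number of bugs found by time t."""
    return a * (1.0 - math.exp(-b * t))

def remaining_bugs(t, a, b):
    """Expected bugs still in the product after testing for time t."""
    return a - go_mean(t, a, b)

def time_to_find(fraction, a, b):
    """Test time at which the given fraction of the a total bugs is expected found."""
    return -math.log(1.0 - fraction) / b

# Illustrative parameters: a = 100 total bugs, detection rate b = 0.05 per day
a, b = 100.0, 0.05
found_30 = go_mean(30, a, b)      # expected bugs found after 30 days (about 77.7)
left_30 = remaining_bugs(30, a, b)
t_95 = time_to_find(0.95, a, b)   # days until 95% of bugs are expected found
```

In practice a and b are fitted to the observed bug counts; the point of the abstract's "advanced" models is that this fitting becomes a non-linear, multimodal optimization.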

Needs and Importance of Reliability Prediction: An Industrial Perspective

Information Sciences Letters, 2020

Reliability plays a very important role in obtaining quality software, and existing reliability-prediction models are well-recognized resources for supporting the management of software quality. Although practitioners have proposed several reliability-prediction models, these models still face various problems in industrial use, because there is a gap between the development of software reliability-prediction models and their industrial application; this gap needs to be filled. A review of previous work and best practices also discloses the noticeable need for, and importance of, software reliability prediction. The author gives some suggestions to practitioners for increasing the usability of reliability-prediction models. This article may help developers reduce failure rates and enhance software reliability.

Analysis of the reliability of a subset of change metrics for defect prediction

2008

In this paper, we describe an experiment that analyzes the relative importance and stability of change metrics for predicting defects across three releases of the Eclipse project. The results indicate that, out of 18 change metrics, 3 contain most of the information about software defects. Moreover, those 3 metrics remain stable across the three releases.
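One standard way to quantify how much "information about software defects" a metric carries is information gain with respect to a binary defect label. This is a generic sketch, not the paper's exact analysis, and the threshold and data are made up:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(metric, labels, threshold):
    """Information gain from splitting modules on metric <= threshold."""
    left = [l for m, l in zip(metric, labels) if m <= threshold]
    right = [l for m, l in zip(metric, labels) if m > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - cond

# Hypothetical data: number of revisions per module, and whether it was defective
revisions = [1, 2, 9, 12]
defective = [0, 0, 1, 1]
gain = info_gain(revisions, defective, threshold=5)  # perfect split -> 1.0 bit
```

Ranking all 18 change metrics by such a score, release by release, would reproduce the kind of importance/stability comparison the abstract describes.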

A method proposal for early software reliability estimation

1992

This paper presents a proposed method for estimating software reliability before the implementation phase. The method assumes that a formal description technique is used and that a tool can be developed to perform dynamic analysis, i.e. to locate semantic faults in the design. The analysis is performed both with a usage profile as input and as a full analysis, i.e. locating all faults that the tool can find. The tool must provide failure data in terms of time since the last failure was detected. The mapping of the dynamic failures to the failures encountered during statistical usage testing and operation is discussed. The method can be applied either to the software specification or, as a step in the development process, to the design descriptions. The proposed method allows software reliability estimates that can be used both as a quality indicator and for planning and controlling resources, development times, etc. at an early stage in the development of software systems.

Analyzing Forty Years of Software Maintenance Models

Software maintenance has dramatically evolved over the last four decades, coping with continuously changing software development models and programming languages and adopting increasingly advanced prediction models. In this work, we present the initial results of a Systematic Literature Review (SLR), highlighting the evolution of the metrics and models adopted over the last forty years.

Fuzzy Based Approach for Predicting Software Maintainability

Software maintenance is the process of modifying existing operational software while leaving its primary functions intact. It encompasses a broad range of activities, such as error correction, enhancement of capabilities, deletion of obsolete capabilities, and optimization. Software maintainability assessment is a major issue these days: producing software that is easy to maintain may save industry large costs, and the maintenance of existing software can account for 70% of the total effort put into application development [Pres05]. The value of software can be enhanced by meeting additional requirements, making it easier to use, improving efficiency, and employing newer technologies. This paper discusses various issues and challenges related to the maintainability assessment of software systems. The present work proposes a fuzzy-logic-based approach for quantifying the maintainability of a software system based on the combined effect of four major aspects of the software: average number of live variables, average life span of variables, average cyclomatic complexity, and comment ratio. Classroom projects are used to estimate and validate the proposed maintainability model.
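A fuzzy-logic quantification of this kind can be sketched with triangular membership functions and weighted-average defuzzification. Everything below is an illustrative simplification: the rule base, set shapes, output peaks, and the single normalized-complexity input are made up (the paper combines four metrics):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_maintainability(complexity):
    """One-input sketch with two rules:
    'low complexity -> high maintainability' and
    'high complexity -> low maintainability'.
    complexity is normalized to [0, 1]; output is defuzzified as a
    weighted average of the output-set peaks (0.9 = high, 0.2 = low)."""
    low = tri(complexity, -0.5, 0.0, 0.6)   # membership in 'low complexity'
    high = tri(complexity, 0.4, 1.0, 1.5)   # membership in 'high complexity'
    if low + high == 0:
        return 0.5                          # no rule fires; neutral default
    return (low * 0.9 + high * 0.2) / (low + high)

score = fuzzy_maintainability(0.5)  # mid complexity -> mid maintainability
```

A full version along the paper's lines would fuzzify all four metrics and use a larger rule base, but the firing/defuzzification mechanics are the same.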

PREDICTABILITY MEASURES FOR SOFTWARE RELIABILITY MODELS

It is critical to be able to achieve an acceptable quality level before a software package is released, and it is often important to meet a target release date. To estimate the testing effort required, it is necessary to use a software reliability growth model. While several different software reliability growth models have been proposed, there exist no clear guidelines about which model should be used. Here a two-component predictability measure is presented that characterizes the long-term predictability of a model. The first component, average predictability, measures how well a model predicts throughout the testing phase. The second component, average bias, is a measure of the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources have been analyzed. The results presented here indicate that some models perform better than others in most cases.
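The paper defines the two components precisely; as a hedged illustration, one common formulation compares each intermediate prediction of the total fault count with the eventually observed total, using the signed mean relative error as bias and the mean absolute relative error as (inverse) predictability:

```python
def average_bias(predicted, actual_total):
    """Signed mean relative error across checkpoints.
    Positive means the model tends to overestimate the fault count."""
    errs = [(p - actual_total) / actual_total for p in predicted]
    return sum(errs) / len(errs)

def average_predictability(predicted, actual_total):
    """Mean absolute relative error across checkpoints (lower is better)."""
    errs = [abs(p - actual_total) / actual_total for p in predicted]
    return sum(errs) / len(errs)

# Hypothetical: a model's total-fault predictions at successive test checkpoints,
# for a project that eventually showed 100 faults
predictions = [70.0, 85.0, 96.0, 101.0]
bias = average_bias(predictions, 100.0)            # negative: underestimates
spread = average_predictability(predictions, 100.0)
```

Note these exact formulas are an assumption for illustration, not necessarily the paper's definitions.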

Software Reliability Modeling during the early stages of System test


This is a continuation of research addressing the performance of a subset of a specific class of software reliability models: the nonhomogeneous Poisson process (NHPP) models. The object of this paper is to determine the reliability of software during the early stages of the system engineering period using software reliability growth models (SRGMs). Actual software failure data are analyzed, with the early stage of system test represented as the 30th percentile of the total test time of each dataset. The study takes three well-known NHPP models and determines their ability to estimate reliability during the defined early stages of the system testing period. The predictive quality of each of the models on test is examined. Parameters are estimated by the maximum-likelihood method. Results of the study also address the reliability growth of the failure data for the datasets on test.
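For the Goel-Okumoto NHPP (one common member of this class, though not necessarily one of the paper's three), maximum-likelihood estimation from individual failure times reduces to one-dimensional root finding: b solves a single equation and a then follows in closed form. A sketch with synthetic failure times:

```python
import math

def go_mle(times, T):
    """Maximum-likelihood (a, b) for the Goel-Okumoto NHPP with mean value
    function m(t) = a*(1 - exp(-b*t)), given failure times observed on [0, T].
    b solves  n/b - sum(t_i) - n*T*exp(-bT)/(1 - exp(-bT)) = 0,
    then a = n / (1 - exp(-bT))."""
    n, s = len(times), sum(times)

    def g(b):
        e = math.exp(-b * T)
        return n / b - s - n * T * e / (1.0 - e)

    lo, hi = 1e-8, 10.0
    while g(hi) > 0:          # widen bracket until the sign changes
        hi *= 2.0
    for _ in range(200):      # bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    a = n / (1.0 - math.exp(-b * T))
    return a, b

# Synthetic failure times (hours) concentrated early in a T = 50 h test window,
# as reliability growth assumes; real datasets replace these
times = [1.0, 2.0, 3.0, 5.0, 8.0, 10.0, 15.0, 20.0]
a_hat, b_hat = go_mle(times, 50.0)
```

Fitting on only the first 30% of each dataset's test time, as the paper does, simply means truncating `times` and `T` accordingly before calling the estimator.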


Explicit Modelling and Treatment of Repair in Prediction of Dependability

IEEE Transactions on Dependable and Secure Computing, 2020

In engineering practice, multiple repair actions are considered carefully by designers, and their success or failure defines further control actions and the evolution of the system state. Such treatment is not fully supported by the current state-of-the-art in dependability analysis. We propose a novel approach for explicit modelling and analysis of repairable systems, and describe an implementation, which builds on HiP-HOPS, a method and tool for model-based synthesis of dependability evaluation models. HiP-HOPS is augmented with Pandora, a temporal logic for the qualitative analysis of Temporal Fault Trees (TFTs), and capabilities for quantitative dependability analysis via Stochastic Activity Networks (SAN). Dependability prediction is achieved via explicit modelling of local failure and repair events in a system model and then by: (i) propagation of local effects through the model and synthesis of repair-aware TFTs for the system, (ii) qualitative analysis of TFTs that respects both failure and repair logic and (iii) quantification of dependability via translation of repair-aware TFTs into SAN. The approach provides insight into the effects of multiple and alternative

Maintainability prediction: a regression analysis of measures of evolving systems

21st IEEE International Conference on Software Maintenance (ICSM'05), 2005

In order to build predictors of the maintainability of evolving software, we first need a means for measuring maintainability as well as a training set of software modules for which the actual maintainability is known. This paper describes our success at building such a predictor. Numerous candidate measures for maintainability were examined, including a new compound measure. Two datasets were evaluated and used to build a maintainability predictor. The resulting model, Maintainability Prediction Model (MainPredMo), was validated against three held-out datasets. We found that the model possesses predictive accuracy of 83% (accurately predicts the maintainability of 83% of the modules). A variant of MainPredMo, also with accuracy of 83%, is offered for interested researchers.

United States Patent, McEnroe et al.: Automated System Testability Assessment Method

2017

A procedure for calculating maintainability and testability parameters of a complex system uses computer software to enable the calculations to be made repeatedly during the development of the system. Failure modes and failure rates, elemental task times, and test-path data from a branching test-flow diagram are input. Screens which identify the data to be input are displayed for ease of data entry. A hierarchical relationship between the modules in the system can be entered so that failure modes and failure rates need only be entered for the lowest-level modules. The procedure iteratively calculates maintainability and testability parameters, starting at the lowest level and using previously calculated data in the next-highest level. Fault-isolation ambiguity is automatically taken into account by ordering the modules in descending order of the total test-path/module failure rate isolated by each test path. The ordered data are used in many of the calculations of the maintainability and testability parameters.
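The ambiguity-ordering step, ranking test paths by the total failure rate each one isolates, amounts to a simple sort. A sketch with hypothetical test-flow data:

```python
def order_test_paths(paths):
    """paths: dict mapping test-path name -> list of failure rates of the
    modules that path isolates. Returns path names in descending order of
    the total failure rate isolated, mirroring the patent's ambiguity
    ordering (hypothetical data format, not the patent's)."""
    return sorted(paths, key=lambda p: sum(paths[p]), reverse=True)

# Hypothetical branching-test-flow data (failures per 1e6 hours)
paths = {
    "TP1": [50.0, 20.0],        # isolates two modules
    "TP2": [200.0],             # isolates one high-failure-rate module
    "TP3": [10.0, 5.0, 5.0],
}
ranking = order_test_paths(paths)  # TP2 first: most failure rate isolated
```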

Predicting Faults before Testing Phase using Halstead’s Metrics

Software designers are motivated to utilize off-the-shelf software components for rapid application development, and such applications are expected to have high reliability as a result of deploying trusted components. This paper applies Halstead's software science to predict faults before the testing phase for a component-based system. Halstead's software science is used to predict the faults in each individual component; based on these predicted faults, the reliability of each component is measured, so that only reliable components are reused.
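Halstead's classic delivered-bugs estimate is B = V/3000, with volume V = N log2(n) computed from operator and operand counts. Whether the paper uses exactly this variant is an assumption, and the counts below are hypothetical:

```python
import math

def halstead_volume(n1, n2, N1, N2):
    """Halstead volume V = N * log2(n), where
    n = n1 + n2 (distinct operators + distinct operands) and
    N = N1 + N2 (total operator + operand occurrences)."""
    n = n1 + n2
    N = N1 + N2
    return N * math.log2(n)

def predicted_faults(n1, n2, N1, N2):
    """Classic Halstead delivered-bugs estimate B = V / 3000."""
    return halstead_volume(n1, n2, N1, N2) / 3000.0

# Hypothetical counts for one software component
B = predicted_faults(n1=20, n2=35, N1=150, N2=130)  # roughly half a fault
```

Because the counts come from static analysis of the source, the estimate is available before any testing, which is the point the abstract makes.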

Methodology for maintainability-based risk assessment

2006

A software product spends more than 65% of its lifecycle in maintenance. Software systems with good maintainability can be easily modified to fix faults or to adapt to a changing environment. We define maintainability-based risk as the product of two factors: the probability of performing maintenance tasks and the impact of performing these tasks. In this paper, we present a methodology for assessing maintainability-based risk to account for changes in the system requirements.
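The risk definition above, probability of a maintenance task times its impact, can be sketched directly; the task list, probabilities, and impact scale below are hypothetical:

```python
def maintainability_risk(tasks):
    """Maintainability-based risk as defined above:
    sum over maintenance tasks of P(task) * impact(task).
    tasks: list of (probability, impact) pairs on a hypothetical impact scale."""
    return sum(p * i for p, i in tasks)

# Hypothetical task profile:
#   corrective (p=0.5, impact 4), adaptive (p=0.3, impact 7),
#   perfective (p=0.2, impact 2)
risk = maintainability_risk([(0.5, 4.0), (0.3, 7.0), (0.2, 2.0)])  # 4.5
```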

Complex system maintainability verification with limited samples

Microelectronics Reliability, 2011

Complex system maintainability verification is always a challenging problem due to limited sample sizes. Consequently, conducting maintenance experiments in a laboratory environment is an appropriate way to obtain data for maintainability verification. In maintenance experiments, faults are seeded in the equipment and maintenance activities are carried out to record repair time. In this process, two problems arise when laboratory experimental data (in-lab data) are used together with field data during the operational test and evaluation stage: the first is the verification of segmental maintenance data, and the second is the combination of in-lab data and field data for integrative maintainability verification. This paper proposes a methodology to address both problems. Firstly, the idea of segmentally weighted verification is adopted and the segmentally weighted verification (SWV) method is proposed to realize in-lab data verification. Secondly, a Dempster-Shafer (D-S) evidence-theory-based integrative verification method is presented to solve the problem of combining in-lab and field data. A case study concerning radar-system maintainability verification is presented as an example of the implementation of complex system maintainability verification in industry.
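The Dempster-Shafer combination step for fusing in-lab and field evidence can be sketched with Dempster's rule over a tiny frame of discernment; the mass assignments below are hypothetical, not the paper's:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment.
    Mass assigned to conflicting (disjoint) pairs is renormalized away."""
    combined = {}
    conflict = 0.0
    for fa, ma in m1.items():
        for fb, mb in m2.items():
            inter = fa & fb
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence that the MTTR requirement is met ("ok"):
# in-lab data is more confident than field data; the rest is uncommitted
lab = {frozenset({"ok"}): 0.7, frozenset({"ok", "fail"}): 0.3}
field = {frozenset({"ok"}): 0.6, frozenset({"ok", "fail"}): 0.4}
fused = dempster_combine(lab, field)  # belief in "ok" rises to 0.88
```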

Software for Testability Analysis of Aviation Systems

Advances in systems science and applications, 2021

This paper describes models, methods, and a software tool for testability analysis of aviation systems. It covers the analytical and programming aspects of calculating the main testability, reliability, and availability indices. It presents a general description of the software, the XML schema of the input data, and the technique for mapping them to the database structure. The procedure for generating the initial data for testability analysis from the line-replaceable-unit failure-modes report is described. A fault-tree model for analyzing built-in-test conformity is suggested. Markov models have been created for analyzing reliability and availability, taking into account the features of the built-in test and the specifics of aircraft operation. An approach to constructing trends in the operative availability of aviation systems over the inter-maintenance interval is proposed.

Use of Combined System Dependability and Software Reliability Growth Models

International Journal of Reliability, Quality and Safety Engineering, 2002

This paper describes how MEADEP, a system-level dependability-prediction tool, and CASRE, a software-reliability-growth prediction tool, can be used together to predict system reliability (probability of failure in a given time interval), availability (proportion of time service is available), and performability (reward-weighted availability). The system includes COTS hardware, COTS software, radar, and communication gateways. The performability metric also accounts for capacity changes as processors in a cluster fail and recover. The Littlewood-Verrall and Geometric models are used to predict reliability growth from software test data; this prediction is integrated into a system-level Markov model that incorporates hardware failures and recoveries, redundancy, coverage failures, and capacity. The results of the combined model can be used to predict the contribution of additional testing to availability and a variety of other figures of merit that support management decisions.
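For a single repairable element, the steady-state availability that such a Markov model yields reduces to the familiar MTBF/(MTBF + MTTR), i.e. mu/(lambda + mu) for failure rate lambda and repair rate mu. A minimal sketch with illustrative figures:

```python
def steady_state_availability(mtbf, mttr):
    """Two-state (up/down) Markov steady-state availability:
    A = MTBF / (MTBF + MTTR) = mu / (lambda + mu)."""
    return mtbf / (mtbf + mttr)

# Illustrative figures: MTBF 1000 h, MTTR 2 h -> about 0.998 availability
A = steady_state_availability(1000.0, 2.0)
```

The full combined model in the paper generalizes this to many states (redundancy, coverage failures, degraded capacity), but each hardware element contributes failure and repair rates of exactly this kind.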