ESSE: An Expert System for Software Evaluation (I. Vlahavas)

Solving software evaluation problems is a particularly difficult software engineering process, and many different, often contradictory, criteria must be considered in order to reach a decision. This paper presents ESSE, a prototype expert system for software evaluation that embodies various aspects of the Multiple-Criteria Decision Aid (MCDA) methodology. Its main features are its flexibility in problem modeling and its built-in knowledge about software problem solving and software attribute assessment. Evaluation problems are modeled around top-level software attributes, such as quality and cost. Expert assistants guide the evaluator in feeding values to the decision model. ESSE covers all important dimensions of software evaluation through the integration of different technologies.
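
The attribute-centered decision model described above can be sketched as a weighted-additive MCDA score; the attribute names, weights, and candidate scores below are illustrative assumptions, not ESSE's actual model.

```python
# Minimal weighted-additive MCDA sketch: each candidate gets per-attribute
# scores in [0, 1], and top-level attribute weights combine them into one
# figure of merit. All names and numbers are invented for illustration.
def weighted_score(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[attr] * weights[attr] for attr in weights)

weights = {"quality": 0.5, "cost": 0.3, "vendor_support": 0.2}
candidate_a = {"quality": 0.8, "cost": 0.6, "vendor_support": 0.7}
candidate_b = {"quality": 0.6, "cost": 0.9, "vendor_support": 0.5}

print(round(weighted_score(candidate_a, weights), 2))  # 0.72
print(round(weighted_score(candidate_b, weights), 2))  # 0.67
```

A real evaluation would also need the expert assistance ESSE provides for eliciting the per-attribute values in the first place.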

Knowledge Based Evaluation of Software Systems: A Case Study

Information and Software Technology, 2000

Solving software evaluation problems is a particularly difficult software engineering process and many contradictory criteria must be considered to reach a decision. Nowadays, the way that decision support techniques are applied suffers from a number of severe problems, such as naive interpretation of sophisticated methods and generation of counter-intuitive, and therefore most probably erroneous, results. In this paper we identify some common flaws in decision support for software evaluations.

Knowledge Based Evaluation of Software Systems: A Case Study (Ioannis Stamelos, Ioannis Vlahavas, Ioannis Refanidis)

Software Quality Assurance and Expert Systems

There are several models for software quality assurance, such as Capability Maturity Model Integration (CMMI), ISO/IEC 9000-3, and Software Process Improvement and Capability dEtermination (SPICE). However, the proper selection and implementation of these models is often a difficult and costly task for software companies, especially small and medium-sized ones. Expert system technology is beginning to play an important role in quality management and will become more common in the future. This paper discusses SQA models and expert system technology, illustrating how an expert system can be used to automate the selection and implementation of such SQA models. The paper provides a detailed comparative study of eight models, TQM, CMMI, ISO, SIX-SIGMA, BOOTSTRAP, TRILLIUM, TICKIT, and SPICE, according to a proposed framework of 30 characteristics in 5 categories. The results of this study can be used as a first step in building an expert system for software quality assurance that helps software-producing organizations select the most suitable models to adopt according to their properties and needs.

Expert Rating Based Software Quality Evaluation

2013

Assessing the quality of software before deployment is essential. Quality checks can be performed at any stage of development, from the initial to the final phases. The software should be tested rigorously in order to avoid future problems. ISO/IEC 9126-1 defines 6 criteria, along with 27 sub-criteria, for determining software quality. The main challenge faced by Software Quality Assurance (SQA) is to apply comprehensive techniques and decide whether the software meets good quality standards. The proposed approach evaluates the software using ratings given by a group of experts. The ratings are direct ratings on a scale of 1 to 9. We calculate the arithmetic mean over all the experts to find the level of quality. We also describe the calculation of low-level metrics for each criterion. This can help developers decide whether to go ahead or make changes in the faulty areas of the software.
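
The aggregation step described above is an arithmetic mean over direct expert ratings on the 1 to 9 scale; a minimal sketch, with an invented criterion name and ratings:

```python
# Direct-rating aggregation as described in the abstract: each expert rates
# a criterion on the 1..9 scale, and the arithmetic mean gives its quality
# level. The criterion and the ratings below are illustrative.
def mean_rating(ratings):
    if not all(1 <= r <= 9 for r in ratings):
        raise ValueError("ratings must be on the 1..9 scale")
    return sum(ratings) / len(ratings)

usability = [7, 8, 6, 9, 7]    # one direct rating per expert
print(mean_rating(usability))  # 7.4
```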

Reasoning about software using metrics and expert opinion

Innovations in Systems and Software Engineering, 2007

When comparing software programs on the basis of more than one metric, a difficulty arises when the metrics are contradictory or when there are no standard acceptance thresholds. An appealing solution in such cases is to incorporate expert opinion to resolve the inconsistencies. A rigorous framework, however, is essential when fusing metrics and expert opinion in this decision-making process. Fortunately, the Analytic Hierarchy Process (AHP) can be used to facilitate rigorous decision-making in this particular problem. In this work, a combination of expert opinion and tool-collected measures is used to reason about software programs using AHP. The methodology employed can be adapted to other decision-making problems in software engineering where both metrics data and expert opinion are available, some of which are described.
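
As a rough sketch of the AHP step, criteria weights can be derived from a pairwise-comparison matrix; the geometric-mean approximation below, and the metrics and judgments in it, are illustrative assumptions rather than the paper's exact setup.

```python
from math import prod

# AHP sketch: derive criteria weights from a pairwise-comparison matrix
# using the row geometric-mean approximation of the principal eigenvector.
# The three criteria and the Saaty-scale judgments are invented examples.
def ahp_weights(pairwise):
    """pairwise[i][j] says how much more important criterion i is than j."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# e.g. defect density vs. a coupling metric vs. expert-judged maintainability
judgments = [
    [1.0, 3.0, 9.0],
    [1 / 3, 1.0, 3.0],
    [1 / 9, 1 / 3, 1.0],
]
weights = ahp_weights(judgments)
print([round(w, 3) for w in weights])  # [0.692, 0.231, 0.077]
```

For a perfectly consistent matrix like this one, the geometric-mean weights coincide with the eigenvector weights; inconsistent expert judgments would additionally call for a consistency check.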

A Technique and Tool for Software Evaluation

Software evaluation is the problem of determining the extent to which a software product satisfies a set of requirements. We create quantitative models for software evaluation using a general system evaluation method called LSP (Logic Scoring of Preference). In this paper we define and classify software evaluation problems, overview the LSP method, and present design concepts, implementation, and use of a new LSP-based tool for software evaluation. Our tool is called the Integrated System Evaluation Environment (ISEE). ISEE is suitable for rapid development of software evaluation models and their use for evaluation, comparison, and selection of complex software systems.
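
LSP aggregates elementary preferences with weighted power means whose exponent controls the andness/orness of the operator; the sketch below, including its weights and exponents, is an illustrative assumption rather than ISEE's implementation.

```python
from math import prod

# LSP-style aggregation sketch: elementary preferences e_i in [0, 1] are
# combined with a weighted power mean; the exponent r tunes the operator
# from conjunctive (r << 0) to disjunctive (r >> 0). Values are invented.
def lsp_aggregate(prefs, weights, r):
    if abs(r) < 1e-12:  # the r -> 0 limit is the weighted geometric mean
        return prod(e ** w for e, w in zip(prefs, weights))
    return sum(w * e ** r for e, w in zip(prefs, weights)) ** (1.0 / r)

prefs = [0.9, 0.6, 0.8]    # elementary preference scores
weights = [0.5, 0.3, 0.2]  # relative importance, summing to 1

print(round(lsp_aggregate(prefs, weights, 1.0), 3))   # 0.79  (arithmetic mean)
print(round(lsp_aggregate(prefs, weights, -1.0), 3))  # 0.766 (harmonic, more conjunctive)
```

Lower exponents penalize a weak score on any single requirement more heavily, which is how LSP models mandatory requirements.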

Expert Rating Based Software Quality Evaluation (IJERT)

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/expert-rating-based-software-quality-evaluation
https://www.ijert.org/research/expert-rating-based-software-quality-evaluation-IJERTV2IS100548.pdf

Hybrid assessment method for software engineering decisions

Decision Support Systems, 2011

During software development, many decisions need to be made to guarantee satisfaction of the stakeholders' requirements and goals. Full satisfaction of all of these requirements and goals may not be possible, requiring decisions over conflicting human interests as well as technological alternatives, with an impact on the quality and cost of the final solution. This work aims at assessing the suitability of multi-criteria decision making (MCDM) methods to support software engineers' decisions. To fulfil this aim, a Hybrid Assessment Method (HAM) is proposed, which gives its user the ability to perceive the influence different decisions may have on the final result. HAM is a simple and efficient method that combines a single pairwise-comparison decision matrix (to determine the weights of the criteria) with a classical weighted decision matrix (to prioritize the alternatives). To avoid consistency problems regarding the scale and the prioritization method, HAM uses a geometric scale for assessing the criteria and the geometric mean for determining the alternatives' ratings.
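
The two matrices HAM combines can be sketched as follows; the geometric-scale judgments, criteria, and alternative scores are illustrative assumptions, not taken from the paper.

```python
from math import prod

# HAM-style sketch: (1) criteria weights from one pairwise-comparison matrix
# on a geometric scale, aggregated with the geometric mean; (2) a classical
# weighted decision matrix to prioritize the alternatives. All numbers are
# invented for illustration.
def criteria_weights(pairwise):
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def rank_alternatives(scores, weights):
    """scores maps an alternative to its per-criterion scores in [0, 1]."""
    totals = {alt: sum(w * v for w, v in zip(weights, vals))
              for alt, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# geometric scale: pairwise judgments are powers of 2
pairwise = [[1.0, 2.0, 4.0],
            [0.5, 1.0, 2.0],
            [0.25, 0.5, 1.0]]
weights = criteria_weights(pairwise)       # approx. [0.571, 0.286, 0.143]
scores = {"tool_a": [0.9, 0.5, 0.6],
          "tool_b": [0.6, 0.9, 0.8]}
print(rank_alternatives(scores, weights))  # ['tool_a', 'tool_b']
```

Using a geometric scale with the geometric mean keeps the derived weights consistent under transitive judgments, which is the consistency property the abstract highlights.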