ESSE: an expert system for software evaluation

Abstract

Software evaluation is a particularly difficult software engineering process, in which many different, often contradictory, criteria must be considered to reach a decision. This paper presents ESSE, a prototype expert system for software evaluation that embodies various aspects of the Multiple-Criteria Decision Aid (MCDA) methodology. Its main features are flexibility in problem modeling and built-in knowledge about software problem solving and software attribute assessment. Evaluation problems are modeled around top-level software attributes, such as quality and cost. Expert assistants guide the evaluator in feeding values to the decision model. ESSE covers all important dimensions of software evaluation through the integration of different technologies.
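To make the MCDA idea concrete, the following is a minimal illustrative sketch, not the actual ESSE decision model: each candidate product is scored on top-level attributes such as quality and cost, and a simple weighted sum ranks the alternatives. The attribute names, weights, and candidate scores are hypothetical.

```python
# Minimal multi-criteria aggregation sketch (hypothetical data, not ESSE's model).
# Each criterion score is normalized to [0, 1] with higher = better; for cost,
# the score is assumed already inverted so that 1.0 means cheapest.

def weighted_score(scores, weights):
    """Aggregate per-criterion scores with weights that sum to 1."""
    return sum(weights[c] * scores[c] for c in weights)

def rank(alternatives, weights):
    """Return alternative names ordered from best to worst aggregate score."""
    return sorted(alternatives,
                  key=lambda name: weighted_score(alternatives[name], weights),
                  reverse=True)

# Hypothetical evaluation of two candidate software products.
weights = {"quality": 0.6, "cost": 0.4}
candidates = {
    "product_A": {"quality": 0.9, "cost": 0.5},  # 0.6*0.9 + 0.4*0.5 = 0.74
    "product_B": {"quality": 0.6, "cost": 0.9},  # 0.6*0.6 + 0.4*0.9 = 0.72
}
ranking = rank(candidates, weights)  # product_A edges out product_B
```

A weighted sum is only one aggregation scheme; MCDA methods such as ELECTRE instead build outranking relations between alternatives, which is closer to the machinery ESSE draws on.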

