An Adaptive Assessment System to Evaluate Student Ability Level

Abstract

The experience gained from years of development and use, advances in technology, and the development of authoring tools for questions and tests have resulted in a sophisticated, computer-based assessment system. However, there is still considerable room for further development, and some current ideas for it are discussed in the remainder of this work. A primary aim of assessment, both formative and summative, is to provide the information needed to improve future educational experiences, since it gives feedback on whether the course and learning objectives have been achieved to a satisfactory level. For decisions about the curriculum to be well informed, the assessment data must be accurate and relevant. Moreover, formative assessment can also help bridge the gap between assessment and learning, particularly where assessment strategies are combined with useful feedback and integrated within the learning process. These objectives are better served if adaptive testing techniques are integrated: accurate, well-fitted assessment data may improve both the curriculum and the measurement of student ability. The idea behind computerized adaptive testing (CAT) is quite straightforward: administer to each examinee only those items that are useful for determining his or her proficiency level. As a consequence, CAT is more efficient than conventional (i.e., fixed-item) tests: it provides more precise measurements for tests of the same length, or shorter tests for the same measurement precision.
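
To make the item-selection idea concrete, the following is a minimal sketch, in Python, of a CAT loop under a two-parameter logistic (2PL) IRT model: the next item administered is always the unused one with maximum Fisher information at the current ability estimate, and the estimate is updated after each response. The item bank, the simulated examinee, the grid-based maximum-likelihood estimate, and the fixed ten-item stopping rule are illustrative assumptions, not details of the system described in this chapter.

```python
# Minimal sketch of a computerized adaptive testing (CAT) loop under a
# two-parameter logistic (2PL) IRT model. Item parameters, the simulated
# examinee, and the stopping rule are illustrative assumptions.
import math
import random

def p_correct(theta, a, b):
    """2PL probability of a correct response given ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, grid=None):
    """Maximum-likelihood ability estimate over a coarse grid (simple and robust)."""
    if grid is None:
        grid = [g / 10.0 for g in range(-40, 41)]  # theta in [-4, 4]
    best_theta, best_ll = 0.0, float("-inf")
    for theta in grid:
        ll = 0.0
        for (a, b), x in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p) if x == 1 else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def run_cat(item_bank, true_theta, max_items=10):
    """Adaptively administer items: always pick the most informative unused item."""
    theta_hat = 0.0           # start from the population mean
    administered = []         # list of ((a, b), response) pairs
    remaining = list(item_bank)
    for _ in range(max_items):
        # Select the unused item with maximum information at the current estimate.
        a, b = max(remaining, key=lambda ab: item_information(theta_hat, *ab))
        remaining.remove((a, b))
        # Simulate the examinee's response from the (hypothetical) true ability.
        response = 1 if random.random() < p_correct(true_theta, a, b) else 0
        administered.append(((a, b), response))
        theta_hat = estimate_theta(administered)
    return theta_hat, administered

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical item bank: (discrimination a, difficulty b) pairs.
    bank = [(round(random.uniform(0.8, 2.0), 2), round(random.uniform(-2.5, 2.5), 2))
            for _ in range(50)]
    est, seq = run_cat(bank, true_theta=1.2)
    print(f"Administered {len(seq)} items, estimated ability = {est:.2f}")
```

Run as a script, the sketch prints the number of items administered and the final ability estimate. It illustrates why an adaptive test can match the precision of a longer fixed-item test: every item is administered where it is most informative about the current examinee.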

Author information

Authors and Affiliations

  1. University of Bologna, Italy
    Antonella Carbonaro, Giorgio Casadei & Simone Riccucci

Editor information

Editors and Affiliations

  1. University of Portsmouth, UK
    Max Bramer
  2. University of Belgrade, Serbia and Montenegro
    Vladan Devedzic

Rights and permissions

© 2004 Springer Science + Business Media, Inc.

About this chapter

Cite this chapter

Carbonaro, A., Casadei, G., Riccucci, S. (2004). An Adaptive Assessment System to Evaluate Student Ability Level. In: Bramer, M., Devedzic, V. (eds) Artificial Intelligence Applications and Innovations. AIAI 2004. IFIP International Federation for Information Processing, vol 154. Springer, Boston, MA. https://doi.org/10.1007/1-4020-8151-0_23
