The Challenge to Evaluation Today: My point of view

Considerations in the Implementation of Program Evaluation

1973

Given the current, apparently favorable, climate for introducing and implementing program evaluation schema, the author questions factors militating against success and variables relevant to program evaluation. He cites two types of problems: technological or instrumental problems of methodology and measurement, and problems of values. He divides the evaluation process into technological versus value aspects and, in preparation for program evaluation, cites the need for the simultaneous consideration of both technological and value aspects. Besides the issues of incredibility and confidence, there are the following difficulties in program evaluation: (1) there is no one way to perform evaluation; (2) there is no generic logical structure that will assure a right method of choices; (3) evaluation ultimately becomes judgment as long as there is no ultimate ordering of priorities; and (4) the critical element in evaluation is who has the right to decide. Some specific suggestions for program evaluation conclude the report. (Author/LAA)

Knowing What Works: Program Evaluation

New Directions for Student Services, 2000

Program evaluation is a critical step in effective program planning. The programmer needs to know whether and how students change as a result of educational experiences. Through a systematic collection of data, the programmer gathers evidence of how programs help students meet their learning objectives and fulfill institutional goals. Further, the results of program evaluation can inform the next program planning cycle.

What is Program Evaluation? Generating Knowledge for Improvement

Archival Science, 2004

In this introductory article, we discuss the nature of Program Evaluation, describing the concepts that underlie our formal and informal evaluative efforts. Program Evaluation, like any deliberate inquiry process, is about learning. The process explicates program purposes, activities, and outcomes and generates knowledge about their merit and worth. This knowledge can inform planning and lead to program improvement. We present and discuss various definitions of Program Evaluation, focusing on its purposes and uses. We also provide an overview of the inquiry process, grounding the search for merit and worth in the American Evaluation Association's Guiding Principles for Evaluators. Because program evaluations are typically conducted to inform decision makers, we discuss aspects of professional practice that contribute to the use of an evaluation.

Program Evaluation Methodologies - A Comparative Assessment

"Social Science Tribune", Special Issue “Economy and Society”, 2003

This paper presents the methodologies used to design a program evaluation. Several types of methodologies and techniques for program evaluation can be found in the literature, together comprising a valuable reference base for the evaluator. In presenting the various methodologies, we do not aim to measure their efficiency and propose the best one. On the contrary, we believe that each methodology has some advantages and, in some cases, may be the most appropriate. The aim of this paper is to summarize the different approaches to evaluation, gather the various methodologies, and determine the specific circumstances under which each methodology is the most appropriate to follow. A more ambitious aim is to propose some interesting methodology mixes, which can serve as useful guides when looking for evaluation designs that better fit the reality of a program.

John M. Owen, Program Evaluation: Forms and Approaches (3rd ed.), The Guilford Press, New York (2006), ISBN 1-59385-406-4, 298 pp. ($38.00, paperback)

Evaluation and Program Planning, 2007

time constraints (Chapter 4), their discussion of the differences did not, in this reviewer's view, justify two separate chapters. The repetition of information throughout Chapters 2-8 seems unnecessary. Part III briefly but comprehensively introduces important methodological concepts that might not be covered in more basic science-focused design and analysis texts. The authors handily summarize qualitative methods (Chapter 12) and program theory (Chapter 9) and discuss the relevance of a mixed-method design (Chapter 13). The authors also succinctly introduce several common design strategies, but there is no citation supporting the assertion that these strategies are indeed the most widely used. In sum, this reviewer believes that this book is an introductory text appropriate for the inexperienced reader. The authors introduce a number of important, practical, and organizationally complex issues that evaluators regularly face. This reviewer believes that the RWE approach (Chapters 2-8) is better explicated within the context of the methods-driven Part III text than in the stand-alone chapters of Part II, as readers with a basic research design and statistical background would most likely already be aware of the RWE constraints. The book is a relatively easy but lengthy read and offers helpful generic checklists for evaluators. Disclaimer The views expressed in this review are those of the author and do not necessarily reflect the views or official policy of the

The Program Evaluation Cycle Today

There has been strong pressure from just about every quarter over the last twenty years for higher education institutions to evaluate and improve their programs. This pressure is being exerted by several different stakeholder groups simultaneously, and it also represents the growing cumulative impact of four somewhat contradictory but powerful evaluation and improvement movements, models, and advocacy groups. Consequently, the program assessment, evaluation, and improvement cycle today is far more complex than it was fifty years ago, or even two decades ago. For both practitioners and consumers of evaluative and improvement information, it is a highly diversified and confusing landscape of seemingly different and competing advocacies, standards, foci, findings, and asserted claims. The purpose of this article, therefore, is to present and begin to elucidate a relatively simple general taxonomy that helps practitioners, consumers, and professionals make better sense of competing evaluation and improvement models, methodologies, and results today. Such a taxonomy should improve communication and understanding and provide a broad, simple, and useful framework or schema to guide more detailed learning.

Program theory evaluation: Practice, promise, and problems

New directions for …, 2000

For over thirty years now, many evaluators have recommended making explicit the underlying assumptions about how programs are expected to work (the program theory) and then using this theory to guide the evaluation. In this chapter, we provide an overview of program theory evaluation (PTE), based on our search to find the value added to evaluations that used this approach. We found fewer clear-cut, full-blown examples in practice than expected, but we found many interesting variations of PTE in practice and much to recommend it. Elements of PTE, whether or not evaluators use the terminology, are being used in a wide range of areas of concern to evaluators. Based on this review, in this chapter we discuss the practice, promise, and problems of PTE.

Twenty-First Century Program Evaluation: A Comparative Analysis of Three Frameworks-based Approaches

The International Journal of Assessment and Evaluation, 2021

In order to evaluate an academic program successfully, numerous program evaluation approaches are available at an evaluator's discretion. For this analysis, the researcher intended to comprehensively compare and contrast three program evaluation approaches that remain relevant in the new millennium: the Criticism and Connoisseurship approach, the Consumer-Oriented approach, and the Practical-Participatory Evaluation approach. The selection of these approaches was based on where and how they are positioned on Christie and Alkin's updated evaluation theory tree and classified in Stufflebeam's taxonomy of evaluation models. The in-depth analysis of crucial similarities and differences focuses on philosophical tenets; purpose/aim and frequently asked questions; the role of the evaluator; contexts of use; stakeholder involvement; preferred data collection and analysis; intended users/audience; and strengths and challenges. By understanding the contrasts between these approaches, it is hoped that program evaluators can critically appraise their approaches, practically specify the scope of selection, and be more effective and versatile in their application of alternative evaluation approaches.