Program Evaluation and Accountability. Searchlight 7+: An Information Analysis Paper, November 1966 through December 1978

Program Evaluation Policy, Practice, and the Use of Results

Educational Research Quarterly, 2016

This scholarly commentary addresses the basic questions that underlie program evaluation policy and practice in education, as well as the conditions that must be met for the evaluation evidence to be used. The evaluation questions concern evidence on the nature and severity of problems, the programs deployed to address the issues, the programs’ relative effects and cost-effectiveness, and the accumulation of evidence. The basic conditions for the use of evidence include potential users’ awareness of the evidence, their understanding of it, as well as their capacity and incentives for its use. Examples are drawn from studies conducted in the United States and other countries, focusing on evaluation methods that address the questions above.

A Comparative Analysis of the Efficacy of Three Program-Evaluation Models: A Review of Their Implications for Educational Programs

Humanities & Social Sciences Reviews

Purpose of the study: This article reviews the comparative efficacy and the theoretical and practical background of three program evaluation models (Stufflebeam's CIPP model, Kirkpatrick's model, and outcome-based evaluation models) and their implications for educational programs. The article discusses the strengths and limitations of the three evaluation models. Methodology: Peer-reviewed and scholarly journals were searched for articles related to program evaluation models and their importance. Keywords included 'program evaluation', 'assessment', 'CIPP model', 'evaluation of educational programs', 'outcome-based model', and 'planning'. Articles on Stufflebeam's CIPP model, Kirkpatrick's model, and outcome-based evaluation models were particularly focused on because the review aimed at analysing these three models. The strengths and inadequacies of the three models were weighed and presented. Main Findings: The three models, the outcome-based evaluation model, the Kirkpatrick model, and the CIPP eval...

Concluding Thoughts and Reflections on the Special Issue on Program Evaluation Standards

Journal of MultiDisciplinary Evaluation

This paper addresses the relevance of the current edition of the Program Evaluation Standards published by the Joint Committee on Standards for Educational Evaluation. It takes up concerns about the nature of the standards, their current applicability to practice, and their comprehensiveness and completeness. The paper concludes that the standards are applicable, relevant, complete, and comprehensive.

Developing a Guide for Authors of Evaluation Reports of Educational Programs. Final Report

1969

A complete description of the development of a guide to help authors improve the quality of descriptive evaluation reports of federally funded programs is provided. Improvements in reports would be particularly useful to planners who must decide if and how programs should be modified. The methods and procedures employed in the development of the guide, from the initial version through field tests, which employed local program evaluators, to subsequent revisions, are discussed in detail. The field test samples, embracing a wide variety of programs, locales, administrative units, and expertise, are described and the participants listed. Recommendations for the use of the guide are included, and it is suggested that a complementary planning guide is needed. (AIR-855-10/69-FR)

An Appraisal of Educational Program Evaluations: Federal, State, and Local Agencies

This report concerns evaluation of federally supported educational programs at the national, state, and local levels. It was undertaken in response to Section 1526 of the Education Amendments of 1978, which requires that the Commissioner of Education conduct a comprehensive study of evaluation practices and procedures. Two broad sources of information were used: contemporary research and development by other researchers, and direct investigations by the project staff. Introductory material is presented in the first chapter. Chapter 2 considers the rationale, evidence, and opinion bearing on why evaluations are done; the confusion and argument engendered by general demands for evaluation; and the audiences to whom evaluations are addressed. Chapter 3 addresses the question of how evaluations are executed. Chapter 4 covers the organization of evaluations and the capabilities of evaluators, and chapter 5 considers the quality of evaluations. The way evaluation results are used is considered in chapter 6, and case studies on the use of evaluative information are included. Chapter 7 covers recommendations. An extensive bibliography concludes the report. Legislative and management background, and research strategies, are contained in the appendixes. (Author/MLR)

What is Program Evaluation? Generating Knowledge for Improvement

Archival Science, 2004

In this introductory article, we discuss the nature of Program Evaluation, describing the concepts that underlie our formal and informal evaluative efforts. Program Evaluation, like any deliberate inquiry process, is about learning. The process explicates program purposes, activities, and outcomes and generates knowledge about their merit and worth. This knowledge can inform planning and lead to program improvement. We present and discuss various definitions of Program Evaluation, focusing on its purposes and uses. We also provide an overview of the inquiry process, grounding the search for merit and worth in the American Evaluation Association's Guiding Principles for Evaluators. Because program evaluations are typically conducted to inform decision makers, we discuss aspects of professional practice that contribute to the use of an evaluation.

Program evaluation primer: A review of three evaluations

2015

Program evaluation is an essential process for program assessment and improvement. This paper reviews three published evaluations, of an HIV-contraction reduction program, of teachers' perceptions of a newly adopted supplemental reading program, and of a seniors farmers' market nutrition education program, and considers important aspects of program evaluation more broadly. Few human resource development (HRD) scholars, professionals, and practitioners would argue that the sub-field of program evaluation is not essential to the learning and performance goals of the HRD profession. Program evaluation, a "tool used to assess the implementation and outcomes of a program, to increase a program's efficiency and impact over time, and to demonstrate accountability" (MacDonald et al., 2001, p. 1), is an essential process for program assessment and improvement. Program evaluation (a) establishes program effectiveness, (b) builds accountability among program facilitators and other stakeholders, (c) ...

Considerations in the Implementation of Program Evaluation

1973

Given the current, apparently favorable, climate for introducing and implementing program evaluation schemas, the author questions factors militating against success and variables relevant to program evaluation. He cites two types of problems: technological or instrumental problems of methodology and measurement, and value problems. He divides the evaluation process into technological versus value aspects and, in preparation for program evaluation, cites the need for the simultaneous consideration of both technological and value aspects. Besides the issues of credibility and confidence, there are the following difficulties in program evaluation: (1) there is no one way to perform evaluation; (2) there is no generic logical structure that will assure a right method of choice; (3) evaluation ultimately becomes judgment as long as there is no ultimate ordering of priorities; and (4) the critical element in evaluation is who has the right to decide. Some specific suggestions for program evaluation conclude the report. (Author/LAA)