Toward an Evidence-Based Approach to Building Evaluation Capacity

Understanding and Measuring Evaluation Capacity: A Model and Instrument Validation Study

American Journal of Evaluation, 2013

This study describes the development and validation of the Evaluation Capacity Assessment Instrument (ECAI), a measure based on a synthesis model of evaluation capacity and designed to assess evaluation capacity among staff of nonprofit organizations. One hundred sixty-nine staff members of nonprofit organizations completed the ECAI. The 68-item measure assessed participants’ perceptions of individual and organizational predictors of two evaluation capacity outcomes: mainstreaming and use of evaluation findings. Confirmatory factor analysis and internal consistency results support the inclusion of the items and factors measured by the ECAI. Moreover, structural equation modeling results support the synthesis model and its depiction of relationships among evaluation capacity predictors and outcomes. We discuss the implications of using a validated model and instrument in evaluation capacity building research and practice.
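
The psychometric steps named in this abstract (confirmatory factor analysis, internal consistency, structural equation modeling) are standard instrument-validation techniques. As a minimal, hedged sketch of the internal-consistency step only, the Python fragment below computes Cronbach's alpha for a hypothetical four-item subscale; the item names, sample size, and simulated responses are invented for illustration and are not the ECAI's actual items or data. The CFA and SEM steps would typically use a dedicated package such as semopy (Python) or lavaan (R).

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Simulated responses: 169 respondents x 4 Likert items (1-5), loosely
    # mirroring the study's sample size; purely illustrative data.
    rng = np.random.default_rng(0)
    base = rng.integers(1, 6, size=(169, 1))
    noise = rng.integers(-1, 2, size=(169, 4))
    subscale = pd.DataFrame(np.clip(base + noise, 1, 5),
                            columns=[f"item_{i}" for i in range(1, 5)])
    print(f"Cronbach's alpha = {cronbach_alpha(subscale):.2f}")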

Narrative review of strategies by organizations for building evaluation capacity

Evaluation and Program Planning, 2016

Program evaluation is an important source of information that helps organizations make "evidence-informed" decisions about program planning and development. The objectives of this study were to identify evaluated strategies used by organizations and program developers to build the program evaluation capacity of their workforce, and to describe success factors and lessons learned. Common elements of successful evaluation capacity building (ECB) include a tailored strategy based on needs assessment, an organizational commitment to evaluation and ECB, experiential learning, training with a practical element, and some form of ongoing technical support within the workplace. ECB is a relatively new field of endeavor, and while existing studies in ECB are characterized by lower levels of evidence, they suggest that the most successful approaches to ECB are likely to be multifaceted. To build the level of evidence in this field, more rigorous study designs need to be implemented.

A Research Synthesis of the Evaluation Capacity Building Literature

American Journal of Evaluation, 2012

The continually growing demand for program results has produced an increased need for evaluation capacity building (ECB). The Integrative ECB Model was developed to integrate concepts from the existing ECB theory literature and to structure a synthesis of the empirical ECB literature. The study used a broad-based research synthesis method with systematic decision rules and demonstrated the viability of the method for producing a reliable analysis of disparate data from a variety of designs. There was a high degree of consistency between the empirical and the theoretical literature in terms of reported strategies and outcomes. Reported outcomes included attitudes, knowledge, and behaviors at the individual level, and practices, leadership, culture, mainstreaming, and resources at the organizational level. Collaborative processes and programmatic outcomes emerged as important issues for ECB models and practice. The consistency between the empirical and theoretical literature indicates that the field is ready to develop common measures, use stronger designs, and report more systematically. This synthesis provides an overview of existing data and an empirical basis for refining strategies and common measures to enhance the research and practice of ECB and to achieve ECB and programmatic goals and outcomes.

Evaluator and Program Manager Perceptions of Evaluation Capacity and Evaluation Practice

American Journal of Evaluation, 2016

The evaluation community has shown an increased emphasis on and interest in evaluation capacity building in recent years. A need currently exists to better understand how to measure evaluation capacity and its potential outcomes. In this study, we distributed an online questionnaire to managers and evaluation points of contact working in grantee programs funded by four large federal public health programs. The goal of the research was to investigate the extent to which assessments of evaluation capacity and evaluation practice are similar or different for individuals representing the same program. The findings revealed both similarities and differences within matched respondent pairs, indicating that whom one asks to rate an organization's evaluation capacity matters.
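
As an illustration of the matched-pair comparison this abstract describes, the sketch below pairs a program manager's rating with the evaluation point of contact's rating for the same program and summarizes agreement. The program labels, ratings, and 1-5 scale are invented for the example; the study's actual questionnaire and analysis are not reproduced here.

    import pandas as pd

    # Hypothetical matched ratings: one manager and one evaluation point of
    # contact (POC) per program, each rating evaluation capacity on a 1-5 scale.
    pairs = pd.DataFrame({
        "program": ["A", "B", "C", "D", "E"],
        "manager_rating": [4, 3, 5, 2, 4],
        "poc_rating": [4, 5, 3, 2, 3],
    })

    pairs["difference"] = pairs["manager_rating"] - pairs["poc_rating"]
    exact = (pairs["difference"] == 0).mean()
    print(pairs)
    print(f"Exact agreement in {exact:.0%} of pairs; "
          f"mean absolute difference = {pairs['difference'].abs().mean():.2f}")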

A Multidisciplinary Model of Evaluation Capacity Building

American Journal of Evaluation, 2008

Evaluation capacity building (ECB) has become a hot topic of conversation, activity, and study within the evaluation field. Seeking to enhance stakeholders' understanding of evaluation concepts and practices, and in an effort to create evaluation cultures, organizations have been implementing a variety of strategies to help their members learn from and about evaluation. Though there is a great deal of ECB occurring in a wide range of organizations, there is no overarching conceptual model that describes how ECB should be designed and implemented to maximize its success. If ECB is about learning how to think evaluatively and how to engage in sound evaluation practice, then something is missing in our work. The purpose of this article is to describe a model of ECB that may be used for designing and implementing capacity building activities and processes as well as for conducting empirical research on this topic.

The Five Cs for Innovating in Evaluation Capacity Building: Lessons from the Field

2013

Innovation is essential in addressing complex evaluation capacity building (ECB) efforts that involve a host of interacting, nonlinear, adaptive, and dynamic individual- and organizational-level factors. This article highlights five key ingredients for fostering innovation in ECB, based on the evaluation capacity building efforts of the Ontario Centre of Excellence for Child and Youth Mental Health. Over the past five years, 87 organizations have participated in an integrated ECB program combining funding, training, and coaching support. The five ingredients are curiosity, courage, communication, commitment, and connection.

Relationships between Quantitative Measures of Evaluation Plan and Program Modeling Quality and a Qualitative Measure of Participant Perceptions of an Evaluation Capacity Building Approach

Despite a heightened emphasis on building evaluation capacity and evaluation quality, there is still a lack of tools for identifying high-quality evaluation. In the context of a longitudinal study testing the Systems Evaluation Protocol (SEP), rubrics were designed to assess the quality of evaluation plans and models. The rubrics were tested for reliability and internal consistency and used to calculate quantitative quality scores. Qualitative interview data were also collected and analyzed using a priori codes. A mixed methods approach was used to synthesize the quantitative and qualitative data and explore trends. Consistencies between the qualitative and quantitative data were found for attitude and capacity, while disconnects were found for knowledge, cyberinfrastructure, time, and quality. The approach to data integration demonstrated in this paper represents a novel way to tap the generative potential of the divergence that arises when different methods produce contradictory results.
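
One way to make the rubric-reliability step concrete: inter-rater agreement on rubric scores is commonly summarized with Cohen's kappa. The sketch below is illustrative only; the two raters' scores and the 0-2 ordinal scale are invented and do not come from the SEP study.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical rubric scores (0 = absent, 1 = partial, 2 = fully met)
    # assigned independently by two raters to ten evaluation plans.
    rater_1 = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
    rater_2 = [2, 1, 1, 0, 1, 2, 2, 2, 0, 2]

    # Quadratic weighting credits near-misses on the ordinal scale.
    kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
    print(f"Quadratic-weighted kappa = {kappa:.2f}")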