
Teaching Evaluation: A Student-Run Consulting Firm

Applied Research Consultants (ARC) is a graduate student-run consulting firm that gives students experience in evaluation and consultation. This overview of the program is offered as a model of a graduate training practicum that could be adopted by similar programs or aid in the development of new ones. Key operational aspects are described in detail to help departments in various higher education programs implement a comparable practicum. A consulting practicum for graduate students is rare in advanced education, yet it is badly needed to develop students into professionals in the field of evaluation (Belli, 2001; Morris, 1992; Trevisan, 2002). According to Shadish, Cook, and Leviton, we can evaluate the effectiveness and efficiency of anything, "including evaluation itself" (1991, p. 19). Graduate students need the opportunity to participate in evaluative work in order to gain valuable knowledge and real-life experience. Cole (1995) c...

The Role of Institutional Research in Student Evaluations of Teaching. AIR 1998 Annual Forum Paper

1998

This paper examines the role played by an office of institutional research in developing a new student evaluation of teaching protocol. At Southeastern Louisiana University, a comprehensive 4-year public institution, the administration appointed a campus-wide committee to study the student evaluations of teaching and to make recommendations for improvement. Nine aspects of the process were examined, including the philosophy behind the evaluations; prior years' evaluations; computerized data reporting; confidentiality issues; administrative and personal use of results; whether evaluations were mandatory; and evaluation of nontraditional classes. In fall 1995 a new evaluation instrument was pilot-tested, refined, and administered to 43 class sections (n=1,100), after which interviews were conducted with five classes. Following distribution of the pilot data, six faculty members were interviewed in depth. The final report was presented to the faculty committee charged with developing the final instrument, a task supported by analyses provided by the institutional research department. In fall 1997, implementation of the student evaluations of teaching program was transferred to the institutional research office, thus ensuring continuing quality of the instrument. Appended are the pilot-test questionnaire, a response form, interview protocols, suggestions for evaluating pilot data, and copies of the current instrument.

Assessment, Evaluation, and Research

New Directions for Higher Education, 2016

The Assessment, Evaluation, and Research (AER) competency area focuses on the ability to design, conduct, critique, and use various AER methodologies and the results obtained from them; to utilize AER processes and the results obtained from them to inform practice; and to shape the political and ethical climate surrounding AER processes and uses in higher education. One should be able to:

Basic
- Differentiate among assessment, program review, evaluation, planning, and research, as well as the methodologies appropriate to each.
- Select AER methodologies, designs, and tools that fit with research and evaluation questions and with assessment and review purposes.
- Facilitate appropriate data collection for system/department-wide assessment and evaluation efforts using current technology and methods.
- Effectively articulate, interpret, and apply results of assessment, evaluation, and research reports and studies, including professional literature.
- Assess the legitimacy, trustworthiness, and/or validity of studies of various methodological designs (qualitative, quantitative, and mixed methods, among others).
- Consider the strengths and limitations of various methodological AER approaches in the application of findings to practice in diverse institutional settings and with diverse student populations.
- Explain the necessity of following institutional and divisional procedures and policies (e.g., IRB approval, informed consent) with regard to ethical assessment, evaluation, and other research activities.
- Ensure that all communications of AER results are accurate, responsible, and effective.
- Identify the political and educational sensitivity of raw and partially processed data and AER results, handling them with appropriate confidentiality and deference to organizational hierarchies.
- Design program and learning outcomes that are appropriately clear, specific, and measurable, that are informed by theoretical frameworks such as Bloom's taxonomy, and that align with organizational outcomes, goals, and values.
- Explain to students and colleagues the relationship of AER processes to learning outcomes and goals.

Intermediate
- Design ongoing and periodic data collection efforts such that they are sustainable, rigorous, as unobtrusive as possible, and technologically current.
- Effectively manage, align, and guide the utilization of AER reports and

P.B. Stark and R. Freishtat, An Evaluation of Course Evaluations

2014

Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.
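To illustrate the statistical point about reporting response rates and score distributions rather than averaging categorical responses, the minimal sketch below is one way such a summary could be computed; it is not code from the article, and the data and function name are invented for the example.

```python
from collections import Counter

def summarize_ratings(responses, enrolled):
    """Summarize categorical teaching ratings without averaging them.

    responses: list of ratings on a 1-7 scale, one per respondent
    enrolled:  number of students enrolled in the section
    Returns the response rate and the full score distribution,
    i.e., the information the abstract says should be reported
    instead of a mean of category labels.
    """
    counts = Counter(responses)
    distribution = {score: counts.get(score, 0) / len(responses)
                    for score in range(1, 8)}
    response_rate = len(responses) / enrolled
    return response_rate, distribution

# Hypothetical section: 23 of 40 enrolled students responded.
ratings = [7, 7, 6, 6, 6, 5, 5, 5, 5, 4, 4, 4, 3, 3, 2, 2, 1, 7, 6, 5, 4, 3, 2]
rate, dist = summarize_ratings(ratings, enrolled=40)
print(f"response rate: {rate:.0%}")
for score in range(1, 8):
    print(f"  rating {score}: {dist[score]:.0%} of respondents")
```

In keeping with the abstract's conclusion, such a summary would be reported alongside, not in place of, other sources and methods for evaluating teaching.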

Teaching Program Evaluation: Three Selected Pillars of Pedagogy

American Journal of Evaluation, 2008

Two challenges often associated with teaching program evaluation at the graduate level are the need to incorporate practical skills development and the constraint of a single-semester course offering. The existing literature provides some information concerning a practical application component; however, there is almost no discussion of pedagogy or, more specifically, of the selection of teaching strategies for a program evaluation course. This article explains and analyzes a pedagogical framework that has been used to teach program evaluation at a research university. The concepts and practices described may help those who plan to teach program evaluation and other courses involving practical application components, particularly in an adult learner context. The article consists of four sections: (a) analysis of the pedagogical framework, (b) a brief overview of the course, (c) an explanation of how the pedagogy was integrated into the course, and (d) conclusions.

A Mentoring Approach to the One-Year Evaluation Course

This article presents the conceptual scheme for a one-year evaluation course. The scheme is based on the authors' experience developing a single-year evaluation course over a period of four years. The task of the course is to teach the competencies required to conduct evaluations that provide the sense-making needed for informed decision-making. Such competencies include eliciting, conceptualizing, and providing information, as well as managing the interactions, processes, and experiences that such evaluations require. The authors have found that these competencies rest on four kinds of knowledge: theoretical knowledge, methodological knowledge, conceptualization of practice (including converting tacit knowledge to explicit knowledge), and practical personal knowledge. These knowledge categories entail a dialogue between theory and practice. Hence, the authors have constructed a conceptual setting for teaching evaluation against the background of mentoring, a system that combines theory and practice. The article presents each kind of knowledge, explaining its role in evaluation, the type of learning needed to master it, and the format adopted in the course for teaching it, including the dilemmas that arose in the course of study and practice. Solutions for such dilemmas are offered and discussed. The article ends with a discussion of student feedback on the course.

Keywords: program evaluation, teaching evaluation, knowledge structure, evaluation knowledge