Educating Applied Assessment Professionals at the Masters Level
Related papers
Design, Results, and Analysis of Assessment Components In a
The case for assessment of college writing programs no longer needs to be made. Although none of us would have chosen them, we have all come to accept the truth of Roger Debreceny's words: the "free ride" for America's colleges and universities is indeed over (1). Every writing program faces the difficulty of selecting the most effective means of evaluation for its particular circumstances. Key concerns include how appropriately, practically, and cost-effectively various assessment tools address this problem.
Using An Assessment Center As a Developmental Tool for Graduate Students: A Demonstration
Journal of Social Behavior & Personality, 1997
Assessment centers are widely used in industrial settings to select, promote, and provide developmental feedback for executive-level talent. This paper describes the development and implementation of an assessment center (AC) administered to graduate students in a two-year industrial/organizational (I/O) psychology program as part of a practicum course requirement. To develop the AC, graduates of a master's program in I/O psychology and several of their supervisors were interviewed about the core process skills necessary for success in organizational settings. Interviews were content coded and six competencies identified: written communication; oral communication; problem solving; organizing; interpersonal; and organizational survival skills. Four assessment center exercises were developed to measure these skills: a leaderless group discussion, an oral presentation, an in-basket, and a role play. Assessors rate graduate students in the program on the six competencies. Written feedback on their performance in the assessment center is provided to the students for their use in career development planning. Benefits include rich developmental feedback for the graduate students and a fresh view for the I/O faculty of the requirements of the MS I/O program.

Since their first use in military applications (U.S. OSS, 1948), assessment centers have become widely used in organizational settings to select, promote, and provide developmental feedback to executive-level talent (Byham, 1971; Gatewood & Feild, 1994). An assessment center may generally be described as "a variety of testing techniques designed to allow candidates to demonstrate, under standardized conditions, the skills and abilities most essential for success in a given job" (Joiner, 1984, p. 437). To make these assessments, the traditional assessment center…
A Portrait of the Assessment Professional in the United States: Results from a National Survey
National Institute for Learning Outcomes Assessment: Occasional Paper, 2018
While the systematic assessment of student learning has been undertaken since the 1980s, scant research is available that outlines a profile of assessment professionals or the roles and responsibilities these individuals perform in institutions of higher education. This study presents the results of the Assessment Professional Survey (n=305). By examining the demographics, range of roles and responsibilities, types of methodological skills, and service contribution of these professionals, this study provides the first national portrait of the assessment professional. Findings are valuable for (a) the field of assessment, as this represents the first systematic attempt to create a profile of the assessment professional; (b) institutions, as they work to provide authentic evidence of student learning; (c) assessment professionals, to understand how they fit within their own institutions and in relationship to other assessment professionals; and (d) new entrants to the assessment profession, to position themselves in the assessment job market.
2013
This chapter focuses on the assessment of learning. Assessing learning has taken many forms, and understanding its history helps to explain the educational systems currently in place. Defining the term "assessment" can be difficult because some definitions focus on the diagnostic approaches that learning specialists use to assess learning disabilities or differences, while others take it to mean evaluation. More recently, assessment has taken on an accountability emphasis. Assessment of learning may mean standardized testing that is mandated by the state. In higher education, assessment has a distinctly bureaucratic flavor, as it is required for accreditation. With assessment taking on this administrative focus, some of its value for improving the learning/teaching process is lost. This chapter addresses the history of assessment in education and provides examples of authentic assessment tools. Future trends in assessment are also presented.
Assessment: Constructing and Evaluating
Drawing on behavioural educational theories, higher learning institutions have used assessment to measure the quality or success of a taught course and to evaluate whether students have achieved the minimum standard acceptable for the award of a degree (Ellery, 2008). An assessment can be conducted by means of paper-and-pencil tests, presentations, lab work, case studies, essays, multiple-choice questions, true/false statements, short essays, and so on. During the semester, students may be tested to improve their learning experience; this is called a formative test (continuous assessment), whereas a summative test (final assessment) is done at the end or completion of the course or program.

A test can be used to measure students' ability or to determine the basic mastery of skills or competencies acquired during a course. There are several types of tests, such as placement, diagnostic, progress, achievement, and aptitude tests. A placement test places students in teaching groups or classes so that they are within the same level of ability or competency. A diagnostic test identifies students' strengths and weaknesses in a particular course. A progress test is done during the semester to measure students' progress in acquiring the subject taught. An achievement test determines students' mastery of a particular subject at the end of the semester. An aptitude test, by contrast, determines students' ability to learn new skills or their potential to succeed in a particular academic program.

A good assessment should be valid, reliable, and practical. In terms of validity, an assessment should test what it is intended to measure; content validity, for example, means that the test items adequately cover the syllabus. A valid assessment measures achievement of the course learning outcomes. In terms of reliability, the assessment should allow examiners to evaluate it consistently and to differentiate between varying levels of performance. In terms of practicality, we need to ensure that the time given to students for their assessments is appropriate.

In terms of format, tests are either objective or subjective. For objective tests we can choose multiple-choice questions, true/false items, or fill-in-the-blanks, whereas for subjective tests we can choose short or long essays. Although both formats exist, I would like to focus on subjective tests (essays) because we use this format most often, especially in final examinations.

When constructing an assessment, we need to bear in mind the learning objectives of the particular course. Specifically, we need to refer to the course information for the course learning outcomes before constructing the exam questions. In addition, we need to understand Bloom's taxonomy, or classifications of objectives. The three classifications are cognitive, affective, and psychomotor. The six levels of the cognitive domain are knowledge, comprehension, application, analysis, synthesis, and evaluation. The levels of the affective domain are receiving, responding, valuing, organizing, and characterizing. The psychomotor levels are imitation, manipulation, precision, articulation, and naturalization. I discussed the levels of each domain in detail in the previous issue; in this issue I focus on the cognitive domain because it is the most frequently used in final examinations and the one with which we are most familiar.
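Reliability, in particular, can be made concrete with a small worked example. The sketch below is a minimal illustration, not taken from the paper above: it estimates internal-consistency reliability with Cronbach's alpha, a common reliability coefficient; the function name and the item-score data are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    scores: 2-D array with one row per student and one column per test item.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical scores: five students, four dichotomously scored items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

An alpha near 1 suggests the items consistently measure the same underlying ability; values below roughly 0.7 are commonly read as a sign that the test may need revision.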
Mapping the maze of assessment: An investigation into practice
Active Learning in Higher Education, 2009
This article presents the results of a preliminary survey of assessment tasks undertaken by students in higher education at a particular university. A key premise of the study was that the ability to handle assessment is central to the development of academic and professional literacy. Much of the current literature on assessment demonstrates a concern that it is not currently
Assessment, Evaluation, and Research
New Directions for Higher Education, 2016
The Assessment, Evaluation, and Research (AER) competency area focuses on the ability to design, conduct, critique, and use various AER methodologies and the results obtained from them; to utilize AER processes and their results to inform practice; and to shape the political and ethical climate surrounding AER processes and uses in higher education. One should be able to:

Basic
- Differentiate among assessment, program review, evaluation, planning, and research, as well as the methodologies appropriate to each.
- Select AER methodologies, designs, and tools that fit with research and evaluation questions and with assessment and review purposes.
- Facilitate appropriate data collection for system/department-wide assessment and evaluation efforts using current technology and methods.
- Effectively articulate, interpret, and apply results of assessment, evaluation, and research reports and studies, including professional literature.
- Assess the legitimacy, trustworthiness, and/or validity of studies of various methodological designs (qualitative, quantitative, and mixed methods, among others).
- Consider the strengths and limitations of various methodological AER approaches in the application of findings to practice in diverse institutional settings and with diverse student populations.
- Explain the necessity to follow institutional and divisional procedures and policies (e.g., IRB approval, informed consent) with regard to ethical assessment, evaluation, and other research activities.
- Ensure that all communications of AER results are accurate, responsible, and effective.
- Identify the political and educational sensitivity of raw and partially processed data and AER results, handling them with appropriate confidentiality and deference to organizational hierarchies.
- Design program and learning outcomes that are appropriately clear, specific, and measurable, that are informed by theoretical frameworks such as Bloom's taxonomy, and that align with organizational outcomes, goals, and values.
- Explain to students and colleagues the relationship of AER processes to learning outcomes and goals.

Intermediate
- Design ongoing and periodic data collection efforts such that they are sustainable, rigorous, as unobtrusive as possible, and technologically current.
- Effectively manage, align, and guide the utilization of AER reports and