Measuring Writing as a Representation of Disciplinary Knowledge
Related papers
Building and Validating a Tool for Assessing Academic Writing Skills
International Journal of Linguistics, 2013
Most studies that examine the academic writing skills of entering freshmen point to a gap between those students' writing skills and the requirements of academic writing. This study describes a research tool designed to assess the academic writing skills of entering students. This tool, known as the Test of Academic Writing Skills (TAWS), was developed by breaking down the criteria of academic writing into measurable components that can be explicitly interpreted, thus enabling accurate identification of the weak points in students' academic writing. The results achieved using this tool reinforce the dichotomy
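As a rough illustration of the idea of breaking writing criteria into measurable components, the sketch below is purely hypothetical: the component names, weights, and threshold are assumptions for illustration, not the actual TAWS specification.

```python
# Hypothetical decomposition of "academic writing" into weighted, measurable
# components so that weak points can be identified per component.
COMPONENTS = {  # weight of each component in the composite score
    "thesis_statement": 0.20,
    "paragraph_structure": 0.20,
    "use_of_sources": 0.25,
    "register_and_style": 0.15,
    "grammar_and_mechanics": 0.20,
}

def composite_score(component_scores: dict) -> float:
    """Weighted composite on a 0-100 scale from per-component scores (0-100)."""
    return sum(COMPONENTS[name] * component_scores[name] for name in COMPONENTS)

def weak_points(component_scores: dict, threshold: float = 60.0) -> list:
    """Components scoring below the threshold flag specific weaknesses."""
    return [name for name, score in component_scores.items() if score < threshold]

student = {"thesis_statement": 55, "paragraph_structure": 72,
           "use_of_sources": 48, "register_and_style": 80,
           "grammar_and_mechanics": 65}
print(composite_score(student))  # 62.4
print(weak_points(student))      # ['thesis_statement', 'use_of_sources']
```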
Creating and validating assessment instruments for a discipline-specific writing course
Journal of Applied Linguistics, 2007
This paper reports on a sustained interdisciplinary effort between applied linguists and chemistry faculty to create and validate writing assessment instruments for an advanced-level Write Like a Chemist course, one component of a much larger interdisciplinary project. The article describes a multiple-year effort to form valid analytic and holistic assessment instruments to be used by chemistry faculty to assess the writing performance of chemistry majors. Emphasis is placed on the joint contributions of applied linguists and chemists in (a) the identification of meaningful writing criteria, (b) the development of assessment scales with distinct score points and descriptors, (c) socialization sessions that prepared chemists to help build the assessment instruments, and (d) the validation of assessment instruments with other chemists. Outcomes exemplify the mediating role that applied linguistics can play in the design of a discipline-specific course, instructional materials, and assessment instruments that support the development of disciplinary expertise. The results also demonstrate the positive consequences of crossing disciplinary boundaries for both subject-area faculty and applied linguists.
Writing, Assessment, and the Authority of the Disciplines
AZTI arrantza, 2008
The writing program at Cornell University involves professors from across the disciplines teaching writing courses at each level of students' undergraduate careers. This program undertook an assessment of its effectiveness in the years 2002-2004. The process of creating and carrying out an assessment developed by professors involved in the program is reported, and the assessment results are presented. These results lead the author to argue for the assessment process itself as a key experience in developing the disciplinary awareness of participating professors, who became involved in deep questioning of what 'good' student writing might be in higher education and how it relates to the language practices of each discipline. The project's challenges and benefits support the value of treating the assessment of students' work across disciplines as fundamentally owned by each discipline.
2014
This dissertation presents a study designed to uncover instructors' conceptualizations of disciplinary writing in order to understand how those conceptualizations may hinder or support students' development as successful writers across disciplines. Data sources included surveys, in which instructors answered open-response questions and constructed analogies about disciplinary writing, and a series of semi-structured focus groups and interviews with key informants. Data analysis included content analysis to identify the contexts surrounding instructors' discussions of disciplinary writing, discourse analysis to elucidate instructors' personal conceptualizations of disciplinary writing, and metaphor analysis to illuminate the analogical expressions instructors used to make sense of disciplinary writing. Analyses revealed not only that instructors held wide-ranging conceptualizations of disciplinary writing, but also that when those conceptualizations were grouped by field, conceptual mismatches emerged within and between disciplines, within and between fields, and even within and between content-area courses and literacy courses. The findings suggest that instructors' conceptualizations about the purposes, descriptions, and values of disciplinary writing are embedded within unconscious and conscious understandings of the nature of each discipline, as well as within instructors' understandings of their responsibility to the teaching of writing.

One group of instructors, for example, expected students to respond to readings or prompts, to identify inferences, to connect to prior knowledge, and to paraphrase or summarize information. Through this view of communication, these instructors were interested in how students communicated their understandings of and connections to the material, whether through prior knowledge, opinions, or personal thoughts. As long as students could articulate their thoughts clearly, their communication was considered acceptable. Math and science instructors, on the other hand, tended to represent communication as the ability to be precise, concise, accurate, and logical in describing processes. Because many processes, such as experiments, labs, and tests, depend on someone writing down the process exactly, with no room for error, the construct of communication holds a specific meaning for these instructors: if a student used any kind of opinion, interpretation, feeling, or emotive language, the communication was not considered good. In this view, what mattered was specificity, brevity, and accuracy. Social scientists, in turn, rely heavily on making claims about the past or about social and personal behaviors; such claims are highly speculative, requiring a writer to be critical of biases, interpretation, and evidence. These instructors therefore did not consider personal opinion, feelings, or personal thoughts acceptable in any form of communication and were more interested in research, evidence, criticism, and interpretation. What counts as acceptable written communication in composition courses, then, is not necessarily acceptable in content-area courses.
College Writing Instructors Using Rubrics to Drive Instruction
Academia Letters, 2021
Based on data from the top 50 national universities and top 10 liberal arts colleges in the US News & World Report rankings, college classes contain on average anywhere from 18 to 117 students (Giddon). Most basic college composition courses require students to write thousands of words to earn credit, and class sizes are generally not small, which makes grading student writing a monumental task. Teachers spend many hours grading student work, and teachers who are contracted for more than one class find their workload multiplying rapidly. High-quality, individualized feedback must reach students in a timely manner, a task that can be both time-consuming and impractical, so how can a teacher reduce grading time while still providing quality feedback? The answer is rubrics. Composition teachers should master how to use rubrics in a specific way within their writing classes, because using a rubric is not a "one size fits all" solution. Students appreciate an instructor's use of a rubric because they perceive that it makes the analysis of their writing less subjective. Learning to write is an arduous and anxiety-riddled undertaking for many students, and knowing that their work will be graded according to a predetermined set of guidelines helps ease student anxiety. Atkinson and Lim conducted a study of students' perceptions of rubrics embedded within a learning management system and concluded that 95% of students felt the use of rubrics "contributed consistency and fairness to their professors' evaluation," which substantiates this claim (Leader and Clinton 90). Learning to write academically is an arduous process that requires students to strategize the organization of their paper and grapple with its content. Knowing exactly what is required mitigates some of the stress associated with writing and frees students to express themselves, while ensuring that the teacher adheres
A Framework for Analyzing Features of Writing Curriculum in Studies of Student Writing Achievement
The journal of writing analytics, 2022
• Background: Researchers interested in quantitative measures of student "success" in writing cannot control completely for contextual factors which are local and site-based (i.e., in context of a specific instructor's writing classroom at a specific institution). (In)ability to control for curriculum in studies of student writing achievement complicates interpretation of features measured in student writing. This article demonstrates how identifying and analyzing features of writing curriculum can provide dimensions of local context not captured in analysis of student-generated texts alone. Using a dataset of 48 curricular texts collected from 21 instructors teaching in five disciplines across six four-year public universities in the United States, this article: 1) presents a set of curriculum scoring rubrics developed through qualitative analysis, 2) describes a protocol for training raters to use the rubrics to score curricular texts to achieve rater agreement and generate quantitative data, and 3) explores how this framework
¹ Jill Burstein completed her work on this paper while employed at ETS.
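The rater-training step described above aims at demonstrable agreement before rubric scores are treated as quantitative data. Below is a minimal sketch of one common agreement check, quadratically weighted Cohen's kappa; the rubric dimension, score scale, and scores are invented, and this is not the authors' actual protocol.

```python
# Agreement check for two trained raters scoring the same curricular texts
# on a hypothetical 1-4 rubric dimension (e.g., "clarity of task goals").
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 4, 3, 1, 2, 3, 4, 2]
rater_b = [3, 4, 3, 4, 3, 1, 2, 2, 4, 2]

# Quadratic weighting penalizes large disagreements more heavily,
# which suits ordinal rubric scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # values above ~0.6 are often read as substantial agreement
```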
Examining consistency among different rubrics for assessing writing
Language Testing in Asia, 2020
The literature on using scoring rubrics in writing assessment points to the significance of rubrics as practical and useful means of assessing the quality of writing tasks. This study investigates the agreement among rubrics endorsed and used for assessing essay writing tasks by internationally recognized tests of English language proficiency. Two hundred essays (task 2) from the academic IELTS test were randomly selected from about 800 essays administered between 2015 and 2016 at an official IELTS center, a representative of IDP Australia. The test takers were 19 to 42 years of age; 120 were female and 80 were male. Three raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., Independent TOEFL iBT, GRE, CPE, and CAE) to score the essays, which had previously been scored officially by a certified IELTS examiner. Data analysis through correlation and factor analysis showed general agreement among raters and scores; however, some deviant scorings were observed for two of the raters. Follow-up interviews and a questionnaire survey revealed that the source of score deviations could be related to the raters' interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance that can be attached to rubrics in writing assessment, raters themselves can outweigh the rubrics in their impact on scores.
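A minimal sketch of the kind of agreement check described above: correlating the scores the same essays receive under different rubrics. The scores are invented and the column names only mirror the exams mentioned; this is not the study's actual data or analysis.

```python
# Cross-rubric agreement: do different rubrics rank the same essays similarly?
import pandas as pd

scores = pd.DataFrame({
    "IELTS":     [6.0, 7.0, 5.5, 6.5, 8.0, 5.0],  # official band scores
    "TOEFL_iBT": [4.0, 4.5, 3.5, 4.0, 5.0, 3.0],
    "GRE":       [3.5, 4.5, 3.0, 4.0, 5.5, 3.0],
    "CPE":       [3.0, 4.0, 3.0, 3.5, 5.0, 2.5],
    "CAE":       [3.0, 4.5, 2.5, 3.5, 5.0, 3.0],
})

# High pairwise Pearson correlations suggest the rubrics order the essays
# consistently even though their score scales differ.
print(scores.corr(method="pearson").round(2))
```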