A Computer Program for Invention and Feedback

Writing, thinking and computers

British Journal of Educational Technology, 1993

The author reviews the potential of word-processors for changing the ways in which writers process written text and, indeed, think about writing. He draws distinctions between different kinds of writing software and discusses whether changing the processes of writing necessarily leads to improvements in writing quality. The review also raises questions about the use of word-processors in examinations.

Issues of Integrating Computers into Writing Instruction

1984

Computers can provide four kinds of help to practicing writers: (1) data storage and retrieval, (2) computer-assisted instruction and text feedback, (3) utility or word processing programs, and (4) telecommunications capability. Writing teachers must incorporate certain values into the planning of computer applications in the writing curriculum. The first value is freedom, in light of the limits of composition pedagogy and of the function of computers as enforcers and controllers. Teachers must allow students the opportunity to learn for themselves, whether by wrestling with a program thought to be too advanced for them (the way people learn to play computer games without instructions), by using programs that pose open-ended questions, or by not using the computer at all if they so choose. A second value involves honesty and humaneness in the feedback given by computers. Programs that evaluate style may be counter-productive, whereas feedback such as "that was my toughest question" gives the student pride in a right answer or information for evaluating the test in the event of a wrong answer. A third value is playful creativity. For example, a computer "bulletin board" is a safe playground for trying out essays on other readers, and word processing programs encourage risk-taking in revision. (NTH)

An Annotated Bibliography of Writing Assessment: Machine Scoring and Evaluation of Essay-length Writing

Journal of Writing Assessment, 2012

This installment of the JWA annotated bibliography focuses on the phenomenon of machine scoring of whole essays composed by students and others. This bibliography extends Rich Haswell's bibliography on the topic, "A Bibliography of Machine Scoring of Student Writing, 1962-2005," in Machine Scoring of Student Essays: Truth and Consequences (Logan, UT: Utah State University Press, pp. 234-243). Here we define "machine scoring" as the rating of extended or essay writing by means of automated, computerized technology. We exclude scoring of paragraph-sized free responses of the sort that occur in academic course examinations. We also exclude software that checks only grammar, style, and spelling. But we include software that provides other kinds of evaluative or diagnostic feedback along with a holistic score.
While some entries in this bibliography describe, validate, and critique the ways computers "read" texts and generate scores and feedback, other sources critically examine how these results are used. The topic is timely, since the use of machine scoring of essays is rapidly growing in standardized testing, sorting of job and college applicants, admission to college, placement into and exit out of writing courses, content tests in academic courses, and value-added study of learning outcomes. With this abridged collection of sources, our goal is to provide readers of JWA with a primer on the topic, or at least an introduction to the methods, jargon, and implications associated with computer-based writing evaluation. We have omitted pieces of historical interest, focusing mainly on developments in the last twenty years, although some pieces recount the forty-five years of machine scoring history. The scholarship should provide teachers of writing, writing program administrators, and writing assessment specialists a broad base of knowledge about how machine scoring is used and to what ends, as well as insights into some of the main theoretical and pedagogical issues current among those who advocate for or caution against machine scoring. At the least, we hope this collection of resources will provide readers with a sense of the conversation about machine scoring by experts in the enterprise of making and marketing the software and scholars in the field of writing studies and composition instruction. We all need to be better prepared to articulate the benefits, limitations, and problems of using machine scoring as a method of writing assessment and response.

Attali, Yigal. (2004). Exploring the feedback and revision features of Criterion. Paper presented at the National Council on Measurement in Education, April 12-16, San Diego, CA.
http://www.ets.org/Media/Research/pdf/erater_NCME_2004_Attali_B.pdf Reports on a large-scale, statistically based study of the changes in student essays, from grades 6-12, from the first to last submission to Educational Testing Service's Criterion. Evaluates "the effectiveness of the automated feedback" feature to decide whether "students understand the feedback provided to them and have the ability to attend to the comments" (p. 17). Concludes that statistical analysis demonstrates students were able to significantly lower the rate of specific errors and significantly increase the occurrence of certain desirable discourse elements (e.g., introduction, conclusion); therefore, students are able to revise and improve their essays using the automated feedback system. Although the study's research questions are framed in terms of how students understand the feedback and why they choose to revise, the comparison of first and last essay submissions is purely text based. The study doesn't present any data about how feedback was used by the students, if there was any teacher intervention, or if other factors external to the feedback could have influenced the students' revisions and final texts.

Baron, Dennis. (1998). When professors get A's and machines get F's. The Chronicle of Higher Education (November 29), p. A56. Examines Intelligent Essay Assessor (IEA), arguing that while this assessment program claims to scan student essays consistently and objectively in seconds using what its developers call "latent semantic analysis," consistent, objective readings would not necessarily improve classroom assessments or teaching, if such readings could be achieved. Baron posits that consistency, whether in the grading or reading of student essays, is not a human trait.

Breland, Hunter M. (1996). Computer-assisted writing assessment: The politics of science versus the humanities. In

Teaching Writing with Language Feedback Technology

Computers and Composition, 2019

Against the current backdrop of the controversies and concerns over machine scoring, this paper focuses on one specific, less controversial aspect of how machines can be effective in improving students' writing: identifying and providing timely feedback on language accuracy to students. This paper investigates the use of a Linguistic Feedback Tool (LiFT) to identify and provide feedback on the use of grammar, spelling, and punctuation in students' compositions, as well as the potential reduction in teachers' marking time, through a study conducted in Singapore schools. Part One of the study explores teachers' and students' reception of, and students' experience of using, a LiFT in their compositions. Part Two of the study investigates the hypothesis that students' use of a LiFT to review composition drafts before submission to their teachers would reduce the teachers' marking time. The findings indicate that both teachers and students are receptive to the use of a LiFT to improve students' English compositions and that there are time savings in marking for the teachers.

Smagorinsky, P. (1991). The writer's knowledge and the writing process: A protocol analysis. Research in the Teaching of English, 25, 339-364.

This study used on-line protocol analysis to contrast the effects on the writing process of knowledge taught in three instructional treatments: Models (declarative knowledge of form), General Procedures (declarative knowledge of form plus general procedural knowledge related to content plus procedural knowledge related to form), and Task-Specific Procedures (declarative knowledge of form plus task-specific procedural knowledge related to content plus procedural knowledge related to form). Pretest and posttest protocols from six students in each treatment measured treatment effects on the processes of students writing essays involving extended definition. Students in the Models treatment made weak improvements in relating the elements of definition and did not think critically about the concepts being defined. Students in the General Procedures treatment made gains in linking ideas according to particular task constraints and improved their critical thinking skills. Students in the Task-Specific Procedures integrated their ideas purposefully, thought critically about the concepts being defined, and appeared to establish a conversational voice to anticipate composing needs.

How Learners Use Automated Computer-Based Feedback to Produce Revised Drafts of Essays

CALL: Using, Learning, Knowing, EUROCALL Conference, Gothenburg, Sweden, 22-25 August 2012, Proceedings, 2012

Our previous results suggest that the use of Criterion, an automatic writing evaluation (AWE) system, is particularly successful in encouraging learners to produce amended drafts of their essays, and that those amended drafts generally represent an improvement on the original submission. Our analysis of the submitted essays and the feedback provided on the first drafts suggests, however, that the students use a variety of quite different strategies when using the automated computer-based feedback to produce amended drafts. These include simply accepting a suggested correction, interpreting a feedback comment to modify the text, and avoidance strategies such as leaving out text that was highlighted as incorrect or problematic. Our data suggest that the strategies the students use are at least partly influenced by the confidence they have in the feedback, and therefore in the system itself, but may also be influenced by their interpretation of how marks are awarded by the system. This presentation will discuss the findings of an in-depth analysis of the changes made in second drafts submitted to the system, linking the changes to the automatic feedback provided on the first draft, and exploring the reasons for the changes made by the students. We will suggest ways in which teachers can explore the utility of various strategies with their learners.

Writing and Response: Theory, Practice, and Research

College Composition and Communication, 1990

to writing is often seen procedurally, as something which, from an instructional perspective, can be regulated and institutionalized as a coherent part of any writing program. From both the student's and the teacher's perspective, however, response to writing is often difficult and tense. For the teacher, it is the schizophrenia of roles: now the helpful facilitator, hovering next to the writer to lend guidance and support, and now the authority, passing critical judgment on the writer's work; at one moment the intellectual peer, giving "reader-based" feedback (Elbow 1981), and at the next the imposer of criteria, the gatekeeper of textual standards. For students, as for all writers, response can generate its own special anxieties: misguided expectations as their private creations struggle with the public nature of discourse; conflicts of ego as the instinct to present themselves at their best battles with the fundamental need to share their doubts and imperfections at the very moment when they are most vulnerable. As teachers and as agents of the theories we generate in the field of writing studies, we have tried to circumvent these tensions and complexities with the certainties afforded by simple procedures and activities. Like farmers with new chemicals, we grasp at the slightest success of this or that treatment, forgetting for the moment that nothing in our human world, least of all writing, can be so simple. Occasionally, the keys to unlocking the complexities of writing have, in fact, been elegantly simple, lying in plain view but for our lack of vision to see them.
More often than not, however, the field of composition has erred when it has too hastily trusted and laid claim to certainty when it has assumed, for example, that some new key will work with great accuracy and consistency; that it will always work, no matter what the writing, in the same way; that it will work at different stages of intellectual development and writing proficiency; that it will work across contexts, from the classroom to the newspaper office to the chemist's lab; and that it will work among diverse personalities, from the introspective, moody, or defensive to the outgoing, self-consuming, or manic. Instead of providing solutions, the urge for certainty has often created new problems by encouraging simpleminded, mechanical procedures for teaching or learning highly complex skills and processes. Guised in the cloak of reliability and efficiency, such procedures are instructionally very attractive, and teachers adopt them rapidly, often in spite of their deepest convictions about the complexities of the writing process. Easily promulgated and passed along, both the methods and their underlying philosophical origins begin to saturate the habits [...] imaginative. (10) That such an ideology of response has prevailed in American writing instruction since Hill's "English Composition Card" is not difficult to discern: it is hard to overlook the card's educational durability, laminated as it is to the inside front and back covers of most typical composition textbooks. Symbolically sandwiching in everything else rhetorical, the card is still, in many contexts, the first and last word on writing. But we have, in fact, known better all along if not about how and when to respond to student writing, at least that real, substantive response is in one form or another fundamental to language development.
Some eighteen years before the specimen theme was written, Harvard began to require the "correction of bad English as part of the examination for admission" to the college. This move simply reinforced much of what was already happening in secondary schools, where students spent most of their "writing time" parsing sentences, working on penmanship, and memorizing dozens of numbered rules of grammar and usage. Some educators, however, were already pointing out the futility of divorcing these principles from actual use: [...] sustained with it manfully. Yet though his theme was remarkably long, a few commas in the text and two or three words in the margin were almost all the guidance that he got from the instructor. He lost three or four marks, for punctuation, I believe; but the instructor added to his score a well-earned bonus for general excellence, and marked the theme one hundred. Having secured permission to criticise this theme, I spent an hour or two upon it, trying to point out both its faults and its merits. The boy had done so much that it was a shame to see his teacher doing so little. There were a hundred things to say about the composition; and the boy, by the strong intelligence of his work, showed himself able to apply them all. Yet it was nobody's business to examine his writing minutely. Nobody had time for him. (Briggs 1890, 311) While we might wonder what Briggs had in mind for his hundred comments, his observation, buried in the annals of pedagogical history, still rings true. For it is time or lack of it that so often manages to redeem us from the admission that we have corrected, circled, checked, and assigned points to our students' writing but forgotten, in the arduous and painful process, to listen to what they have been saying. But even with world enough and time, the sort of minute examination of the theme which Briggs advocates might have done no more for the student's development than his teacher's formulaic approval.
In addition to finding useful ways to ease the "paper load" through alternative response methods, we would do well to study what we [...] new ways and can respond more openly and freely to it. Finally, Geoffrey Sirc offers a critique of computer programs that "respond" in one form or another to writing. Arguing that current programs are severely limited in what they can provide by way of response, Sirc offers an alternative perspective in which computer technology becomes an adjunct to, and not a replacement for, real human response. Part III draws together six studies of response to writing, ranging from descriptive analyses of teacher interactions with students to more empirical research on the relationship between response and the improvement of writing ability. In "Response to Writing as a Context for Learning to Write," Martin Nystrand and Deborah Brandt report results from their studies showing that the social, collaborative context of peer conferences not only helps students to revise more copiously but enhances their attitudes toward writing through the natural, functional reciprocity of discussion and review. As Cynthia Onore shows in "The Student, the Teacher, and the Text: Negotiating Meanings through Response and Revision," a curriculum based on inquiry can do much to help the teacher assume a more productive role in the classroom, closer to the role of the peer. The traditional classroom, with its established structure of authority and expectations, is unsuited to a model of growth which privileges both gains and losses in the nonlinear movement toward literacy. Drawing from case studies of three undergraduate students reacting to teacher responses, Onore reveals the complexity and idiosyncratic effects response can have on writers as they explore their way toward a finished text. By reinterpreting data that failed to reveal an expected relationship, Onore cautions us against the potential myopia of taking too empirical a view of our research.
Keeping the focus on teachers, Glynda Hull and Susan Wall examine the implications of new theories of interpretation for responding to error. In a study of elementary, secondary, and college teachers' responses to an editing task, they suggest that the interpretation of error is often a function of the institutional and ideological contexts of the reader. To assume that hard-and-fast criteria exist even within the boundaries of easily identifiable features of texts is to ignore the functional, contextual, and interpersonal features of discourse. Next, Ann Matsuhashi, Alice Gillam, Rance Conley, and Beverly Moss widen traditional conceptions of response by examining peer tutoring as a response event. Through a careful study of one beginning peer tutor's response process, the authors raise important questions about how students read texts in progress and how such readings reflect the development of writing ability. Clearly, no single method, no set theory, no specific research findings, no matter how conclusive, will provide everything we should know about or act upon in this rich and complicated process. But in gathering together the opinions, speculations, methods, and findings of twenty-two scholars and teachers who find the issue of response a constant source of interest and challenge, it is my hope that this volume moves us a little further toward a more complete understanding of the relationship between response and students' development as writers.

References

Briggs, L. B. R. 1890. The Correction of Bad English as a Requirement for Admission to Harvard College. The Academy 5:302-12.