A Computer Program for Invention and Feedback

Writing, thinking and computers

British Journal of Educational Technology, 1993

The author reviews the potential of word-processors for changing the ways in which writers process written text and, indeed, think about writing. He draws distinctions between different kinds of writing software and discusses whether changing the processes of writing necessarily leads to improvements in writing quality. The review also raises questions about the use of word-processors in examinations.

Issues of Integrating Computers into Writing Instruction

1984

Computers can provide four kinds of help to practicing writers: (1) data storage and retrieval, (2) computer-assisted instruction and text feedback, (3) utility or word processing programs, and (4) telecommunications capability. Writing teachers must incorporate certain values into the planning of computer applications in the writing curriculum. The first value is freedom, in light of the limits of composition pedagogy and of the function of computers as enforcers and controllers. Teachers must allow students the opportunity to learn for themselves, whether by wrestling with a program thought to be too advanced for them (the way people learn to play computer games without instructions), by using programs that ask open-ended questions, or by not using the computer at all if they so choose. A second value involves honesty and humaneness in the feedback given by computers. Programs that evaluate style may be counterproductive, whereas feedback such as "that was my toughest question" gives the student pride in a right answer, or information for evaluating the test in the event of a wrong answer. A third value is playful creativity. For example, a computer "bulletin board" is a safe playground for trying out essays on other readers, and word processing programs encourage risk-taking in revision.

An Annotated Bibliography of Writing Assessment: Machine Scoring and Evaluation of Essay-length Writing

Journal of Writing Assessment, 2012

This installment of the JWA annotated bibliography focuses on the phenomenon of machine scoring of whole essays composed by students and others. This bibliography extends Rich Haswell's bibliography on the topic, "A Bibliography of Machine Scoring of Student Writing, 1962-2005," in Machine Scoring of Student Essays: Truth and Consequences (Logan, UT: Utah State University Press, pp. 234-243). Here we define "machine scoring" as the rating of extended or essay writing by means of automated, computerized technology. We exclude scoring of paragraph-sized free responses of the sort that occur in academic course examinations. We also exclude software that checks only grammar, style, and spelling. But we include software that provides other kinds of evaluative or diagnostic feedback along with a holistic score.
While some entries in this bibliography describe, validate, and critique the ways computers "read" texts and generate scores and feedback, other sources critically examine how these results are used. The topic is timely, since the use of machine scoring of essays is rapidly growing in standardized testing, sorting of job and college applicants, admission to college, placement into and exit out of writing courses, content tests in academic courses, and value-added study of learning outcomes. With this abridged collection of sources, our goal is to provide readers of JWA with a primer on the topic, or at least an introduction to the methods, jargon, and implications associated with computer-based writing evaluation. We have omitted pieces of historical interest, focusing mainly on developments in the last twenty years, although some pieces recount the forty-five years of machine scoring history. The scholarship should provide teachers of writing, writing program administrators, and writing assessment specialists a broad base of knowledge about how machine scoring is used and to what ends, as well as insights into some of the main theoretical and pedagogical issues current among those who advocate for or caution against machine scoring. At the least, we hope this collection of resources will provide readers with a sense of the conversation about machine scoring by experts in the enterprise of making and marketing the software and scholars in the field of writing studies and composition instruction. We all need to be better prepared to articulate the benefits, limitations, and problems of using machine scoring as a method of writing assessment and response.

Attali, Yigal. (2004). Exploring the feedback and revision features of Criterion. Paper presented at the National Council on Measurement in Education, April 12-16, San Diego, CA.
http://www.ets.org/Media/Research/pdf/erater_NCME_2004_Attali_B.pdf Reports on a large-scale, statistically based study of the changes in student essays, from grades 6-12, from the first to last submission to Educational Testing Service's Criterion. Evaluates "the effectiveness of the automated feedback" feature to decide whether "students understand the feedback provided to them and have the ability to attend to the comments" (p. 17). Concludes that statistical analysis demonstrates students were able to significantly lower the rate of specific errors and significantly increase the occurrence of certain desirable discourse elements (e.g., introduction, conclusion); therefore, students are able to revise and improve their essays using the automated feedback system. Although the study's research questions are framed in terms of how students understand the feedback and why they choose to revise, the comparison of first and last essay submissions is purely text based. The study doesn't present any data about how feedback was used by the students, whether there was any teacher intervention, or whether other factors external to the feedback could have influenced the students' revisions and final texts.

Baron, Dennis. (1998). When professors get A's and machines get F's. The Chronicle of Higher Education (November 29), p. A56. Examines Intelligent Essay Assessor (IEA), arguing that while this assessment program claims to scan student essays consistently and objectively in seconds using what its developers call "latent semantic analysis," consistent, objective readings would not necessarily improve classroom assessments or teaching, if such readings could be achieved. Baron posits that consistency, whether in the grading or reading of student essays, is not a human trait.

Breland, Hunter M. (1996). Computer-assisted writing assessment: The politics of science versus the humanities. In

Computers in writing instruction

International Journal of Educational Research, 1992

For computers to be useful in writing instruction, innovations should be valuable for students and feasible for teachers to implement. Research findings yield contradictory results in measuring the effects of different uses of computers in writing, in part because of the methodological complexity of such measurements. Yet the computer seems to be a promising tool in several new, theoretically based approaches to writing instruction. Research on these kinds of computer applications should continue, paying attention to the context variables that importantly influence the implementation process.

The Effects of Computer Skills and Feedback on the Gains in Students' Overall Writing Quality in College Freshman Composition Courses

1999

This paper reports the results and educational implications of an experimental, comparative study evaluating the gains in overall writing quality in two groups of college freshman composition students. The experimental group of 45 students learned to compose their first four of eight essays on the computer, while their professor intervened with feedback during their writing processes. The control group of 43 students did not receive this instructional treatment, as their instructor gave them feedback only after grading each of their eight handwritten products. The researchers hypothesized that feedback from the professor during the planning and composing processes, especially on the computer, would be significantly more effective for students' writing quality than the other instructor's graded feedback on students' handwritten products. The study used a quantitative, pretest/post-test experimental design with statistical analysis (paired t-tests). According to the results, the students' gains in overall writing quality between pretest and post-test essays were statistically significant for both groups of college freshman writers. However, there was a highly significant difference between the two groups in the students' post-test outcomes: the experimental group had significantly higher post-test scores and showed greater gains in writing quality than the control group. In addition, students in the experimental group learned long-term word processing skills applicable to any writing assignment. The study also suggests numerous variables or features in the teaching of writing for further research, helping educators realize that remaining "up close and personal" facilitates their students' planning, composing, and revising, especially on the computer, and produces much more effective college writers, as well as more successful graduates and employees.
(Contains 20 references and 2 tables of data; an appendix presents a description of the essay scoring procedure.)

Chapter 2: Using Computers to Teach Writing: Advantages and Disadvantages

When considering whether to introduce computers into the writing classroom, one question stands above the rest: do computers improve the writing of students? The answer, surprisingly, is that we don't know. Wolfe et al. report that "researchers do not agree about the effects of using word processors on the quality of student writing" (270). Owston, Murphy, and Wideman inform us that "the results to date have been equivocal" (251). Joram et al. tell us that, in regard to the accepted belief that computers facilitate revision, "there is little research that directly tests this claim" (168). Collier and Werier find that "research on the qualitative changes effected in writing by word-processing systems have been either contradictory or inconclusive . . . for all population samples: experienced professional and academic writers, as well as several categories of inexperienced writers" (47).