Evaluation in the context of natural language generation
Related papers
Towards evaluation in natural language generation
1998
What role should evaluation play in the development of natural language generation (NLG) techniques and systems? In this paper we begin by characterising the NLG task, and in the light of this characterisation we discuss how the problems of NLG evaluation are different from the problems of evaluating natural language understanding. We then suggest a way forward for evaluation of work in natural language generation that proceeds by looking more closely at the component problems that are addressed in research in the field.
This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of NLP, with an emphasis on different evaluation methods and the relationships between them.
Building applied natural language generation systems
1997
In this article, we give an overview of Natural Language Generation (NLG) from an applied system-building perspective. The article includes a discussion of when NLG techniques should be used; suggestions for carrying out requirements analyses; and a description of the basic NLG tasks of content determination, discourse planning, sentence aggregation, lexicalization, referring expression generation, and linguistic realisation. Throughout, the emphasis is on established techniques that can be used to build simple but practical working systems now. We also provide pointers to techniques in the literature that are appropriate for more complicated scenarios.
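The six tasks named in this abstract form the classic NLG pipeline. A minimal sketch of how such a pipeline composes, assuming a toy weather domain (all function names and data below are illustrative, not taken from the paper):

```python
# Illustrative sketch of the classic NLG pipeline stages:
# content determination -> discourse planning -> aggregation
# -> lexicalization / referring expressions -> realisation.
# The weather facts and helper names are hypothetical.

def content_determination(data):
    # Select which facts are worth reporting.
    return [f for f in data if f["salient"]]

def discourse_planning(facts):
    # Order the selected facts (here: by a simple priority key).
    return sorted(facts, key=lambda f: f["priority"])

def aggregation(facts):
    # Merge facts that share a subject into one message.
    by_subj = {}
    for f in facts:
        by_subj.setdefault(f["subject"], []).append(f["predicate"])
    return list(by_subj.items())

def lexicalize_and_refer(subject):
    # Choose words and a referring expression for the subject.
    return {"temperature": "the temperature", "wind": "the wind"}.get(subject, subject)

def realise(messages):
    # Produce one grammatical sentence per aggregated message.
    sentences = []
    for subj, preds in messages:
        np = lexicalize_and_refer(subj).capitalize()
        sentences.append(f"{np} {' and '.join(preds)}.")
    return " ".join(sentences)

data = [
    {"subject": "temperature", "predicate": "will rise to 20C", "salient": True, "priority": 1},
    {"subject": "wind", "predicate": "will stay light", "salient": True, "priority": 2},
    {"subject": "humidity", "predicate": "is 60%", "salient": False, "priority": 3},
]
print(realise(aggregation(discourse_planning(content_determination(data)))))
# → The temperature will rise to 20C. The wind will stay light.
```

The point of the sketch is the staged division of labour: each stage consumes the previous stage's output, which is why the abstract can discuss each task largely independently.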
A Brief Survey on Natural Language Processing Based Text Generation and Evaluation Techniques
2022
Text generation is a pressing topic in Natural Language Processing that involves the prediction of upcoming text. Applications such as auto-complete, chatbots, auto-correct, and many others use text generation to meet certain communicative requirements. However, more accurate text generation methods are needed to encapsulate all possibilities of natural language communication. In this survey, we present cutting-edge methods being adopted for text generation. These methods are divided into three broad categories: 1) Sequence-to-Sequence models (Seq2Seq), 2) Generative Adversarial Networks (GANs), and 3) miscellaneous. Sequence-to-Sequence involves supervised methods, while GANs are unsupervised, aimed at reducing the dependence of models on training data. We then list a few other text generation methods, and summarize some evaluation metrics available for text generation along with their performance.
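Among the evaluation metrics this survey summarizes, BLEU-style metrics are built on clipped n-gram precision. A minimal sketch of that core quantity (a simplification: real BLEU combines several n-gram orders and applies a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision between a candidate and a reference,
    the core quantity behind BLEU-style text generation metrics."""
    cand = candidate.split()
    ref = reference.split()
    # Count n-grams in each string.
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Each candidate n-gram is credited at most as often as it
    # appears in the reference ("clipping").
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())

print(ngram_precision("the cat sat on the mat", "the cat is on the mat", n=2))
# → 0.6  (3 of 5 candidate bigrams appear in the reference)
```

Clipping prevents a degenerate candidate that repeats a single common n-gram from scoring highly, which is one reason such metrics remain a default baseline despite their known weaknesses.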
Handbook of Natural Language Processing, 2000
We report here on a significant new set of capabilities that we have incorporated into our language generation system MUMBLE. Their impact will be to greatly simplify the work of any text planner that uses MUMBLE as its linguistics component, since MUMBLE can now take on many of the planner's text organization and decision-making problems with markedly less hand-tailoring of algorithms in either component.
An Introduction to Natural Language Generation
2003
Topic 1: NLG Overview. Course objectives:
• to give a broad overview of the field of NLG
• to show the state of the art in NLG
• to give an overview of the more prominent NLG systems and approaches
• to highlight the current major issues in NLG research
Natural Language Generation in Artificial Intelligence and Computational Linguistics
The Kluwer International Series in Engineering and Computer Science, 1991
A number of collections of papers from the field of natural language generation (NLG) have been published over the last few years: Kempen (1987), Zock and Sabah (1988), Dale, Mellish, and Zock (1990), and now the present volume. All have in common that they are derived in one way or another from workshops on the subject, and should therefore make available new and often exploratory research in a timely fashion. If such a book is to be more than a conference proceedings, it has to do a little more too, of course; it should present the research in more detail than a conference proceedings would, there should be greater cohesion amongst the papers, and it should be produced to an appropriate standard. The present book, like its predecessors, succeeds on some counts but fails on others. The papers in the book are organized into three strands, described in turn below: text planning, lexical choice, and grammatical resources. The balance between these is rather skewed, however: the first section contains eight papers, and the second and third contain only three papers each.
Natural Language Generation: Scope, Applications and Approaches
Natural Language Generation is a subfield of computational linguistics that is concerned with computer systems which can produce understandable texts in some human language. The system uses a machine-understandable logical form as input and produces syntactically and semantically valid sentences in natural language. The different stages of NLG include content selection, lexical selection, sentence structuring, and discourse planning. The applications of NLG include text summarization, machine translation, and question answering. The effectiveness of NLG depends on the efficiency of the internal knowledge representation; an ontology-based knowledge representation will improve the output text quality. This work also discusses the scope of applying Karaka relations in language modeling for NLG.