An Introduction to Natural Language Generation
Related papers
Natural Language Generation: Scope, Applications and Approaches
Natural Language Generation (NLG) is a subfield of computational linguistics concerned with computer systems that can produce understandable texts in human languages. Such a system takes a machine-understandable logical form as input and produces syntactically and semantically valid sentences in natural language. The stages of NLG include content selection, lexical selection, sentence structuring and discourse planning. Applications of NLG include text summarization, machine translation and question answering. The effectiveness of an NLG system depends on the efficiency of its internal knowledge representation; an ontology-based knowledge representation will improve the quality of the output text. This work also discusses the scope of applying Karaka relations in language modeling for NLG.
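To make these stages concrete, the sketch below walks a few toy weather facts through content selection, discourse planning, lexical selection and sentence structuring (ordered here as a simple pipeline). It is a minimal illustration only; the fact format, salience scores, lexicon and function names are assumptions made for this example, not anything proposed in the paper.

```python
# Illustrative sketch of the NLG stages named above. All data
# structures and names here are assumptions for illustration.

FACTS = [
    {"pred": "temperature", "args": ("today", 31), "salience": 0.9},
    {"pred": "humidity",    "args": ("today", 70), "salience": 0.4},
    {"pred": "wind",        "args": ("today", 5),  "salience": 0.1},
]

LEXICON = {  # lexical selection: predicate -> surface phrase template
    "temperature": "the temperature will reach {1} degrees",
    "humidity":    "humidity will stay near {1} percent",
}

def content_selection(facts, threshold=0.3):
    """Keep only facts salient enough to be worth saying."""
    return [f for f in facts if f["salience"] >= threshold]

def discourse_planning(facts):
    """Order the selected facts (here: most salient first)."""
    return sorted(facts, key=lambda f: -f["salience"])

def lexical_selection(fact):
    """Map a logical predicate onto a surface phrase."""
    return LEXICON[fact["pred"]].format(*fact["args"])

def sentence_structuring(phrases):
    """Combine the phrases into one grammatical sentence."""
    body = ", and ".join(phrases)
    return body[0].upper() + body[1:] + "."

def generate(facts):
    selected = content_selection(facts)
    ordered = discourse_planning(selected)
    phrases = [lexical_selection(f) for f in ordered]
    return sentence_structuring(phrases)

print(generate(FACTS))
# -> The temperature will reach 31 degrees, and humidity will stay near 70 percent.
```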
Natural Language Generation in Artificial Intelligence and Computational Linguistics
The Kluwer International Series in Engineering and Computer Science, 1991
A number of collections of papers from the field of natural language generation (NLG) have been published over the last few years: Kempen (1987), Zock and Sabah (1988), Dale, Mellish, and Zock (1990), and now the present volume. All have in common that they are derived in one way or another from workshops on the subject, and should therefore make available new and often exploratory research in a timely fashion. If such a book is to be more than a conference proceedings, it has to do a little more too, of course; it should present the research in more detail than a conference proceedings would, there should be greater cohesion amongst the papers, and it should be produced to an appropriate standard. The present book, like its predecessors, succeeds on some counts but fails on others. The papers in the book are organized into three strands, described in turn below: text planning, lexical choice, and grammatical resources. The balance between these is rather skewed, however: the first section contains eight papers, and the second and third contain only three papers each.
Proceedings of the Linguistic Resources for Automatic Natural Language Generation - LiRA@NLG
2017
The Linguistic Resources for Automatic Natural Language Generation (LiRA@NLG) workshop of the International Natural Language Generation conference (INLG 2017), held at Santiago de Compostela on September 4, 2017, brought together participants involved in developing large-coverage linguistic resources and researchers with an interest in expanding real-world Natural Language Generation (NLG) software. Linguists and developers of NLG software have been working separately for many years: NLG researchers are typically more focused on technical issues specific to text generation, where good performance (e.g. recall and precision) is crucial, whereas linguists tend to focus on problems related to the development of exhaustive and precise resources that are mainly 'neutral' vis-à-vis any NLP application (e.g. parsing or generating sentences), using various grammatical formalisms such as NooJ, TAG or HPSG. However, recent progress in both fields is reducing many of these differences, with large-coverage linguistic resources being used more and more by robust NLP software. For instance, NLG researchers can now use large dictionaries of multiword units and expressions, and several linguistic experiments have shown the feasibility of using large phrase-structure grammars (a priori used for text parsing) in 'generation' mode to automatically produce paraphrases of sentences that are described by these grammars.

The eight papers presented at the LiRA@NLG workshop focused on the following questions: How do we develop 'neutral' linguistic resources (dictionaries, morphological, phrase-structure and transformational grammars) that can be used both to parse and generate texts automatically? Is it possible to generate grammatical sentences by using linguistic data alone, i.e. with no statistical methods to remove ambiguities? What are the limitations of rule-based systems, as opposed to stochastic ones? The common themes that these articles explore are: how to build large-coverage dictionaries and morphological grammars that can be used by NLG applications, how to integrate a linguistically-based generation module into a machine-translation system, and how to construct a syntactic grammar that can be used by a transformational engine to perform paraphrase generation.

Linguists as well as computational linguists who work on automatic generation based on linguistic methods will find advanced, up-to-the-minute studies on these topics in this volume. Max Silberztein's article, "Automatic Generation from FOAF to English: Linguistic Contribution to Web Semantics," presents an automatic system capable of generating a large number of English sentences from Friend Of A Friend (FOAF) statements in the RDF Turtle notation, using NooJ's transformational engine in both Parse and Generation modes.
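As a rough illustration of the FOAF-to-English idea behind Silberztein's paper, the sketch below maps a couple of toy FOAF triples in Turtle-like notation onto English sentences with hand-written templates. It is not NooJ and does not reproduce the paper's transformational grammar; the predicates, templates and helper functions are assumptions chosen only to show the general shape of such a mapping.

```python
# Toy FOAF-to-English mapping. Triples, templates and function names
# are assumptions for illustration, not Silberztein's system.

FOAF_TEMPLATES = {
    "foaf:name":  "{subj} is named {obj}",
    "foaf:knows": "{subj} knows {obj}",
}

def parse_turtle_line(line):
    """Very naive reader for one 'subject predicate object .' triple."""
    subj, pred, obj = line.rstrip(" .").split(None, 2)
    return subj, pred, obj.strip('"')

def realize(triple):
    """Render one triple as an English sentence, if a template exists."""
    subj, pred, obj = triple
    template = FOAF_TEMPLATES.get(pred)
    if template is None:
        return None  # predicate not covered by the toy grammar
    return template.format(subj=subj.lstrip(":"), obj=obj.lstrip(":")) + "."

turtle = [
    ':Anna foaf:name "Anna Smith" .',
    ':Anna foaf:knows :Ben .',
]
for line in turtle:
    print(realize(parse_turtle_line(line)))
# -> Anna is named Anna Smith.
# -> Anna knows Ben.
```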
This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of NLP, with an emphasis on different evaluation methods and the relationships between them.
Building applied natural language generation systems
1997
In this article, we give an overview of Natural Language Generation (NLG) from an applied system-building perspective. The article includes a discussion of when NLG techniques should be used, suggestions for carrying out requirements analyses, and a description of the basic NLG tasks of content determination, discourse planning, sentence aggregation, lexicalization, referring expression generation, and linguistic realisation. Throughout, the emphasis is on established techniques that can be used to build simple but practical working systems now. We also provide pointers to techniques in the literature that are appropriate for more complicated scenarios.
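Two of the tasks listed above, sentence aggregation and referring expression generation, are the least self-explanatory, so the sketch below shows both on toy "messages". The message format, the aggregation rule and the pronoun choice are assumptions made for this illustration, not Reiter and Dale's actual formalism.

```python
# Toy illustration of sentence aggregation and referring expression
# generation. All structures here are assumptions for the example.

messages = [
    {"subject": "the patient", "verb": "has", "object": "a fever"},
    {"subject": "the patient", "verb": "has", "object": "a headache"},
]

def aggregate(msgs):
    """Merge consecutive messages that share subject and verb."""
    merged = []
    for m in msgs:
        if merged and merged[-1]["subject"] == m["subject"] \
                  and merged[-1]["verb"] == m["verb"]:
            merged[-1]["object"] += " and " + m["object"]
        else:
            merged.append(dict(m))
    return merged

def refer(entity, already_mentioned):
    """Use a pronoun once the entity has already been introduced."""
    return "she" if entity in already_mentioned else entity

def realise(msgs):
    mentioned, sentences = set(), []
    for m in msgs:
        subj = refer(m["subject"], mentioned)
        mentioned.add(m["subject"])
        sentences.append(f"{subj} {m['verb']} {m['object']}.".capitalize())
    return " ".join(sentences)

print(realise(aggregate(messages)))
# -> The patient has a fever and a headache.
```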
INLG 2008 Fifth International Natural Language Generation Conference
2008
We are pleased to introduce the technical program of the Fifth International Natural Language Generation Conference (INLG 2008), the Biennial Meeting of SIGGEN, the ACL Special Interest Group in Natural Language Generation. INLG is the leading international conference on research into natural language generation. It has been held in Sydney (Australia) in 2006, at Brockenhurst (UK) in 2004, in Harriman (New York, USA) in 2002, and in Mitzpe Ramon (Israel) in 2000.
Building natural language generation systems
Natural Language Engineering, 1997
The book is about natural language generation (NLG), a subfield of artificial intelligence and computational linguistics concerned with building computer software systems that can produce meaningful texts in English or other human languages from some underlying non-linguistic representation of information. In the introduction, the field of NLG is briefly characterized from research- and application-oriented perspectives and illustrated by screen shots produced by several systems. Then, conditions for beneficial uses of this technology are elaborated and contrasted with conditions where other techniques are more appropriate. Moreover, methods for determining the intended functionality of a system to be built are discussed. The main sections of the book are devoted to the prototypical architecture of application-oriented NLG systems and their major processing phases: document planning, microplanning and surface realization. Each of these three phases is illustrated by a number of detailed examples, demonstrating the successive refinements of utterance specifications in the course of processing. In the final section, embedding of the natural language processing technology is discussed, featuring typography, combined uses with graphics and hypertext, as well as integration with speech. The methods are illustrated by a large number of examples; the book contains more than 120 figures on its 248 pages. At the end of each section, a number of useful references for further reading are related to the section topics. In the appendix, a table summarizing the 35 systems referred to in the book is given.
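The sketch below is a minimal, assumed-data illustration of how an utterance specification might be successively refined through the three phases the book describes: document planning, microplanning and surface realisation. The dict fields, thresholds and templates are illustrative choices, not the book's notation.

```python
# Three-phase refinement of an utterance specification (illustrative;
# the data structures are assumptions, not the book's notation).

def document_planning(data):
    """Decide what to say and in what order (a flat document plan)."""
    return {"plan": [("report_rainfall", data["month"], data["rain_mm"])]}

def microplanning(doc_plan):
    """Refine each message into a lexicalised sentence specification."""
    specs = []
    for msg, month, mm in doc_plan["plan"]:
        specs.append({
            "template": "{month} was {adj}, with {mm} mm of rain",
            "slots": {"month": month,
                      "adj": "wet" if mm > 100 else "dry",
                      "mm": mm},
        })
    return specs

def surface_realisation(sentence_specs):
    """Turn sentence specifications into final text."""
    out = []
    for spec in sentence_specs:
        s = spec["template"].format(**spec["slots"])
        out.append(s[0].upper() + s[1:] + ".")
    return " ".join(out)

data = {"month": "March", "rain_mm": 120}
print(surface_realisation(microplanning(document_planning(data))))
# -> March was wet, with 120 mm of rain.
```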
Handbook of Natural Language Processing, 2000
We report here on a significant new set of capabilities that we have incorporated into our language generation system MUMBLE. Their impact will be to greatly simplify the work of any text planner that uses MUMBLE as its linguistics component, since MUMBLE can now take on many of the planner's text organization and decision-making problems with markedly less hand-tailoring of algorithms in either component.
A repository of data and evaluation resources for natural language generation
2012
Starting in 2007, the field of natural language generation (NLG) has organised shared-task evaluation events every year, under the Generation Challenges umbrella. In the course of these shared tasks, a wealth of data has been created, along with associated task definitions and evaluation regimes. In other contexts too, sharable NLG data is now being created.