Applying natural language processing techniques to augmentative communication systems
Lecture-38 Parsing Algorithms

In the last lecture, we started parsing, which is also called syntactic processing. Parsing, we said, is probably one of the most well-understood areas of language processing; a variety of algorithms have been designed for obtaining the structure of a sentence. This phase is very important, because from the parse tree one then moves to the stage of semantic processing, where the difficult problems of semantic roles, word disambiguation, name disambiguation, and coreference are solved. However, the first crucial step toward all these challenging tasks is syntactic processing, or parsing. So we would like to take a detailed look at how parsing is done; today's topic is parsing algorithms. Last time, we described top-down parsing in detail; we will mention it briefly here, then move to bottom-up parsing and a very famous algorithm called chart parsing, in both its top-down and bottom-up forms. Then we will discuss what happens when a sentence is ambiguous and multiple parses are possible for it.

We said last time that parsing is indeed required: even though the meaning is more or less understood from the word senses and their arrangement, it is still a critical task to obtain the parse tree of the sentence, which itself resolves a large amount of ambiguity. For example, take the sentence "I saw the boy with a ponytail." Here "with a ponytail" should be attached to "the boy," because it is a qualifier for the boy; when we construct the parse tree, the tree reveals that the whole prepositional phrase "with a ponytail" has this attachment to "the boy." At the stage of syntactic processing, it is possible to find this attachment. We also said that parsing is a critical requirement; we took a look at the one-replacement test.
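As a concrete illustration of the top-down strategy recalled above, here is a minimal recursive-descent parser with backtracking. The grammar, lexicon, and function names are invented for this sketch and are not taken from the lecture; it simply shows how a top-down parser expands nonterminals left to right and backtracks over alternative productions.

```python
# A minimal top-down (recursive-descent) parser with backtracking.
# The toy grammar and lexicon below are illustrative assumptions.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Pro"], ["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Pro": [["i"]],
    "Det": [["the"], ["a"]],
    "N":   [["boy"], ["ponytail"]],
    "V":   [["saw"]],
    "P":   [["with"]],
}

def parse(symbol, words, pos):
    """Yield (tree, next_pos) for every way `symbol` derives words starting at pos."""
    for production in GRAMMAR.get(symbol, []):
        # Terminal production: a single word that is not a nonterminal.
        if len(production) == 1 and production[0] not in GRAMMAR:
            if pos < len(words) and words[pos] == production[0]:
                yield (symbol, words[pos]), pos + 1
            continue
        # Nonterminal production: expand children left to right, backtracking
        # over every combination of child analyses.
        for children, end in expand(production, words, pos):
            yield (symbol,) + tuple(children), end

def expand(symbols, words, pos):
    if not symbols:
        yield [], pos
        return
    for tree, mid in parse(symbols[0], words, pos):
        for rest, end in expand(symbols[1:], words, mid):
            yield [tree] + rest, end

sentence = "i saw the boy with a ponytail".split()
trees = [t for t, end in parse("S", sentence, 0) if end == len(sentence)]
print(len(trees))  # 2: PP attached to the NP "the boy", or to the VP
```

The two complete parses correspond exactly to the attachment ambiguity discussed above: one tree attaches "with a ponytail" inside the object NP, the other attaches it at the VP level.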
So, "I want a white horse; he wants a brown one." This is known as the one-replacement phenomenon in language processing: "one" is an anaphoric reference to "horse." These kinds of phenomena need deep parse trees; we have to know the structure of the sentence in pretty good detail. So parsing is definitely necessary, and we now proceed to the algorithms for parsing, starting with the slides.
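The bottom-up chart-parsing idea previewed above can be sketched with the classic CKY algorithm, which fills a table of constituents over increasingly long spans. The Chomsky-normal-form grammar and lexicon below are illustrative assumptions, not material from the lecture.

```python
from itertools import product

# A minimal CKY chart parser for a toy grammar in Chomsky normal form.
# Rules and lexicon are invented for this sketch.
BINARY = {            # (B, C) -> A means the rule A -> B C
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
    ("P", "NP"): "PP",
    ("NP", "PP"): "NP",
    ("VP", "PP"): "VP",
}
LEXICON = {"i": "NP", "saw": "V", "the": "Det", "a": "Det",
           "boy": "N", "ponytail": "N", "with": "P"}

def cky(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1].add(LEXICON[w])
    for span in range(2, n + 1):          # bottom-up: short spans first
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # try every split point
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in BINARY:
                        chart[i][j].add(BINARY[(b, c)])
    return chart

words = "i saw the boy with a ponytail".split()
print("S" in cky(words)[0][len(words)])  # True: the sentence is accepted
```

Because every span is computed once and shared by all analyses, the chart avoids the re-parsing of subconstituents that a naive backtracking parser performs.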
Speech and Language Processing
The idea of giving computers the ability to process human language is as old as the idea of computers themselves. This book is about the implementation and implications of that exciting idea. We introduce a vibrant interdisciplinary field with many names corresponding to its many facets, names like speech and language processing, human language technology, natural language processing, computational linguistics, and speech recognition and synthesis. The goal of this new field is to get computers to perform useful tasks involving human language, tasks like enabling human-machine communication, improving human-human communication, or simply doing useful processing of text or speech.
Automatic Synthesis of Semantics for Context-free Grammars
We are investigating the mechanical transformation of an unambiguous context-free grammar (CFG) into a definite-clause grammar (DCG) using a finite set of examples, each of which is a pair ⟨s, m⟩, where s is a sentence belonging to the language defined by the CFG and m is a semantic representation (meaning) of s. The resulting DCG would be such that it can be executed (by the interpreter of a logic programming language) to compute the semantics for every sentence of the original CFG. Three important assumptions underlie our approach: (i) the semantic representation language is the simply typed λ-calculus; (ii) the semantic representation of a sentence can be obtained from the semantic representations of its parts (compositionality); and (iii) the structure of the semantic representation determines its meaning (intensionality). The basic technique involves an enumeration of parse trees for sentences of increasing size; for each parse tree, a set of equations over (typed) function variables that represent the meanings of the constituent subtrees is formulated and solved by means of a higher-order unification procedure. The solutions for these function variables serve to augment the original grammar in order to derive the final DCG. A technique called partial execution is used to convert, where possible, the generated higher-order DCG into a first-order DCG, to facilitate efficient bidirectional execution. In the appendix, we provide a detailed illustration of the use of such a system for storing and retrieving information contained in natural language sentences. Based on our experimentation, we conclude that an improved version of this system will facilitate rapid prototyping of natural language front-ends for various applications.
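The compositionality assumption (ii) can be made concrete with a small hand-written sketch. This is not the paper's higher-order unification system: the lexicon, the string-based target representation, and the rule-to-function pairing below are all invented for illustration. It only shows the principle that each grammar rule is paired with a semantic function, so that a sentence's meaning is computed from the meanings of its parts.

```python
# Illustrative sketch of compositional semantics (not the paper's system).
# Word meanings are curried functions; rule meanings apply them in order.
LEXICON = {
    "john":  "john",
    "mary":  "mary",
    "walks": lambda subj: f"walks({subj})",
    "loves": lambda obj: lambda subj: f"loves({subj},{obj})",
}

def meaning_s(np, vp):
    # S -> NP VP : the VP meaning applied to the NP meaning
    return vp(np)

def meaning_vp(v, np):
    # VP -> V NP : the verb meaning applied to the object meaning
    return v(np)

# "john walks"
print(meaning_s(LEXICON["john"], LEXICON["walks"]))                     # walks(john)
# "john loves mary"
print(meaning_s(LEXICON["john"],
                meaning_vp(LEXICON["loves"], LEXICON["mary"])))          # loves(john,mary)
```

In the paper's setting, the semantic functions attached to each rule are not written by hand as above but are solved for, from example pairs ⟨s, m⟩, via higher-order unification.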