Dependency-based semantic analysis of natural-language text
Semantic roles are logical relations, such as AGENT or INSTRUMENT, that hold between events and their participants and circumstances. Several types of applications in natural language processing need to determine these relations automatically, a process referred to as semantic role labeling. This dissertation describes how to construct statistical models for semantic role labeling of English text, and how role semantics is related to surface syntax.

It is generally agreed that the problem of semantic role labeling is closely tied to syntactic analysis. Most previous implementations of semantic role labelers have used constituents as the syntactic input, while dependency representations, in which the syntactic structure is viewed as a graph of labeled word-to-word relations, have received very little attention in comparison. Contrary to previous claims, this work demonstrates empirically that dependency representations can serve as the input for semantic role labelers and achieve comparable results. This is important theoretically, since it makes the syntactic-semantic interface conceptually simpler and more intuitive, but it also has practical significance, since there are languages for which constituent annotation is infeasible.

The dissertation devotes considerable effort to investigating the relation between syntactic representation and semantic role labeling performance. Apart from the main result that dependency-based semantic role labeling rivals its constituent-based counterpart, the empirical experiments support two findings: first, that the dependency-syntactic representation has to be well designed in order to achieve good performance in semantic role labeling; second, that the choice of syntactic representation affects the substages of the semantic role labeling task differently; above all, the role classification task, which relies strongly on lexical features, is shown to benefit from dependency representations.
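To make the dependency view concrete, the graph of labeled word-to-word relations described above can be sketched as follows. The sentence, the dependency labels (SBJ, OBJ, and so on), and the role inventory are illustrative assumptions for this sketch, not an excerpt from the dissertation's data or systems:

```python
# Illustrative sketch: a dependency tree as labeled word-to-word relations,
# with semantic roles attached to the arguments of a predicate.
# Each token is (index, form, head_index, dependency_label); index 0 is the root.
# Sentence: "She cut the bread with a knife"
dependency_tree = [
    (1, "She",   2, "SBJ"),
    (2, "cut",   0, "ROOT"),
    (3, "the",   4, "NMOD"),
    (4, "bread", 2, "OBJ"),
    (5, "with",  2, "ADV"),
    (6, "a",     7, "NMOD"),
    (7, "knife", 5, "PMOD"),
]

# Semantic role labeling links each argument of a predicate to a role label;
# in a dependency setting, an argument is identified by its head word.
semantic_roles = {2: {1: "AGENT", 4: "PATIENT", 7: "INSTRUMENT"}}

def roles_for_predicate(pred_index):
    """Return (argument form, role label) pairs for one predicate token."""
    forms = {i: form for i, form, _, _ in dependency_tree}
    return [(forms[arg], role)
            for arg, role in sorted(semantic_roles[pred_index].items())]

print(roles_for_predicate(2))
# → [('She', 'AGENT'), ('bread', 'PATIENT'), ('knife', 'INSTRUMENT')]
```

The point of the sketch is that the syntactic-semantic interface reduces to relations between individual words, rather than between a predicate and constituent spans.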
The systems presented in this work have been evaluated in two international open evaluations, in both of which they achieved the top result.

My first and foremost thanks go to Pierre Nugues, my supervisor, for introducing me to the topic of natural language processing. Pierre has a rare talent for instilling the qualities that every researcher needs: enthusiasm and independent thinking. I am deeply grateful to Joakim Nivre for making use of my work in the 2007 and 2008 CoNLL Shared Tasks, and for including me in the organizing committee of the 2008 Shared Task. I also wish to thank the other members of that committee, from whose stimulating discussions I learned a lot: Mihai Surdeanu, Lluís Màrquez, and Adam Meyers. I would like to thank the students who carried out their thesis work with us: Anders Berglund, Magnus Danielsson, Jacob Persson, and Dan Thorin. All four theses yielded substantial research results. The Department of Computer Science at LTH has provided a friendly and relaxed research environment.

The Swedish Graduate School of Language Technology (GSLT) funded my research from 2006 to 2008. I was very lucky to be accepted to the graduate school, and their funding started at a critical point; it is very uncertain how this research would have turned out without it. The GSLT is also a very friendly and diverse group of people. During 2003 and 2004, my position was funded by grant number 2002-02380 from the Language Technology program of Vinnova. A significant part of the research described here concerns FrameNet, kindly provided by ICSI Berkeley via an academic license.

My final acknowledgement goes to my family: my father Bo, my mother Barbro, my brother Robert, and to the Saar family: my partner Piret and her parents Henn and Svetlana.