Neta Kenneth - Academia.edu

Related Authors

Lilja Maria Saeboe

Veronika Mitnik

Dmitry Nikolaev

Benoît Sagot

Institut National de Recherche en Informatique et Automatique (INRIA)

Giorgio Satta

Giuseppe G. A. Celano

Kikuo Maekawa

National Institute for Japanese Language and Linguistics

Virach Sornlertlamvanich

Tommaso Petrolito

Uploads

Linguistics by Neta Kenneth

Morphosyntactic predictability of translationese

Linguistics Vanguard

It is often assumed that translated texts are easier to process than original ones. However, it has also been shown that translated texts contain evident traces of source-language morphosyntax, which should presumably make them less predictable and harder to process. We test these competing observations by measuring the morphosyntactic entropies of original and translated texts in several languages and show that there may exist a categorical distinction between translations made from structurally similar languages (which are more predictable than original texts) and those made from structurally divergent languages (which are often non-idiomatic, involve structural transfer, and are therefore more entropic).
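To make the entropy measure concrete, here is a minimal sketch (not the paper's implementation) of one way to estimate morphosyntactic predictability: the Shannon entropy of the distribution of combined POS and morphological-feature tags in a tagged corpus. The tag format and function name below are illustrative assumptions.

```python
# Minimal sketch: Shannon entropy over a corpus-level morphosyntactic tag
# distribution. Tag strings such as "VERB|Tense=Pres" are placeholders,
# not tied to any specific treebank or to the authors' released code.
from collections import Counter
from math import log2

def morphosyntactic_entropy(tag_sequences):
    """Shannon entropy (in bits) of the unigram tag distribution.

    tag_sequences: iterable of sentences, each a list of tag strings.
    """
    counts = Counter(tag for sent in tag_sequences for tag in sent)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Toy comparison: a lower entropy suggests a more predictable tag inventory.
original = [["NOUN|Number=Sing", "VERB|Tense=Pres", "NOUN|Number=Plur"]]
translated = [["NOUN|Number=Sing", "VERB|Tense=Pres", "NOUN|Number=Sing"]]
print(morphosyntactic_entropy(original), morphosyntactic_entropy(translated))
```

Under this reading, a translation from a structurally divergent source would show a higher entropy than comparable original text, while one from a structurally similar source would show a lower entropy.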

Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences

Proceedings of ACL, 2020

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies. We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.
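As a rough illustration of the idea (not the framework released with the paper), one can tally how the Universal Dependencies relation of each source token maps onto the relation of its word-aligned target token; systematic mismatches in that table are candidate divergence patterns. The simplified token triples and 0-based alignment format below are assumptions made for the sketch.

```python
# Minimal sketch: count source-deprel -> target-deprel correspondences
# over a word-aligned pair of UD-parsed sentences. Data structures are
# deliberately simplified: each token is a (form, head_index, deprel) triple.
from collections import Counter

def divergence_patterns(src_tokens, tgt_tokens, alignment):
    """Count how source dependency relations map onto target relations.

    src_tokens / tgt_tokens: lists of (form, head_index, deprel) triples.
    alignment: list of (src_index, tgt_index) pairs, 0-based.
    """
    patterns = Counter()
    for s, t in alignment:
        patterns[(src_tokens[s][2], tgt_tokens[t][2])] += 1
    return patterns

# Toy example: a fully convergent English-French pair. Divergences would
# surface as mismatched label pairs such as ("nsubj", "obl").
en = [("the", 1, "det"), ("dog", 2, "nsubj"), ("sleeps", -1, "root")]
fr = [("le", 1, "det"), ("chien", 2, "nsubj"), ("dort", -1, "root")]
print(divergence_patterns(en, fr, [(0, 0), (1, 1), (2, 2)]))
```

Aggregating such counts over a corpus, and conditioning on richer context than the bare relation label, is the kind of detailed picture of cross-language divergences the abstract describes.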
