Milda Dailidėnaitė - Academia.edu
Papers by Milda Dailidėnaitė
arXiv (Cornell University), Nov 22, 2019
In text processing, deep neural networks mostly use word embeddings as input. Embeddings have to ensure that relations between words are reflected through distances in a high-dimensional numeric space. To compare the quality of different text embeddings, we typically use benchmark datasets. We present a collection of such datasets for the word analogy task in nine languages: Croatian, English, Estonian, Finnish, Latvian, Lithuanian, Russian, Slovenian, and Swedish. We designed the monolingual analogy task to be much more culturally independent and also constructed cross-lingual analogy datasets for the involved languages. We present basic statistics of the created datasets and their initial evaluation using fastText embeddings.
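The word analogy task the abstract describes is conventionally scored by vector arithmetic: given a pair relation "a : b", find the word whose vector is closest to vec(b) - vec(a) + vec(c). A minimal sketch of that scoring (the standard 3CosAdd method) is below, using hypothetical toy 4-dimensional vectors rather than the paper's datasets or real fastText embeddings:

```python
import numpy as np

# Hypothetical toy embedding table (not from the paper's datasets):
# the analogy task asks which word completes "a : b :: c : ?".
emb = {
    "king":  np.array([0.8, 0.9, 0.1, 0.2]),
    "queen": np.array([0.8, 0.1, 0.1, 0.9]),
    "man":   np.array([0.7, 0.9, 0.0, 0.1]),
    "woman": np.array([0.7, 0.1, 0.0, 0.8]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, emb):
    """Return the vocabulary word closest to vec(b) - vec(a) + vec(c),
    excluding the three query words (3CosAdd scoring)."""
    target = emb[b] - emb[a] + emb[c]
    scores = {w: cosine(target, v) for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "king", "woman", emb))  # prints "queen"
```

In a real evaluation against a benchmark dataset, accuracy is the fraction of analogy quadruples for which the top-ranked candidate matches the expected word; libraries such as gensim expose the same scoring through `KeyedVectors.most_similar(positive=..., negative=...)`.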
Eesti ja soome-ugri keeleteaduse ajakiri, Sep 5, 2022
Alongside the imperative proper, Livonian has developed a secondary indirect imperative paradigm referred to as the jussive. Person is the most controversial category concerning imperatives, and the scope of imperative functions has also received considerable attention. This study focuses on the distribution of the person forms of the Livonian jussive and the covariance between function and person. Jussive occurrences from two corpora were analysed for person and function, cross-referenced, and analysed for prototypicality of function. The Livonian jussive is most frequently used in the third person, but all person forms are attested. All forms occur in both prototypical and non-prototypical imperative functions, but first-person forms are used more frequently for non-prototypical functions, while the other forms are used more often for prototypical functions. The results suggest that prototypicality may be determined by both mood and person, meaning that prototypical imperative functions might differ for each person.