
Luca Molteni



Papers by Luca Molteni

Research paper thumbnail of Service registration chatbot: collecting and comparing dialogues from AMT workers and service’s users

Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)

Crowdsourcing is the go-to solution for data collection and annotation in the context of NLP tasks. Nevertheless, crowdsourced data is noisy by nature; the source is often unknown, and additional validation work is performed to guarantee the dataset's quality. In this article, we compare two crowdsourcing sources on a dialogue paraphrasing task revolving around a chatbot service. We observe that workers hired on crowdsourcing platforms produce lexically poorer and less diverse rewrites than service users engaged voluntarily. Notably, on dialogue clarity and optimality, the human-perceived quality of the two paraphrase sources does not differ significantly. Furthermore, for the chatbot service, the combined crowdsourced data is enough to train a transformer-based Natural Language Generation (NLG) system. To enable similar services, we also release tools for collecting data and training the dialogue-act-based, transformer-based NLG module.

