Gabriele Pergola | University of Warwick

Gabriele Pergola

Profile page:
https://warwick.ac.uk/fac/sci/dcs/people/research/u1898418




Papers by Gabriele Pergola

Research paper thumbnail of Neural Topic Model with Reinforcement Learning

EMNLP 2019, 2019

In recent years, advances in neural variational inference have achieved many successes in text processing. Examples include neural topic models, which are typically built upon a variational autoencoder (VAE) with an objective of minimising the error of reconstructing original documents based on the learned latent topic vectors. However, minimising reconstruction errors does not necessarily lead to high-quality topics. In this paper, we borrow the idea of reinforcement learning and incorporate topic coherence measures as reward signals to guide the learning of a VAE-based topic model. Furthermore, our proposed model is able to automatically separate background words from topic words dynamically, thus eliminating the pre-processing step of filtering infrequent and/or highly frequent words typically required for learning traditional topic models. Experimental results on the 20 Newsgroups and the NIPS datasets show superior performance on both perplexity and topic coherence measures compared to state-of-the-art neural topic models.
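The abstract does not spell out the exact reward design, but a standard coherence measure such as NPMI over a topic's top words can be computed from document co-occurrence statistics and used as a scalar reward signal. A minimal sketch, using a toy corpus and hypothetical topic word lists:

```python
import math
from itertools import combinations

def npmi_coherence(top_words, documents, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    top_words: list of words representing one topic.
    documents: list of token sets; co-occurrence is counted per document.
    Returns a score in [-1, 1]; higher means a more coherent topic.
    """
    n_docs = len(documents)

    def p(*words):
        # Fraction of documents containing all of the given words.
        return sum(all(w in d for w in words) for d in documents) / n_docs

    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1, p2, p12 = p(w1), p(w2), p(w1, w2)
        if p12 == 0:
            scores.append(-1.0)  # words never co-occur: minimal coherence
            continue
        pmi = math.log(p12 / (p1 * p2 + eps))
        scores.append(pmi / (-math.log(p12 + eps)))  # normalise PMI to [-1, 1]
    return sum(scores) / len(scores)

# Toy corpus: "space" words co-occur, so that topic should score higher.
docs = [{"nasa", "rocket", "launch"},
        {"rocket", "launch", "orbit"},
        {"nasa", "orbit", "rocket"},
        {"bread", "butter"}]
coherent = npmi_coherence(["nasa", "rocket", "launch"], docs)
incoherent = npmi_coherence(["nasa", "bread", "launch"], docs)
```

In a reinforcement-learning setup along the lines described, such a coherence score for the inferred topics would serve as the reward added to (or traded off against) the VAE reconstruction objective.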

Research paper thumbnail of TDAM: a Topic-Dependent Attention Model for Sentiment Analysis

Information Processing and Management, 2019

We propose a topic-dependent attention model for sentiment classification and topic extraction. Our model assumes that a global topic embedding is shared across documents and employs an attention mechanism to derive local topic embeddings for words and sentences. These are subsequently incorporated in a modified Gated Recurrent Unit (GRU) for sentiment classification and for the extraction of topics bearing different sentiment polarities. Those topics emerge from the words' local topic embeddings learned by the internal attention of the GRU cells in the context of a multi-task learning framework. In this paper, we present the hierarchical architecture, the new GRU unit, and experiments conducted on users' reviews, which demonstrate classification performance on a par with state-of-the-art methodologies for sentiment classification, and topic coherence that outperforms current approaches for supervised topic extraction. In addition, our model is able to extract coherent aspect-sentiment clusters despite using no aspect-level annotations for training.
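The core mechanism described, deriving a local topic embedding for a word by attending over a shared global topic embedding, can be sketched roughly as follows. The dimensions and the softmax attention form are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, dim = 5, 8
# Global topic embedding: one vector per topic, shared across all documents.
global_topics = rng.standard_normal((n_topics, dim))

def local_topic_embedding(word_vec, topics):
    """Attend over the global topic vectors to produce a word-local
    topic embedding: a convex combination of topic vectors weighted
    by the word's affinity to each topic."""
    scores = topics @ word_vec                  # affinity of the word to each topic
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over topics
    return weights, weights @ topics            # attention weights, local embedding

word = rng.standard_normal(dim)
weights, local = local_topic_embedding(word, global_topics)
```

In the model described, such local embeddings would then feed into the modified GRU cell so that the recurrent state is conditioned on topical context rather than on word identity alone.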

Research paper thumbnail of Topical Phrase Extraction from Clinical Reports by Incorporating both Local and Global Context

The 2nd AAAI Workshop on Health Intelligence (AAAI18), 2018

Making sense of words often requires simultaneously examining the surrounding context of a term as well as the global themes characterizing the overall corpus. Several topic models have already exploited word embeddings to recognize local context; however, this information has been only weakly combined with the global context during topic inference. This paper proposes to extract topical phrases by corroborating word embedding information with the global context detected by Latent Semantic Analysis, combining the two by means of the Pólya urn model. To highlight the effectiveness of this combined approach, the model was assessed on clinical reports, a challenging scenario characterized by technical jargon and limited word statistics. Results show it outperforms state-of-the-art approaches in terms of both topic coherence and computational cost.
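The Pólya urn model mentioned here has a simple rich-get-richer dynamic: drawing a word reinforces that word, and in the generalized variant it also reinforces related words, which is how similarity information (e.g. from embeddings or LSA) can promote semantically related terms during sampling. A minimal sketch with a hypothetical, hand-written relatedness table standing in for learned similarities:

```python
import random

def polya_urn_sample(counts, related, n_draws, boost=1.0, seed=0):
    """Generalized Polya urn: each draw reinforces the drawn word and,
    by `boost`, any words listed as related to it."""
    rng = random.Random(seed)
    counts = dict(counts)   # word -> current weight in the urn
    draws = []
    for _ in range(n_draws):
        words = list(counts)
        weights = [counts[w] for w in words]
        w = rng.choices(words, weights=weights)[0]
        draws.append(w)
        counts[w] += 1.0
        for r in related.get(w, []):      # reinforce semantic neighbours too
            counts[r] = counts.get(r, 0.0) + boost
    return draws, counts

# Hypothetical relatedness: "tumor" and "lesion" reinforce each other,
# mimicking what an embedding/LSA similarity would supply.
related = {"tumor": ["lesion"], "lesion": ["tumor"]}
start = {"tumor": 1.0, "lesion": 1.0, "weather": 1.0}
draws, final = polya_urn_sample(start, related, n_draws=200)
```

Under these dynamics, related clinical terms tend to accumulate weight together, which is the effect the paper exploits to keep topical phrases coherent despite sparse word statistics.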
