Efficient Memory-Enhanced Transformer for Long-Document Summarization in Low-Resource Regimes
Related papers
Globalizing BERT-based Transformer Architectures for Long Document Summarization
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Assessing the Efficacy of LSTM, Transformer, and RNN Architectures in Text Summarization
International Conference on Applied Engineering and Natural Sciences
Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes
Proceedings of the AAAI Conference on Artificial Intelligence
LongT5: Efficient Text-To-Text Transformer for Long Sequences
Findings of the Association for Computational Linguistics: NAACL 2022
Long Document Summarization in a Low Resource Setting using Pretrained Language Models
2021
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, 2016
Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents
2021
DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization
arXiv, 2021
Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach
2019
Generating Topic-Oriented Summaries Using Neural Attention
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Encoding Position Improves Recurrent Neural Text Summarizers
2019
Neural Natural Language Processing for Long Texts: A Survey of the State-of-the-Art
arXiv, 2023
Classify or Select: Neural Architectures for Extractive Document Summarization
arXiv, 2016
An Optimized Abstractive Text Summarization Model Using Peephole Convolutional LSTM
Symmetry, 2019
Generating Multi-Sentence Abstractive Summaries of Interleaved Texts
2019
Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Revisiting Transformer-based Models for Long Document Classification
arXiv, 2022
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Abstractive Summarization with Efficient Transformer Based Approach
Dr. Dattatraya Vishnu Kodavade
International Journal on Recent and Innovation Trends in Computing and Communication
Enhancing a Text Summarization System with ELMo
2019
VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization
Proceedings of the 12th International Conference on Natural Language Generation, 2019
NUTS: Network for Unsupervised Telegraphic Summarization
2018
Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
Summarization of COVID-19 news documents deep learning-based using transformer architecture
TELKOMNIKA, 2021
Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling
2020
SHEG: summarization and headline generation of news articles using deep learning
Neural Computing and Applications, 2020
Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network