Globalizing BERT-based Transformer Architectures for Long Document Summarization
Related papers
Efficient Memory-Enhanced Transformer for Long-Document Summarization in Low-Resource Regimes
Sensors, 2023
Assessing the Efficacy of LSTM, Transformer, and RNN Architectures in Text Summarization
International Conference on Applied Engineering and Natural Sciences
LNLF-BERT: Transformer for Long Document Classification with Multiple Attention Levels
IEEE Access, 2024
LongT5: Efficient Text-To-Text Transformer for Long Sequences
Findings of the Association for Computational Linguistics: NAACL 2022
Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents
2021
Long Document Summarization in a Low Resource Setting using Pretrained Language Models
2021
Encoding Position Improves Recurrent Neural Text Summarizers
2019
An Optimized Abstractive Text Summarization Model Using Peephole Convolutional LSTM
Symmetry, 2019
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, 2016
Summarization of COVID-19 news documents deep learning-based using transformer architecture
TELKOMNIKA, 2021
Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling
2020
Performance Study on Extractive Text Summarization Using BERT Models
Information, 2022
Neural Attention Model for Abstractive Text Summarization Using Linguistic Feature Space
IEEE Access
Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
Generating Topic-Oriented Summaries Using Neural Attention
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization
ArXiv, 2021
Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach
2019
Classify or Select: Neural Architectures for Extractive Document Summarization
ArXiv, 2016
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
A Neural Attention Model for Abstractive Sentence Summarization
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015
Enhancing a Text Summarization System with ELMo
2019
Abstractive Summarization with Efficient Transformer Based Approach
Dattatraya Vishnu Kodavade
International Journal on Recent and Innovation Trends in Computing and Communication
VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization
Proceedings of the 12th International Conference on Natural Language Generation, 2019
Interpretable Multi-headed Attention for Abstractive Summarization at Controllable Lengths
Proceedings of the 28th International Conference on Computational Linguistics, 2020
Fine Tuning Transformer Based BERT Model for Generating the Automatic Book Summary
International Journal on Recent and Innovation Trends in Computing and Communication
Deep Learning Approach for Text Summarization
IRJET, 2020
The Effect of Pretraining on Extractive Summarization for Scientific Documents
Proceedings of the Second Workshop on Scholarly Document Processing, 2021