QuesBELM: A BERT based Ensemble Language Model for Natural Questions
Related papers
2022
Large transformer models can substantially improve the Answer Sentence Selection (AS2) task, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: how can we make AS2 models more accurate without significantly increasing their complexity? To address this question, we propose the Multiple Heads Student (MHS) architecture, an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. An MHS model consists of two components: a stack of transformer layers used to encode inputs, and a set of ranking heads, each trained by distilling a different large transformer architecture. Unlike traditional distillation techniques, our approach leverages the individual models in the ensemble as teachers in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of different types of transformer models by using ...
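The head-per-teacher idea can be made concrete in a few lines of PyTorch. The sketch below is illustrative rather than the paper's implementation: the encoder interface, head shape, and MSE distillation target are all assumptions, and `ToyEncoder` is a hypothetical stand-in for the small shared transformer stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadStudent(nn.Module):
    """Shared encoder plus one ranking head per teacher."""

    def __init__(self, encoder: nn.Module, hidden_size: int, num_teachers: int):
        super().__init__()
        self.encoder = encoder  # the small shared transformer stack
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, 1) for _ in range(num_teachers)
        )

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask)  # (batch, hidden)
        # One score per head; each head mimics a different teacher.
        return torch.cat([head(hidden) for head in self.heads], dim=-1)


def distillation_loss(student_scores, teacher_scores):
    # Head i regresses toward teacher i's score (MSE is one simple choice),
    # so the diversity of the ensemble members is preserved.
    return F.mse_loss(student_scores, teacher_scores)


# Toy usage with a stand-in bag-of-embeddings encoder.
class ToyEncoder(nn.Module):
    def __init__(self, vocab=100, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)

    def forward(self, input_ids, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        return (self.emb(input_ids) * mask).sum(1) / mask.sum(1)


student = MultiHeadStudent(ToyEncoder(), hidden_size=16, num_teachers=3)
ids = torch.randint(0, 100, (2, 8))
mask = torch.ones(2, 8, dtype=torch.long)
loss = distillation_loss(student(ids, mask), torch.randn(2, 3))
```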
FAT ALBERT: Finding Answers in Large Texts using Semantic Similarity Attention Layer based on BERT
2020
Machine-based text comprehension has always been a significant research field in natural language processing. Once a full understanding of the text context and semantics is achieved, a deep learning model can be trained to solve a large subset of tasks, e.g. text summarization, classification, and question answering. In this paper we focus on the question answering problem, specifically multiple-choice questions. We develop a model based on BERT, a state-of-the-art transformer network. Moreover, we address BERT's limited ability to handle large text corpora by extracting the highest-influence sentences through a semantic similarity model. Evaluations of our proposed model demonstrate that it outperforms the leading models in the MovieQA challenge, and we are currently ranked first on the leaderboard with a test accuracy of 87.79%. Finally, we discuss the model's shortcomings and suggest possible improvements to overcome these limitations.
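A minimal sketch of the sentence-selection step, using TF-IDF cosine similarity as a stand-in for the paper's semantic similarity model; the `budget` token limit and whitespace token counting are simplifying assumptions, chosen only to keep the selected context under BERT's 512-token limit.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_sentences(question: str, sentences: list[str], budget: int = 450) -> str:
    """Keep the highest-similarity sentences until a rough token budget
    (here: whitespace tokens) fits under BERT's 512-token limit."""
    vec = TfidfVectorizer().fit(sentences + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(sentences))[0]
    picked, used = [], 0
    for idx in sims.argsort()[::-1]:  # most similar first
        n = len(sentences[idx].split())
        if used + n > budget:
            continue
        picked.append(idx)
        used += n
    picked.sort()  # restore document order
    return " ".join(sentences[i] for i in picked)


context = select_sentences(
    "Who directed the film?",
    ["The film premiered in 2010.",
     "It was directed by Jane Doe.",
     "Critics praised it."],
)
```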
Ensemble ALBERT and RoBERTa for Span Prediction in Question Answering
2021
Retrieving relevant answers from heterogeneous data formats, for given questions, is a challenging problem. The process of pinpointing relevant information suitable to answer a question is further compounded in large document collections containing documents of substantial length. This paper presents the models designed as part of our submission to the DialDoc21 Shared Task (Document-grounded Dialogue and Conversational Question Answering) for span prediction in question answering. The proposed models leverage the superior predictive power of pretrained transformer models like RoBERTa, ALBERT, and ELECTRA to identify the most relevant information in an associated passage for the next agent turn. To further enhance performance, the models were fine-tuned on different span-selection question answering datasets such as SQuAD2.0 and the Natural Questions (NQ) corpus. We also explored ensemble techniques for combining multiple models to achieve enhanced performance for the task. O...
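One simple way to ensemble span predictors, assuming all models share a tokenization, is to average their start/end logits and then search for the best valid span. Whether the submission averaged, voted, or weighted its models is not specified, so the sketch below is only illustrative.

```python
import numpy as np


def ensemble_span(start_logits_list, end_logits_list, max_answer_len=30):
    """Average per-model start/end logits, then pick the best valid span.
    All models must produce logits over the same tokenization for this
    averaging to be meaningful."""
    start = np.mean(start_logits_list, axis=0)
    end = np.mean(end_logits_list, axis=0)
    best, best_score = (0, 0), -np.inf
    for i in range(len(start)):
        for j in range(i, min(i + max_answer_len, len(end))):
            if start[i] + end[j] > best_score:
                best_score, best = start[i] + end[j], (i, j)
    return best  # (start_token, end_token) indices in the passage


# e.g. logits from RoBERTa, ALBERT, and ELECTRA heads over the same tokens
span = ensemble_span([np.random.randn(128) for _ in range(3)],
                     [np.random.randn(128) for _ in range(3)])
```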
BERT for Conversational Question Answering Systems Using Semantic Similarity Estimation
Computers, Materials & Continua
Most questions from users lack the context needed to thoroughly understand the problem at hand, making them impossible to answer directly. Semantic Similarity Estimation relates a user's question to the context of previous Conversational Search Systems (CSS) to provide answers without requesting the user's context, which constrains the time needed to produce an answer. The proposed model enables the use of contextual data associated with previous Conversational Searches (CS). On receiving a question in a new conversational search, the model determines which past CS the question most closely refers to. The model then infers the past contextual data related to the given question and predicts an answer based on the inferred context, without engaging in multi-turn interactions or requesting additional data from the user. This model shows the ability to use the limited information in user queries for best-context inference, based on closed-domain CS and Bidirectional Encoder Representations from Transformers (BERT) for textual representations.
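A hedged sketch of the retrieval step: index past questions, match a new question to its nearest past conversational search, and reuse that search's stored context. The paper uses BERT representations; TF-IDF stands in here only to keep the example self-contained.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


class ContextInferrer:
    """Map a new question to the most similar past conversational search
    and reuse its stored context (hypothetical sketch)."""

    def __init__(self, past_questions, past_contexts):
        self.contexts = past_contexts
        self.vec = TfidfVectorizer().fit(past_questions)
        self.matrix = self.vec.transform(past_questions)

    def infer_context(self, question: str) -> str:
        sims = cosine_similarity(self.vec.transform([question]), self.matrix)[0]
        return self.contexts[int(sims.argmax())]


ci = ContextInferrer(
    ["weather in paris", "python list sort"],
    ["User planning a trip to France.", "User debugging a script."],
)
print(ci.infer_context("how do I sort a list in python?"))
```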
Improving the BERT model for long text sequences in question answering domain
International Journal of Advances in Applied Sciences (IJAAS), 2023
The text-based question-answering (QA) system aims to answer natural language questions by querying an external knowledge base. It can be applied to real-world systems such as medical documents, research papers, and crime-related documents. Using this system, users don't have to go through the documents manually; the system understands the knowledge base and finds the answer based on the text and question given to it. The earlier state of the art in natural language processing (NLP) comprised recurrent neural networks (RNNs) and long short-term memory (LSTM) networks; these models are hard to parallelize and poor at retaining contextual relationships across long text inputs. Today, bidirectional encoder representations from transformers (BERT) is the contemporary algorithm for NLP. However, BERT is not capable of handling long text sequences: it can handle only 512 tokens at a time, which makes long contexts difficult. Smooth inverse frequency (SIF) and the BERT model are incorporated together to solve this challenge. BERT trained on the Stanford question answering dataset (SQuAD), combined with the SIF model, demonstrates robustness and effectiveness on long text sequences from different domains. Experimental results suggest that the proposed approach is a promising solution for QA on long text sequences.
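SIF itself has a compact definition: weight each word vector by a/(a + p(w)), average per sentence, and remove the common component along the first singular vector. The sketch below assumes precomputed word vectors and unigram probabilities; the a = 1e-3 default follows the original SIF paper rather than this one.

```python
import numpy as np


def sif_embeddings(sentences, word_vectors, word_probs, a=1e-3):
    """Smooth inverse frequency: weight each word vector by a / (a + p(w)),
    average per sentence, then remove the first principal component."""
    embs = []
    for sent in sentences:
        words = [w for w in sent.lower().split() if w in word_vectors]
        if not words:
            embs.append(np.zeros(next(iter(word_vectors.values())).shape))
            continue
        weights = np.array([a / (a + word_probs.get(w, 0.0)) for w in words])
        vecs = np.array([word_vectors[w] for w in words])
        embs.append(weights @ vecs / len(words))
    X = np.array(embs)
    # Remove the projection onto the first right singular vector
    # (the "common component" shared by all sentences).
    u = np.linalg.svd(X, full_matrices=False)[2][0]
    return X - np.outer(X @ u, u)


vecs = {"paris": np.array([1.0, 0.0]), "france": np.array([0.0, 1.0])}
probs = {"paris": 0.001, "france": 0.002}
emb = sif_embeddings(["paris france", "france"], vecs, probs)
```

These sentence embeddings can then rank passage chunks against the question, so that only the most relevant text is handed to BERT's 512-token window.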
Techniques to Improve Q&A Accuracy with Transformer-based models on Large Complex Documents
ArXiv, 2020
This paper discusses the effectiveness of various text processing techniques, their combinations, and encodings in reducing the complexity and size of a given text corpus. The simplified text corpus is sent to BERT (or similar transformer-based models) for question answering and can produce more relevant responses to user queries. This paper takes a scientific approach to determining the benefits and effectiveness of the various techniques and concludes with a best-fit combination that produces a statistically significant improvement in accuracy.
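As an illustration of the kind of technique being benchmarked (not the paper's winning combination), a pipeline might normalise sentences, drop duplicates, and strip stopwords before the corpus is sent to BERT:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are"}  # illustrative


def simplify_corpus(text: str) -> str:
    """One illustrative combination: split into sentences, drop duplicate
    sentences, and remove stopwords, shrinking the context passed to BERT."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    seen, kept = set(), []
    for s in sentences:
        key = s.strip().lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(" ".join(w for w in s.split() if w.lower() not in STOPWORDS))
    return " ".join(kept)


print(simplify_corpus("The cat sat. The cat sat. A dog barked!"))
```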
Training Question Answering Models From Synthetic Data
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Question and answer generation is a data augmentation method that aims to improve question answering (QA) models given the limited amount of human-labeled data. However, a considerable gap remains between synthetic and human-generated question-answer pairs. This work aims to narrow this gap by taking advantage of large language models and explores several factors such as model size, quality of pretrained models, scale of data synthesized, and algorithmic choices. On the SQUAD1.1 question answering task, we achieve higher accuracy using solely synthetic questions and answers than when using the SQUAD1.1 training set questions alone. Removing access to real Wikipedia data, we synthesize questions and answers from a synthetic corpus generated by an 8.3 billion parameter GPT-2 model. With no access to human supervision and only access to other models, we are able to train state-of-the-art question answering networks on entirely model-generated data that achieve 88.4 Exact Match (EM) and 93.9 F1 score on the SQUAD1.1 dev set. We further apply our methodology to SQUAD2.0 and show a 2.8 absolute gain on EM score compared to prior work using synthetic data.

Consistent with prior work (Alberti et al., 2019a; Dong et al., 2019), we use a 3-step modeling pipeline consisting of unconditional answer extraction from text, question generation, and question filtration. Our approach for training question generators on labeled data uses pretrained GPT-2 decoder models and a next-token-prediction language modeling objective, trained using a concatenation of context, answer, and question tokens. As demonstrated in sections 5.1 and 6.1, pretraining large generative transformer models of up to 8.3B parameters improves the quality of generated questions. Additionally, we propose an overgenerate-and-filter approach to further improve question filtration. The quality of questions produced by this pipeline can be assessed quantitatively by finetuning QA models and evaluating results on the SQUAD dataset. We demonstrate generated questions to be comparable to supervised training with real data. For answerable SQUAD1.1 ...
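The 3-step pipeline is easy to express as a skeleton. In the sketch below, `extract_answers`, `generate_question`, and `qa_model` are placeholders for the paper's trained models, and the token-level F1 roundtrip filter is one common form of the overgenerate-and-filter step:

```python
def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer string."""
    p, g = pred.lower().split(), gold.lower().split()
    if not p or not g:
        return 0.0
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)


def synthesize_qa(contexts, extract_answers, generate_question, qa_model,
                  min_f1=0.8):
    """3-step pipeline: (1) extract candidate answers from text, (2) generate
    a question conditioned on (context, answer), (3) keep the pair only if a
    QA model's roundtrip answer matches the original answer well enough."""
    synthetic = []
    for context in contexts:
        for answer in extract_answers(context):
            question = generate_question(context, answer)
            predicted = qa_model(question, context)
            if f1_score(predicted, answer) >= min_f1:  # filtration step
                synthetic.append({"context": context,
                                  "question": question,
                                  "answer": answer})
    return synthetic
```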
Question Answering Using Hierarchical Attention on Top of BERT Features
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Machine Comprehension (MC) tests the ability of a machine to answer a question about a given passage. It requires modeling complex interactions between the passage and the question. Recently, attention mechanisms have been successfully extended to machine comprehension. In this work, the question and passage are encoded using BERT language embeddings to better capture the respective representations at a semantic level. Then, attention and fusion are conducted horizontally and vertically across layers, at different levels of granularity, between the question and the paragraph. Our experiments were performed on the datasets provided in the MRQA 2019 shared task.
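A hedged sketch of cross-layer attention and fusion: the passage attends to the question at each chosen BERT layer, and the per-layer outputs are fused with a learned softmax gate. The head count, gating scheme, and dimensions are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class HierarchicalAttention(nn.Module):
    """Cross-attention between question and passage encodings taken from
    several BERT layers, fused with a learned per-layer weight."""

    def __init__(self, hidden: int, num_layers: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, question_layers, passage_layers):
        # Both arguments: lists of (batch, seq, hidden) tensors, one per layer.
        fused = []
        for q, p in zip(question_layers, passage_layers):
            attended, _ = self.attn(p, q, q)  # passage attends to question
            fused.append(attended)
        w = torch.softmax(self.layer_weights, dim=0)
        return sum(wi * f for wi, f in zip(w, fused))  # (batch, seq, hidden)


ha = HierarchicalAttention(hidden=16, num_layers=2)
q = [torch.randn(1, 5, 16) for _ in range(2)]
p = [torch.randn(1, 20, 16) for _ in range(2)]
out = ha(q, p)  # (1, 20, 16)
```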
Improving the BERT Model with Proposed Named Entity Recognition Method for Question Answering
6th International Conference on Computer Science and Engineering (UBMK), 2021
Recently, the analysis of textual data has gained importance due to the increase in comments made on web platforms and the need for ready-made answering systems. Accordingly, there are many studies in natural language processing fields such as text summarization and question answering. In this paper, the accuracy of the BERT language model is analyzed for the question answering domain, which allows a posed question to be answered automatically. Using SQuAD, one of the reading comprehension datasets, the questions that the BERT model cannot answer are investigated with the proposed Named Entity Recognition method. The accuracy of BERT models used with the proposed Named Entity Recognition method increases by between 1.7% and 2.7%. The analysis shows that the BERT model does not exploit the Named Entity Recognition technique sufficiently.
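The proposed method is described only at a high level, but the fallback logic might look like the sketch below, where `bert_qa` and `ner` are hypothetical callables and the question-word-to-entity-type map is an assumption:

```python
QUESTION_TYPES = {"who": "PERSON", "where": "GPE", "when": "DATE"}  # illustrative


def answer_with_ner_fallback(question, context, bert_qa, ner, threshold=0.5):
    """When the BERT QA confidence is low, fall back to the first named
    entity whose type matches the question word.

    `bert_qa(question, context)` returns (answer, score);
    `ner(context)` returns (text, label) pairs."""
    answer, score = bert_qa(question, context)
    if score >= threshold:
        return answer
    wanted = QUESTION_TYPES.get(question.lower().split()[0])
    for text, label in ner(context):
        if label == wanted:
            return text
    return answer  # no better candidate; keep BERT's guess
```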
CLEF 2009 Question Answering Experiments at Tokyo Institute of Technology
2015
In this paper we describe the experiments carried out at Tokyo Institute of Technology for the CLEF 2009 Question Answering on Speech Transcriptions (QAST) task, where we participated in the English track. We apply a non-linguistic, data-driven approach to Question Answering (QA). Relevant sentences are first retrieved from the supplied corpus, using a language model based sentence retrieval module. Our probabilistic answer extraction module then pinpoints exact answers in these sentences. In this year's QAST task the question set contains both factoid and non-factoid questions, where the non-factoid questions ask for definitions of given named entities. We do not make any adjustments to our factoid QA system to account for non-factoid questions. Moreover, we are presented with the challenge of searching for the right answer in a relatively small corpus. Our system is built to take advantage of redundant information in large corpora; however, in this task such redundancy is not ava...
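"Language model based sentence retrieval" usually means query-likelihood scoring with smoothing; the Dirichlet-smoothed sketch below is that standard form, though the paper's exact smoothing may differ:

```python
import math
from collections import Counter


def lm_sentence_scores(query: str, sentences: list[str], mu: float = 100.0):
    """Query-likelihood sentence retrieval with Dirichlet smoothing.
    A higher score means the sentence is more likely to contain the answer."""
    corpus = Counter(w for s in sentences for w in s.lower().split())
    corpus_len = sum(corpus.values())
    scores = []
    for s in sentences:
        words = s.lower().split()
        counts, n = Counter(words), len(words)
        score = 0.0
        for q in query.lower().split():
            p_corpus = corpus.get(q, 0) / corpus_len
            # Smoothed probability; 1e-12 floor avoids log(0) for OOV terms.
            score += math.log((counts.get(q, 0) + mu * p_corpus) / (n + mu) + 1e-12)
        scores.append(score)
    return scores


scores = lm_sentence_scores(
    "capital of France",
    ["Paris is the capital of France.", "Berlin is in Germany."],
)
```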