Sheetal Shalini - Academia.edu

Sheetal Shalini

Papers by Sheetal Shalini

Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment

Proceedings of the 18th BioNLP Workshop and Shared Task, 2019

Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have performed well on many downstream tasks simply by fine-tuning on domain-specific datasets (similar to transfer learning). However, using such powerful models on non-trivial tasks, such as ranking and large-document classification, remains a challenge due to the input-size limitations of parallel architectures and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's Rho and Mean Reciprocal Rank of 0.338 and 0.9622, respectively, on the ACL-BioNLP workshop MEDIQA question-answering shared task.

* Equal contribution, randomly sorted. Karan and Shefali took ownership of the NLI module while Sheetal and Prashant worked on the RQE module. Hemant researched and implemented the question-answering system, including the baseline and multi-task learning. Sheetal and Hemant worked on scraping data from icliniq. Karan and Prashant helped with integrating the NLI and RQE modules, respectively, into the multi-task system.
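The core recipe the abstract describes, a pre-trained transformer used as a deep feature extractor whose pooled output scores candidate answers for re-ranking, can be sketched as follows. This is a minimal illustration, not the authors' code: the encoder choice (bert-base-uncased), the linear scoring head, and the rank_answers helper are all assumptions, and the 512-token cap reflects the input-size limitation the abstract mentions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical encoder choice; the paper uses fine-tuned BERT/MT-DNN variants.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # used here as a frozen deep feature extractor

# Assumed scoring head: maps the [CLS] feature to a scalar relevance score.
score_head = torch.nn.Linear(encoder.config.hidden_size, 1)

def rank_answers(question: str, answers: list[str]) -> list[str]:
    """Score each (question, answer) pair and return answers best-first."""
    scores = []
    with torch.no_grad():
        for answer in answers:
            # 512 tokens is the input-size limit the abstract alludes to.
            inputs = tokenizer(question, answer, truncation=True,
                               max_length=512, return_tensors="pt")
            cls_vec = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
            scores.append(score_head(cls_vec).item())
    ranked = sorted(zip(scores, answers), key=lambda p: p[0], reverse=True)
    return [answer for _, answer in ranked]
```

In the paper's full system this scoring happens inside a multi-task setup alongside NLI and RQE objectives; the sketch shows only the single-task re-ranking path with an untrained head.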
