CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE

UoR at SemEval-2020 Task 4: Pre-trained Sentence Transformer Models for Commonsense Validation and Explanation

Bhuvana Dhruva

Proceedings of the Fourteenth Workshop on Semantic Evaluation

CS-NLP Team at SemEval-2020 Task 4: Evaluation of State-of-the-art NLP Deep Learning Architectures on Commonsense Reasoning Task

Sirwe Saeedi

Proceedings of the Fourteenth Workshop on Semantic Evaluation

Commonsense Statements Identification and Explanation with Transformer-based Encoders

Sonia-Teodora Cibu

Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, 2020

KARNA at COIN Shared Task 1: Bidirectional Encoder Representations from Transformers with relational knowledge for machine comprehension with common sense

Yash Jain

Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing

SemEval-2020 Task 4: Commonsense Validation and Explanation

Shuailong Liang

2020

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks

Raphael Tang

arXiv (Cornell University), 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Jacob Devlin

Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

Kentaro Inui

2021

Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis

Hanna Abi Akl

2020

Mxgra at SemEval-2020 Task 4: Common Sense Making with Next Token Prediction

Heba Ahmed

Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020

Evaluating Deep Learning Techniques for Natural Language Inference

Petros Eleftheriadis

Applied Sciences

A logical-based corpus for cross-lingual evaluation

Roberto Hirata Jr

Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)

Condition Aware and Revise Transformer for Question Answering

Haoming Zhong

Proceedings of The Web Conference 2020, 2020

AbductionRules: Training Transformers to Explain Unexpected Inputs

Joshua Bensemann

Findings of the Association for Computational Linguistics: ACL 2022

A large annotated corpus for learning natural language inference

Christopher D Manning

Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015

BeaSku at CheckThat! 2021: Fine-Tuning Sentence BERT with Triplet Loss and Limited Data

Preslav Nakov

2021

Stress Test Evaluation of Transformer-based Models in Natural Language Understanding Tasks

Andres Carvallo

arXiv (Cornell University), 2020

To BERT or Not to BERT: Dealing with Possible BERT Failures in an Entailment Task

Luisa Coheur

Communications in computer and information science, 2020

Paragraph-based Transformer Pre-training for Multi-Sentence Inference

Luca Di Liello

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference

Adina Williams

Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Deep learning for conflicting statements detection in text

Mayukh Nair

2018

Multitask Learning of Negation and Speculation using Transformers

Aditya Khandelwal

Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, 2020

Block-Skim: Efficient Question Answering for Transformer

Jingwen Leng

arXiv (Cornell University), 2021

Structure-aware Sentence Encoder in Bert-Based Siamese Network

Julie Weeds

2021

Rethinking of BERT Sentence Embedding for Text Classification

Mona Farouk

Research Square, 2024

ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples

Changki Lee

Proceedings of the 13th International Workshop on Semantic Evaluation

Adversarial Transformer Language Models for Contextual Commonsense Inference

Henry Lieberman

arXiv (Cornell University), 2023

IITP at MEDIQA 2019: Systems Report for Natural Language Inference, Question Entailment and Question Answering

Tanik Saikh

Proceedings of the 18th BioNLP Workshop and Shared Task

ConjNLI: Natural Language Inference Over Conjunctive Sentences

Swarnadeep Saha

Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020

Recursive Neural Networks Can Learn Logical Semantics

Christopher D Manning

Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, 2015

DL4NLP 2019 Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing

Sara Stymne

2019

The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations

Adina Williams

Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP

IISERB Brains at SemEval-2022 Task 6: A Deep-learning Framework to Identify Intended Sarcasm in English

Tanuj Singh Shekhawat

Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
