Strongly Generalizable Question Answering Dataset
What is GrailQA?
Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) over Freebase, with 64,331 questions annotated with both answers and corresponding logical forms in different syntaxes (i.e., SPARQL, S-expression, etc.). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
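For orientation, the snippet below sketches what a single annotated question might look like; the field names and values are illustrative assumptions based on the description above, not the exact released schema.

```python
# Illustrative sketch of a GrailQA-style entry; field names and values are
# assumptions for illustration -- consult the released files for the real schema.
example = {
    "qid": 100000,
    "question": "which rocket engine uses kerosene as its fuel?",
    "answer": [{"answer_type": "Entity", "answer_argument": "m.0abc123"}],
    "function": "none",  # e.g., none, count, superlative, comparative
    "s_expression": "(AND spaceflight.rocket_engine "
                    "(JOIN spaceflight.rocket_engine.fuel m.0xyz789))",
    "sparql_query": "PREFIX ns: <http://rdf.freebase.com/ns/> SELECT DISTINCT ?x WHERE { ... }",
    "level": "zero-shot",  # i.i.d., compositional, or zero-shot
}
```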
Explore GrailQA | GrailQA paper (Gu et al. '20) | Code
Why GrailQA?
GrailQA is by far the largest crowdsourced KBQA dataset, with questions of high diversity (questions in GrailQA can involve up to 4 relations and optionally a function such as counting, superlatives, or comparatives). It also has the highest coverage of Freebase, spanning 3,720 relations and 86 domains. Last but not least, our meticulous data split allows GrailQA to test not only i.i.d. generalization, but also compositional generalization and zero-shot generalization, which are critical for practical KBQA systems.
News
- _01/24/2021_ We provide instructions on Freebase setup.
- _01/22/2021_ We have updated our baseline performance based on a new entity linker with a recall of around 0.77 on the dev set, compared with the previous recall of around 0.46. More details about entity linking can be found in the updated paper.
- _01/18/2021_ We will make a major update to our baseline model results with a new entity linker this week. The numbers will be higher. Please stay tuned!
- _11/30/2020_ We fixed some minor errors in the sparql_queries provided in our dataset.
Getting Started
We've built a few resources to help you get started with the dataset.
Download a copy of the dataset (distributed under the CC BY-SA 4.0 license):
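As a quick sanity check after downloading, something like the following can be used to inspect a split; the file name and the "level" field are assumptions for illustration, so adjust them to the files you actually downloaded.

```python
# Minimal sketch for inspecting a downloaded split; the file name and the
# "level" field are assumptions -- adjust to the files you actually downloaded.
import json
from collections import Counter

with open("grailqa_v1.0_dev.json") as f:
    examples = json.load(f)

print(f"{len(examples)} questions")
print(Counter(ex.get("level", "unknown") for ex in examples))  # generalization levels
```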
To work with our dataset, we recommend setting up a Virtuoso server for Freebase (feel free to choose your own way to index Freebase). You can find both a clean version of the Freebase dump and instructions on setting up the server via:
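Once the Virtuoso server is running, queries can be issued against its SPARQL endpoint. The sketch below uses SPARQLWrapper and assumes the default local endpoint (port 8890) and an example Freebase MID; adjust both to your own setup.

```python
# Minimal sketch of querying a local Virtuoso server loaded with Freebase.
# The endpoint URL and the example MID are assumptions; adjust to your setup.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")  # assumed default Virtuoso endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX ns: <http://rdf.freebase.com/ns/>
SELECT ?name WHERE {
  ns:m.0d05w3 ns:type.object.name ?name .
  FILTER (lang(?name) = "en")
}
""")
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["name"]["value"])
```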
To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. Evaluating semantic-level exact match also depends on several preprocessed ontology files of Freebase. You can find all of them here. To run the evaluation, use `python evaluate.py <path_to_dev> <path_to_predictions> --fb_roles <path_to_fb_roles> --fb_types <path_to_fb_types> --reverse_properties <path_to_reverse_properties>`.
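For reference, here is a hedged sketch of how a prediction file might be written before running the command above; the exact field names are assumptions, so follow the released sample prediction file for the authoritative format.

```python
# Hedged sketch of a prediction file with one JSON object per line; the field
# names ("qid", "logical_form", "answer") are assumptions -- follow the sample
# prediction file released with the evaluation script.
import json

predictions = [
    {
        "qid": "100000",               # question id from the dev file
        "logical_form": "(AND ... )",  # predicted S-expression
        "answer": ["m.0abc123"],       # predicted answer MIDs or literals
    },
]

with open("predictions.txt", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```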
Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the labels of the test set to the public. Here's a tutorial walking you through the official evaluation of your model:
Have Questions?
Send an email to gu.826@osu.edu, or create an issue on GitHub.
Acknowledgement
We thank Pranav Rajpurkar and Robin Jia for giving us permission to build this website based on SQuAD.