		Update README.md
    	
README.md CHANGED
    
## Dataset Structure

We release two versions of the dataset: DRUID and DRUID+. DRUID is a high-quality subset of DRUID+ (the samples with top Cohere reranker scores) that has been manually annotated for evidence relevance and stance. The dataset contains the following columns:

- id: A unique identifier for each dataset sample. It also indicates the claim source.
- claim_id: A unique identifier for each claim in the dataset. One claim may correspond to multiple samples, each pairing the claim with a different evidence piece retrieved from a different webpage.
- ...
- relevant: Whether the evidence is relevant with respect to the given claim. Manually annotated for the DRUID samples.
- evidence_stance: The stance of the evidence, i.e. whether it supports the claim, refutes it, or is insufficient. Manually annotated for the DRUID samples.

Samples based on claims from 'borderlines' follow a slightly different structure, as the "claims" in this case are derived from the [borderlines](https://github.com/manestay/borderlines) dataset. They therefore have no corresponding claimant, claim source, fact-check verdict, etc.
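To make the column layout concrete, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repository id and split name are placeholders, not taken from this page; substitute the actual id of this repository.

```python
from collections import defaultdict

from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub id.
REPO_ID = "your-org/druid"
# The split name is an assumption; check the dataset viewer for the real splits.
druid = load_dataset(REPO_ID, split="train")

# Inspect the columns described above.
print(druid.column_names)  # e.g. ['id', 'claim_id', ..., 'relevant', 'evidence_stance']

# One claim may map to several samples, one per retrieved evidence piece,
# so group rows by claim_id to see all evidence collected for a single claim.
by_claim = defaultdict(list)
for row in druid:
    by_claim[row["claim_id"]].append(row)

claim_id, rows = next(iter(by_claim.items()))
for row in rows:
    print(claim_id, row["relevant"], row["evidence_stance"])
```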
## Dataset Creation

### Claim Collection

We sample claims verified by fact-checkers using [Google's Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest). We only sample claims in English. The claims are collected from 7 diverse fact-checking sources, spanning topics and regions such as science, politics, Northern Ireland, Sri Lanka, the US, India, and France. All claims have been assessed by human fact-checkers. We also collect claims based on the [borderlines](https://github.com/manestay/borderlines) dataset, for which we expect more frequent conflicts between contexts.
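Purely as an illustration of this step (not the collection code used for the dataset), here is a minimal sketch of querying the Fact Check Tools API's `claims:search` endpoint with `requests`. The search query and page size are arbitrary examples, and an API key from the Google Cloud console is required.

```python
import requests

API_KEY = "YOUR_API_KEY"  # obtain from the Google Cloud console
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

# Search English-language claims reviewed by fact-checkers.
params = {
    "query": "climate",   # arbitrary example query
    "languageCode": "en",
    "pageSize": 20,
    "key": API_KEY,
}
resp = requests.get(ENDPOINT, params=params)
resp.raise_for_status()

# Each returned claim carries the claim text plus one or more fact-checker
# reviews, including the human-assigned verdict (textualRating).
for claim in resp.json().get("claims", []):
    reviews = claim.get("claimReview", [])
    verdict = reviews[0].get("textualRating", "n/a") if reviews else "n/a"
    print(claim.get("text"), "->", verdict)
```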
### Evidence Collection
