lhoestq (HF Staff) committed
Commit edc3764 · verified · 1 Parent(s): d300241

Add 'sports_science' config data files
README.md CHANGED
@@ -386,6 +386,14 @@ configs:
     path: professional_tour_guide/val-*
   - split: dev
     path: professional_tour_guide/dev-*
+- config_name: sports_science
+  data_files:
+  - split: test
+    path: sports_science/test-*
+  - split: val
+    path: sports_science/val-*
+  - split: dev
+    path: sports_science/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -1797,6 +1805,36 @@ dataset_info:
     num_examples: 5
   download_size: 51538
   dataset_size: 47504
+- config_name: sports_science
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 32527
+    num_examples: 180
+  - name: val
+    num_bytes: 3493
+    num_examples: 19
+  - name: dev
+    num_bytes: 4182
+    num_examples: 5
+  download_size: 44846
+  dataset_size: 40202
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
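
Once this config is merged, the new subject can be loaded by name with the 🤗 `datasets` library. A minimal sketch, assuming the repository id is `ceval/ceval-exam` (the id is not shown in this diff):

```python
# Sketch: load the newly added "sports_science" config with 🤗 Datasets.
# Assumption: the dataset repository id is "ceval/ceval-exam"; adjust if different.
from datasets import load_dataset

ds = load_dataset("ceval/ceval-exam", "sports_science")

# The config declares three splits: test (180 rows), val (19 rows), dev (5 rows).
print(ds)
print(ds["dev"][0])  # columns: id, question, A, B, C, D, answer, explanation
```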
sports_science/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68cd9a9efbc24cc91b3f17662d28eff79456aeaf701213bff818c88903de4ce8
+size 10792
sports_science/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e84f54e74f39560b795b496ce88fafca0fd7cc2378056c30dac087dca46d5318
+size 27023
sports_science/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c7ef158b24787ce1e10465571c485869f4fbc3f700f7758ca3a4f403f204a4a
+size 7031
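
The three files above are Git LFS pointers rather than the Parquet payloads themselves; each records only the SHA-256 (`oid`) and byte size of the real object. A minimal sketch for checking a downloaded file against its pointer, using the values from the `val` pointer above (the local file path is illustrative):

```python
# Sketch: verify a downloaded Parquet file against the oid/size in its LFS pointer.
# Expected values are copied from sports_science/val-00000-of-00001.parquet's pointer.
import hashlib

EXPECTED_OID = "0c7ef158b24787ce1e10465571c485869f4fbc3f700f7758ca3a4f403f204a4a"
EXPECTED_SIZE = 7031

with open("sports_science/val-00000-of-00001.parquet", "rb") as f:  # local copy of the real object
    data = f.read()

assert len(data) == EXPECTED_SIZE, "size mismatch"
assert hashlib.sha256(data).hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("pointer matches file contents")
```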