lhoestq (HF Staff) committed
Commit 20b8596 · verified · 1 Parent(s): b3a4e16

Add 'education_science' config data files

README.md CHANGED
@@ -130,6 +130,14 @@ configs:
     path: discrete_mathematics/val-*
   - split: dev
     path: discrete_mathematics/dev-*
+- config_name: education_science
+  data_files:
+  - split: test
+    path: education_science/test-*
+  - split: val
+    path: education_science/val-*
+  - split: dev
+    path: education_science/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -581,6 +589,36 @@ dataset_info:
       num_examples: 5
     download_size: 42941
     dataset_size: 41471
+- config_name: education_science
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 55753
+    num_examples: 270
+  - name: val
+    num_bytes: 5519
+    num_examples: 29
+  - name: dev
+    num_bytes: 3093
+    num_examples: 5
+  download_size: 60878
+  dataset_size: 64365
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
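
Once this config is merged, the new subject can be loaded by name with the `datasets` library. A minimal sketch, assuming the dataset lives at the repo id `ceval/ceval-exam` (the repo id is not stated in this commit):

```python
from datasets import load_dataset

# Load the newly added 'education_science' config.
# NOTE: the repo id below is an assumption; substitute the actual dataset repository.
ds = load_dataset("ceval/ceval-exam", "education_science")

print(ds)  # per the dataset_info above: test (270), val (29), dev (5) examples
print(ds["val"][0]["question"])  # each row has id, question, A-D, answer, explanation
```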
education_science/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1907cdc9405b013e4e9d59bd27fbd888b0e37267ac25c66befeec17d0a050554
+size 9828
education_science/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:320e5352c443d9269ce89caf0d8ddb624a20789917baf8a1e1aabb0e62c0393f
+size 42261
education_science/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f3a48fc3b60c03e44b2f32b1fdb214967e5112de3ba3631c9e3df24d1e77fe6
+size 8789
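
The three files above are Git LFS pointer files: the parquet payloads themselves are stored in LFS, and each pointer records the payload's SHA-256 (`oid`) and byte size. A minimal sketch of checking a downloaded file against its pointer, using the dev split's values from this commit (the local path is hypothetical):

```python
import hashlib
import os

# Hypothetical local copy of the file referenced by the dev-split pointer above.
path = "education_science/dev-00000-of-00001.parquet"

# Recompute the SHA-256 of the payload and compare with the pointer's oid and size.
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == "1907cdc9405b013e4e9d59bd27fbd888b0e37267ac25c66befeec17d0a050554")
print(os.path.getsize(path) == 9828)
```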