lhoestq (HF Staff) committed
Commit 78678a5 · verified · 1 parent: e897e7b

Add 'basic_medicine' config data files

README.md CHANGED
@@ -34,6 +34,14 @@ configs:
     path: art_studies/val-*
   - split: dev
     path: art_studies/dev-*
+- config_name: basic_medicine
+  data_files:
+  - split: test
+    path: basic_medicine/test-*
+  - split: val
+    path: basic_medicine/val-*
+  - split: dev
+    path: basic_medicine/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -125,6 +133,36 @@ dataset_info:
     num_examples: 5
   download_size: 46524
   dataset_size: 47247
+- config_name: basic_medicine
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 28820
+    num_examples: 175
+  - name: val
+    num_bytes: 2627
+    num_examples: 19
+  - name: dev
+    num_bytes: 1825
+    num_examples: 5
+  download_size: 37360
+  dataset_size: 33272
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
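
With the `basic_medicine` entry added to the card's `configs` section, the new split files can be loaded by config name. A minimal sketch using the Hugging Face `datasets` library, assuming the dataset repo id is `ceval/ceval-exam` (the repository is not named in this diff):

```python
from datasets import load_dataset

# Load the newly declared config by name; the repo id "ceval/ceval-exam"
# is an assumption, as this commit does not state the repository id.
ds = load_dataset("ceval/ceval-exam", "basic_medicine")

# Splits declared in the card above: test (175 rows), val (19 rows), dev (5 rows).
print(ds)
print(ds["dev"][0])  # columns: id, question, A, B, C, D, answer, explanation
```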
basic_medicine/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab699967a442f895b6a6381b192e5094f870c6d29d77947c1d8975c8e26fc8de
+size 7105
basic_medicine/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7be18eb211ebe9cc4d9e3255412bfab3125217de1f5027bef35914324c45cc6
+size 24148
basic_medicine/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41395a18f51c704a7233046aca1961a999f784f05576de180bea58a5b2abbec0
+size 6107
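
The parquet files themselves are stored via Git LFS, so the diff only records a pointer for each one: the spec version, a sha256 oid, and the byte size. A small sketch of how a downloaded object could be checked against its pointer; the local path in the example is illustrative:

```python
import hashlib

def matches_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file at `path` has the sha256 and size recorded in its LFS pointer."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size

# oid and size copied from the dev split pointer above; the path is hypothetical.
print(matches_lfs_pointer(
    "basic_medicine/dev-00000-of-00001.parquet",
    "ab699967a442f895b6a6381b192e5094f870c6d29d77947c1d8975c8e26fc8de",
    7105,
))
```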