lhoestq (HF Staff) committed · verified · commit 47e9b6d · 1 parent: d4cfe28

Add 'clinical_medicine' config data files
README.md CHANGED
@@ -66,6 +66,14 @@ configs:
     path: civil_servant/val-*
   - split: dev
     path: civil_servant/dev-*
+- config_name: clinical_medicine
+  data_files:
+  - split: test
+    path: clinical_medicine/test-*
+  - split: val
+    path: clinical_medicine/val-*
+  - split: dev
+    path: clinical_medicine/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -277,6 +285,36 @@ dataset_info:
     num_examples: 5
   download_size: 179936
   dataset_size: 207353
+- config_name: clinical_medicine
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 42161
+    num_examples: 200
+  - name: val
+    num_bytes: 4167
+    num_examples: 22
+  - name: dev
+    num_bytes: 1951
+    num_examples: 5
+  download_size: 48689
+  dataset_size: 48279
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
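As a quick sanity check on the new `clinical_medicine` metadata, the per-split `num_bytes` values should sum to `dataset_size`, and each committed parquet shard should match the glob declared under `data_files`. A minimal sketch in plain Python, with the numbers and paths copied from the diff (this is an illustrative check, not part of the commit):

```python
from fnmatch import fnmatch

# Per-split sizes and example counts from the new clinical_medicine entry.
splits = {
    "test": {"num_bytes": 42161, "num_examples": 200},
    "val": {"num_bytes": 4167, "num_examples": 22},
    "dev": {"num_bytes": 1951, "num_examples": 5},
}

# dataset_size is the sum of the uncompressed split sizes.
dataset_size = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

# Each committed shard must match the glob declared under data_files.
shards = [
    ("clinical_medicine/test-00000-of-00001.parquet", "clinical_medicine/test-*"),
    ("clinical_medicine/val-00000-of-00001.parquet", "clinical_medicine/val-*"),
    ("clinical_medicine/dev-00000-of-00001.parquet", "clinical_medicine/dev-*"),
]
all_match = all(fnmatch(path, pattern) for path, pattern in shards)

print(dataset_size, total_examples, all_match)  # 48279 227 True
```

Note that `download_size` (48689) is the size of the compressed parquet files on disk, so it need not equal `dataset_size` (48279), which measures the in-memory Arrow data.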
clinical_medicine/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aef59d638d0dbacead75c42a579f9522e0a89d03caf734950a5c2c0938b4945f
+size 6866
clinical_medicine/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:954dedfbfd3d4acfccceb11a12d3887623146e21a27d915fdd5566bd7f3e72b3
+size 34274
clinical_medicine/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8d691d83dc21f5e4de2356dcbef544dec7196c43af164e4d2b497c4eae15bb6
+size 7549
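The three `.parquet` files above are committed as Git LFS pointer files rather than raw data: each is a short text file of `key value` lines (`version`, `oid`, `size`) that tells LFS which blob to fetch. A small parser sketch, using the dev shard's pointer text from the diff above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:aef59d638d0dbacead75c42a579f9522e0a89d03caf734950a5c2c0938b4945f
size 6866
"""

fields = parse_lfs_pointer(pointer)
print(int(fields["size"]))  # 6866
```

The `size` field is the byte size of the real file, so the 6866 here matches the dev parquet shard, not the pointer file itself.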