lhoestq (HF Staff) committed · verified
Commit 19cd970 · 1 Parent(s): 70c3fa4

Add 'law' config data files

README.md CHANGED
@@ -234,6 +234,14 @@ configs:
     path: ideological_and_moral_cultivation/val-*
   - split: dev
     path: ideological_and_moral_cultivation/dev-*
+- config_name: law
+  data_files:
+  - split: test
+    path: law/test-*
+  - split: val
+    path: law/val-*
+  - split: dev
+    path: law/dev-*
 dataset_info:
 - config_name: accountant
   features:
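The `data_files` globs in the added `law` config are what tell the `datasets` loader which parquet shards belong to each split. A minimal sketch of that pattern-to-split resolution, using only the standard library (the real loader does more, e.g. remote file listing and URL resolution):

```python
from fnmatch import fnmatch

# Split patterns from the 'law' config added in this commit.
data_files = {
    "test": "law/test-*",
    "val": "law/val-*",
    "dev": "law/dev-*",
}

# Parquet shards added to the repository in this commit.
repo_files = [
    "law/dev-00000-of-00001.parquet",
    "law/test-00000-of-00001.parquet",
    "law/val-00000-of-00001.parquet",
]

# Resolve each split's glob against the repo file listing.
resolved = {
    split: [f for f in repo_files if fnmatch(f, pattern)]
    for split, pattern in data_files.items()
}
print(resolved["test"])  # → ['law/test-00000-of-00001.parquet']
```

Once merged, the new config becomes loadable by name, e.g. `load_dataset("ceval/ceval-exam", "law")` (repo id assumed from the C-Eval project page).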
 
@@ -1075,6 +1083,36 @@ dataset_info:
     num_examples: 5
   download_size: 41532
   dataset_size: 39852
+- config_name: law
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 79782
+    num_examples: 221
+  - name: val
+    num_bytes: 8119
+    num_examples: 24
+  - name: dev
+    num_bytes: 4142
+    num_examples: 5
+  download_size: 83562
+  dataset_size: 92043
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
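The `dataset_info` bookkeeping for the new config is internally consistent: `dataset_size` is the sum of the per-split `num_bytes`. A quick check, with values copied from the diff (the example-count total is computed here for illustration only):

```python
# Per-split num_bytes and num_examples for the 'law' config,
# taken from the dataset_info block in this commit.
split_bytes = {"test": 79782, "val": 8119, "dev": 4142}
split_examples = {"test": 221, "val": 24, "dev": 5}

dataset_size = sum(split_bytes.values())
total_examples = sum(split_examples.values())
print(dataset_size, total_examples)  # → 92043 250
```

Note that `download_size` (83562) is smaller than `dataset_size` (92043) because the parquet shards are compressed on disk.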
law/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa946c425664b8bce7ead5f23ec903a70abffaba46bb89e140ed61e03ee748fe
+size 10760
law/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4bcba44a15c11c791f62a7534d0f9eb4c492cb06a5eeed61800f0907cc6050b
+size 60676
law/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc940e6ca738861c9204507f613feef75447d909c70996b94c8f61624c4b0d6d
+size 12126
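The three `ADDED` files above are Git LFS pointer stubs rather than the parquet bytes themselves: each records the LFS spec version, a `sha256` object id, and the size of the real file. A small sketch of parsing that format (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of law/dev-00000-of-00001.parquet from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:aa946c425664b8bce7ead5f23ec903a70abffaba46bb89e140ed61e03ee748fe
size 10760
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 10760
```

The `size` field here (10760) is the size of the actual parquet shard that LFS stores out of band, which is why it differs from the few hundred bytes of the pointer file itself.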