Add 'logic' config data files
- README.md +38 -0
- logic/dev-00000-of-00001.parquet +3 -0
- logic/test-00000-of-00001.parquet +3 -0
- logic/val-00000-of-00001.parquet +3 -0
README.md CHANGED
@@ -250,6 +250,14 @@ configs:
     path: legal_professional/val-*
   - split: dev
     path: legal_professional/dev-*
+- config_name: logic
+  data_files:
+  - split: test
+    path: logic/test-*
+  - split: val
+    path: logic/val-*
+  - split: dev
+    path: logic/dev-*
 dataset_info:
 - config_name: accountant
   features:

@@ -1151,6 +1159,36 @@ dataset_info:
     num_examples: 5
   download_size: 125081
   dataset_size: 141174
+- config_name: logic
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 144246
+    num_examples: 204
+  - name: val
+    num_bytes: 15561
+    num_examples: 22
+  - name: dev
+    num_bytes: 5641
+    num_examples: 5
+  download_size: 141258
+  dataset_size: 165448
 ---

 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
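Once this config is merged, the new split group should be loadable with the `datasets` library in the usual way. A minimal sketch; the repo id `ceval/ceval-exam` is an assumption here, substitute the actual dataset id:

```python
# Sketch: load the newly added "logic" config with the datasets library.
# The repo id "ceval/ceval-exam" is an assumption; use the actual dataset id.
from datasets import load_dataset

logic = load_dataset("ceval/ceval-exam", name="logic")

# The config above declares three splits: test (204 rows), val (22) and dev (5).
print({split: ds.num_rows for split, ds in logic.items()})

# Each row carries the features declared under dataset_info:
# id, question, A, B, C, D, answer, explanation.
print(logic["dev"][0])
```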
logic/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bcad55d78cf043dd89047a82c611c12c8162cba60db9b529314cd332a1434c9
+size 15004
logic/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cace2b72c299464eace54d215f357c365b510705ebbd588b7d046943337c264f
+size 106698
logic/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6729770934549cd4d038a6c7ae9a1448728237a262e30db87225c775aaddd05
+size 19556
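Each of the three parquet files is stored as a Git LFS pointer, so the `version`, `oid`, and `size` lines above describe the real file rather than containing it. A minimal sketch for checking a locally fetched copy against the pointer fields; the local path is an assumption:

```python
# Sketch: verify a locally fetched parquet file against its Git LFS pointer.
# The local path is an assumption; point it at wherever the file was downloaded.
import hashlib
from pathlib import Path

path = Path("logic/dev-00000-of-00001.parquet")
data = path.read_bytes()

# Pointer fields copied from the diff above.
expected_oid = "4bcad55d78cf043dd89047a82c611c12c8162cba60db9b529314cd332a1434c9"
expected_size = 15004

assert len(data) == expected_size, f"unexpected size: {len(data)}"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches the downloaded file")
```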