lhoestq (HF Staff) committed (verified)
Commit f565e5e · Parent(s): 840ac9c

Add 'metrology_engineer' config data files
README.md CHANGED
@@ -274,6 +274,14 @@ configs:
     path: marxism/val-*
   - split: dev
     path: marxism/dev-*
+- config_name: metrology_engineer
+  data_files:
+  - split: test
+    path: metrology_engineer/test-*
+  - split: val
+    path: metrology_engineer/val-*
+  - split: dev
+    path: metrology_engineer/dev-*
 dataset_info:
 - config_name: accountant
   features:
@@ -1265,6 +1273,36 @@ dataset_info:
     num_examples: 5
   download_size: 45030
   dataset_size: 45055
+- config_name: metrology_engineer
+  features:
+  - name: id
+    dtype: int32
+  - name: question
+    dtype: string
+  - name: A
+    dtype: string
+  - name: B
+    dtype: string
+  - name: C
+    dtype: string
+  - name: D
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: explanation
+    dtype: string
+  splits:
+  - name: test
+    num_bytes: 47484
+    num_examples: 219
+  - name: val
+    num_bytes: 6116
+    num_examples: 24
+  - name: dev
+    num_bytes: 2485
+    num_examples: 5
+  download_size: 55033
+  dataset_size: 56085
 ---
 
 C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
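As a quick sanity check on the new `dataset_info` entry, the per-split `num_bytes` values sum exactly to the stated `dataset_size` (the uncompressed size), while `download_size` reflects the compressed Parquet files:

```python
# Split metadata copied from the new 'metrology_engineer' dataset_info block.
split_bytes = {"test": 47484, "val": 6116, "dev": 2485}
split_examples = {"test": 219, "val": 24, "dev": 5}

total_bytes = sum(split_bytes.values())        # should equal dataset_size: 56085
total_examples = sum(split_examples.values())  # questions across all splits

print(total_bytes)
print(total_examples)
```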
metrology_engineer/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fed7f56dd7cb99faf446c103123ce8b84ffe107f4bf49f31c63c5d8e1f2af65
+size 8780
metrology_engineer/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2db3ba8dca39c83e9a1a0d67efc5839a09c1dec049575288056388c5f34ba077
+size 36605
metrology_engineer/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa22568373b0394011b9984098bb4c818e2d687855ebb11722b8655f88a254d2
+size 9648
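The three added `.parquet` files are committed as Git LFS pointer files rather than raw data: each pointer records the spec version, a sha256 object id, and the true file size in bytes. A minimal sketch of parsing such a pointer into its fields (`parse_lfs_pointer` is a hypothetical helper written for illustration, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of metrology_engineer/dev-00000-of-00001.parquet above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:2fed7f56dd7cb99faf446c103123ce8b84ffe107f4bf49f31c63c5d8e1f2af65\n"
    "size 8780\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints "8780"
```

When Git LFS is installed, checking out the repo replaces these pointers with the actual Parquet files fetched by object id.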