teruo6939 committed
Commit 6ea472a · verified · 1 Parent(s): a2c11fe

Update README.md

Files changed (1)
  1. README.md +15 -19
README.md CHANGED
@@ -41,28 +41,24 @@ dataset_info:
 
 ## Dataset Summary
 
- A multiple-choice (four-option) question-answering benchmark specialized in knowledge unique to Japan, such as Japanese culture and customs.
- The questions are not translations of an existing English benchmark; all of them were written by hand from scratch.
-
- JamC-QA contains 2,341 questions across 8 categories.
- The breakdown of questions per category is as follows.
-
- | Category | Questions |
- | ---- | ---- |
- | Culture | 644 |
- | Customs | 204 |
- | Climate | 401 |
- | Geography | 276 |
- | Japanese history | 347 |
- | Government | 114 |
- | Law | 303 |
- | Healthcare | 52 |
- | Total | 2,341 |
-
- For details, please see the accompanying tech blog (URL not yet set).
 
 ## Supported Tasks and Leaderboards
 
 ## Languages
 
 Japanese
 
 
 ## Dataset Summary
 
+ This benchmark evaluates knowledge of Japan-specific topics, such as Japanese culture and customs, through multiple-choice questions.
+ The test includes questions across eight categories: culture, custom, climate, geography, history, government, law, and healthcare.
+ To achieve high accuracy on this test, a model must possess extensive knowledge of Japanese culture.
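
To make the task format concrete, here is a minimal sketch of loading the benchmark and formatting one record as a four-option prompt. The repository id, split name, and field names (`question`, `choices`, `answer`, `category`) are assumptions for illustration only; check the dataset card and viewer for the actual schema.

```python
# Minimal sketch, assuming a hypothetical schema; not the official evaluation code.
from datasets import load_dataset

REPO_ID = "org/JamC-QA"  # placeholder: substitute the actual Hub repository id

# Assumed split name and field names (question, choices, answer, category).
ds = load_dataset(REPO_ID, split="test")

def build_prompt(example):
    # Present the question followed by the four answer options, labeled A-D.
    options = "\n".join(
        f"{label}. {choice}" for label, choice in zip("ABCD", example["choices"])
    )
    return f"{example['question']}\n{options}\nAnswer:"

print(build_prompt(ds[0]))
```

Scoring would then compare the model's chosen option against the gold answer, with accuracy reported per category as in the leaderboard below.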
 
 ## Supported Tasks and Leaderboards
 
+ | Model | Authors | Micro-average | culture | custom | climate | geography | history | government | law | healthcare |
+ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
+ | [sarashina2-8x70b](https://huggingface.co/sbintuitions/sarashina2-8x70b) | SB Intuitions Inc., 2024 | 0.7364 | 0.722 | 0.8088 | 0.7855 | 0.6522 | 0.7839 | 0.7719 | 0.6436 | 0.8462 |
+ | [sarashina2-70b](https://huggingface.co/sbintuitions/sarashina2-70b) | SB Intuitions Inc., 2024 | 0.7245 | 0.6988 | 0.7892 | 0.7556 | 0.6558 | 0.7781 | 0.7544 | 0.6733 | 0.7885 |
+ | [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | Fujii et al., 2024 | 0.695 | 0.6894 | 0.7353 | 0.6185 | 0.5688 | 0.7781 | 0.7719 | 0.7459 | 0.8462 |
+ | [RakutenAI-2.0-8x7B](https://huggingface.co/Rakuten/RakutenAI-2.0-8x7B) | Rakuten Group, Inc., 2025 | 0.616 | 0.6056 | 0.6814 | 0.6160 | 0.4855 | 0.6888 | 0.6754 | 0.5941 | 0.6923 |
+ | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) | Mistral AI, 2023 | 0.2772 | 0.2671 | 0.2892 | 0.2618 | 0.2355 | 0.2767 | 0.3509 | 0.3102 | 0.3462 |
+ | [plamo-100b](https://huggingface.co/pfnet/plamo-100b) | | 0.5908 | 0.6102 | 0.6422 | 0.6384 | 0.4565 | 0.6398 | 0.5526 | 0.5182 | 0.6731 |
+ | [llm-jp-3.1-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b) | | 0.5737 | 0.5839 | 0.6275 | 0.606 | 0.4674 | 0.6110 | 0.6404 | 0.4884 | 0.6538 |
+ | [Meta-Llama-3.1-405B](https://huggingface.co/meta-llama/Llama-3.1-405B) | | 0.5724 | 0.5699 | 0.5245 | 0.4688 | 0.5435 | 0.6571 | 0.6579 | 0.6403 | 0.5962 |
+ | [Nemotron-4-340B-Base](https://huggingface.co/mgoin/Nemotron-4-340B-Base-hf) | | 0.5600 | 0.5761 | 0.6176 | 0.5062 | 0.4601 | 0.5821 | 0.6491 | 0.5776 | 0.6346 |
+ | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | 0.5421 | 0.5419 | 0.6324 | 0.4763 | 0.4746 | 0.5677 | 0.6053 | 0.5644 | 0.6154 |
+
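
The Micro-average column is consistent with accuracy computed over all questions pooled together, i.e. each category's accuracy weighted by its question count from the category table in the previous revision of this summary, rather than a simple mean of the eight category columns. A small sketch of that weighting (not the official scoring code) is shown below; it reproduces the reported 0.7364 for sarashina2-8x70b.

```python
# Sketch of the micro-average as a question-count-weighted mean of category accuracies.
# Counts come from the category table in the previous revision of this README (2,341 total).
COUNTS = {
    "culture": 644, "custom": 204, "climate": 401, "geography": 276,
    "history": 347, "government": 114, "law": 303, "healthcare": 52,
}

def micro_average(accuracy_by_category):
    # Total correct answers divided by total questions.
    correct = sum(accuracy_by_category[cat] * n for cat, n in COUNTS.items())
    return correct / sum(COUNTS.values())

# Per-category accuracies for sarashina2-8x70b, copied from the table above.
sarashina2_8x70b = {
    "culture": 0.722, "custom": 0.8088, "climate": 0.7855, "geography": 0.6522,
    "history": 0.7839, "government": 0.7719, "law": 0.6436, "healthcare": 0.8462,
}
print(round(micro_average(sarashina2_8x70b), 4))  # 0.7364, matching the leaderboard
```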
 ## Languages
 
 Japanese