teruo6939 committed on
Commit 03f91f1 · verified · 1 Parent(s): 1ce272d

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -60,7 +60,7 @@ In this multiple-choice question answering task, the LLM outputs the option stri
 and accuracy is calculated as the proportion of questions whose output exactly matches the gold correct option string.
 
 | Model | All | culture | custom | regional_identity | geography | history | government | law | healthcare |
-|:---|---:|---:|---:|---:|---:|---:|---:|---:|
+|:---|----|---:|---:|---:|---:|---:|---:|---:|---:|
 | [sarashina2-8x70b](https://huggingface.co/sbintuitions/sarashina2-8x70b) | **0.725** | 0.714 | **0.775** | **0.761** | 0.654 | **0.784** | 0.736 | 0.632 | **0.917** |
 | [sarashina2-70b](https://huggingface.co/sbintuitions/sarashina2-70b) | **0.725** | **0.719** | 0.745 | 0.736 | **0.673** | 0.764 | 0.764 | 0.666 | **0.917** |
 | [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | 0.697 | 0.689 | **0.775** | 0.589 | 0.566 | 0.776 | **0.773** | **0.783** | 0.854 |
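For reference, the exact-match accuracy described in the hunk above can be sketched in a few lines of Python. This is a minimal illustration only; `predictions`, `gold_answers`, and the sample strings are hypothetical names, not part of the dataset or of FlexEval.

```python
# Minimal sketch of exact-match accuracy: the proportion of questions whose
# model output is identical to the gold option string.
def exact_match_accuracy(predictions: list[str], gold_answers: list[str]) -> float:
    assert len(predictions) == len(gold_answers) and predictions
    matches = sum(pred == gold for pred, gold in zip(predictions, gold_answers))
    return matches / len(predictions)

# Hypothetical example: 2 of 3 outputs match exactly, so accuracy is about 0.667.
print(exact_match_accuracy(["東京", "京都", "大阪"], ["東京", "京都", "奈良"]))
```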
@@ -163,7 +163,9 @@ Dataset({
 
 ## Evaluation with FlexEval
 
-You can easily use [Flexeval](https://github.com/sbintuitions/flexeval) (version 0.13.3 or later) to evaluate the JamC-QA score by simply replacing `commonsense_qa` with `jamcqa` in the [Quickstart](https://github.com/sbintuitions/flexeval?tab=readme-ov-file#quick-start) guide.
+You can easily use [Flexeval](https://github.com/sbintuitions/flexeval) (version 0.13.3 or later)
+to evaluate the JamC-QA score by simply replacing `commonsense_qa` with `jamcqa` in the
+[Quickstart](https://github.com/sbintuitions/flexeval?tab=readme-ov-file#quick-start) guide.
 
 ```python
 flexeval_lm \
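To make the replacement concrete, here is a minimal sketch of driving the same command from Python. It is not FlexEval's documented API: `MODEL_ARGS` is a placeholder for the model flags shown in the Quickstart, and the `--eval_setup` flag name is assumed from that guide, so verify both there.

```python
# Minimal sketch (assumptions flagged below), not FlexEval's documented API:
# run the Quickstart command with the eval setup "jamcqa" instead of "commonsense_qa".
import subprocess

EVAL_SETUP = "jamcqa"  # the only change relative to the Quickstart example

# Assumption: copy the model-related flags from the Quickstart here verbatim,
# e.g. the --language_model options for the model you want to evaluate.
MODEL_ARGS: list[str] = []

# Assumption: "--eval_setup" is the flag the Quickstart pairs with "commonsense_qa";
# check the guide if the flag name differs in your FlexEval version.
subprocess.run(
    ["flexeval_lm", *MODEL_ARGS, "--eval_setup", EVAL_SETUP],
    check=True,
)
```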
 