Update README.md
README.md (CHANGED)
````diff
@@ -166,18 +166,17 @@ Dataset({
 ## Flexeval
 
 You can easily use [Flexeval](https://github.com/sbintuitions/flexeval) (version 0.13.3 or later) from SB Intuitions to evaluate the JamC-QA score by simply replacing `commonsense_qa` with `jamcqa` in the [Quickstart](https://github.com/sbintuitions/flexeval?tab=readme-ov-file#quick-start) guide.
+When using greedy search, specify the option `--language_model.default_gen_kwargs "{ do_sample: false }"`.
 
 ```bash
 flexeval_lm \
   --language_model HuggingFaceLM \
   --language_model.model "sbintuitions/tiny-lm" \
+  --language_model.default_gen_kwargs "{ do_sample: false }" \
   --eval_setup "jamcqa" \
   --save_dir "results/commonsense_qa"
 ```
 
-When using greedy search, specify the option `--language_model.default_gen_kwargs "{ do_sample: false }"`.
-
-
 # Citation Information
 ```
 @inproceedings{Oka2025,
````
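For reference, here is what the `do_sample: false` setting amounts to at the `transformers` level. This is a minimal sketch assuming Flexeval's `HuggingFaceLM` forwards `default_gen_kwargs` to `model.generate` (an assumption on our part, not confirmed by the diff); the prompt string and `max_new_tokens` value are placeholders:

```python
# Sketch of what `do_sample: false` means for Hugging Face generation,
# assuming Flexeval's HuggingFaceLM passes default_gen_kwargs through
# to model.generate() (an assumption; check the Flexeval docs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sbintuitions/tiny-lm"  # same model as in the README example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Question: ...", return_tensors="pt")  # placeholder prompt
# do_sample=False makes generate() take the argmax token at each step,
# i.e. greedy search, so repeated runs give identical outputs.
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding is deterministic, which makes evaluation scores reproducible across runs; that is presumably why the README singles out this option.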