yoonshik1205 and nielsr (HF Staff) committed
Commit b7104c9 · verified · 1 parent: a9a19d7

Add Hugging Face paper link and refine task categories (#2)


- Add Hugging Face paper link and refine task categories (950a7bb7d2c518b1e03eb959db56ea61fd54db01)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
1. README.md +7 -6
README.md CHANGED
@@ -1,17 +1,18 @@
 ---
-license: apache-2.0
-task_categories:
-- visual-question-answering
-- question-answering
 language:
 - ko
+license: apache-2.0
 size_categories:
 - n<1K
+task_categories:
+- visual-question-answering
+- question-answering
+- image-text-to-text
 ---
 
 # About this data
-[KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language](https://arxiv.org/abs/2503.23730)
+[KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language](https://huggingface.co/papers/2503.23730)
 
 KOFFVQA is a general-purpose VLM benchmark in the Korean language. For more information, refer to [our leaderboard page](https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard) and the official [evaluation code](https://github.com/maum-ai/KOFFVQA).
 
-This contains the data for the benchmark consisting of images, their corresponding questions, and response grading criteria.
+This contains the data for the benchmark consisting of images, their corresponding questions, and response grading criteria. The benchmark focuses on free-form visual question answering, evaluating the ability of large vision-language models to generate comprehensive and accurate text responses to questions about images.
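For context, a minimal sketch of loading the dataset described in the card above with the `datasets` library. The repo id `maum-ai/KOFFVQA` is an assumption inferred from the maum-ai links in the README, and the split and column names are illustrative, not confirmed by this diff:

```python
from datasets import load_dataset

# Repo id assumed from the maum-ai org used by the leaderboard and eval code;
# check the dataset page for the actual id and available splits.
ds = load_dataset("maum-ai/KOFFVQA", split="test")

# The card says each example pairs an image with its question and the
# response grading criteria; the column names below are hypothetical.
print(ds.column_names)
example = ds[0]
print(example["question"])          # hypothetical column name
print(example["grading_criteria"])  # hypothetical column name
```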