Add pipeline tag and library name
This PR adds `pipeline_tag` and `library_name` metadata to the model card to improve discoverability and usability. The pipeline tag `image-text-to-text` reflects the model's input and output modalities, and the library name `transformers` matches the library used in the model card's code example.
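For context, here is a minimal sketch of what the `image-text-to-text` tag implies for loading the model through the `transformers` high-level pipeline. The repo id, image URL, and prompt below are placeholders (the actual repo id is not shown in this diff), and the README's own code example remains the authoritative way to run the model; checkpoints with custom modeling code may additionally need `trust_remote_code=True`.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual model repository.
# Checkpoints shipping custom code may need trust_remote_code=True.
pipe = pipeline("image-text-to-text", model="your-org/ChemVLM")

# The image-text-to-text pipeline accepts chat-style messages that
# interleave image and text content parts.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/reaction.png"},
            {"type": "text", "text": "Describe the reaction shown in this image."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```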
README.md (CHANGED):

```diff
@@ -1,6 +1,8 @@
 ---
 license: mit
 arxiv: 2408.07246
+library_name: transformers
+pipeline_tag: image-text-to-text
 ---
 
 ## Citation
@@ -200,4 +202,4 @@ This project is released under the MIT license.
 
 ChemVLM is built on [InternVL](https://github.com/OpenGVLab/InternVL).
 
-InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
+InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
```