Upload folder using huggingface_hub
- .gitattributes +1 -0
- README.md +25 -184
- chat_template.jinja +1 -0
- config.json +248 -0
- generation_config.json +7 -0
- mergekit_config.yml +10 -0
- model-00001-of-00007.safetensors +3 -0
- model-00002-of-00007.safetensors +3 -0
- model-00003-of-00007.safetensors +3 -0
- model-00004-of-00007.safetensors +3 -0
- model-00005-of-00007.safetensors +3 -0
- model-00006-of-00007.safetensors +3 -0
- model-00007-of-00007.safetensors +3 -0
- model.safetensors.index.json +1 -0
- preprocessor_config.json +171 -0
- processor_config.json +8 -0
- special_tokens_map.json +34 -0
- tokenizer.json +3 -0
- tokenizer_config.json +316 -0
- video_preprocessor_config.json +37 -0
.gitattributes
CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 Gimbap_Example-1-20250709-032708.png filter=lfs diff=lfs merge=lfs -text
 ocr.jpg filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,199 +1,40 @@
 ---
-base_model:
-- Qwen/Qwen3-14B
-- google/siglip2-so400m-patch16-384
+base_model: []
 library_name: transformers
 tags:
-- ncsoft
-- ncai
-- varco
-pipeline_tag: image-text-to-text
-language:
-- en
-- ko
+- mergekit
+- merge
 ---
-
-# VARCO-VISION-2.0-14B
-
-## Introduction
-
-**VARCO-VISION-2.0** is a multimodal AI model capable of understanding both images and text to answer user queries. It supports multi-image inputs, enabling effective processing of complex content such as documents, tables, and charts. The model demonstrates strong comprehension in both Korean and English, with significantly improved text generation capabilities and a deeper understanding of Korean cultural context. Compared to its predecessor, performance has been notably enhanced across various benchmarks, and its usability in real-world scenarios, such as everyday Q&A and information summarization, has also improved.
-
-In addition to the 14B full-scale model, a lightweight 1.7B version is available for on-device use, making it accessible on personal devices such as smartphones and PCs. VARCO-VISION-2.0 is a powerful open-source AI model built for Korean users and is freely available for a wide range of applications.
-
-## 🚨News🎙️
-
-- 👀 We are going to release VARCO-VISION-2.0-1.7B-OCR soon!
-- 👀 We are going to release VARCO-VISION-2.0-1.7B soon!
-- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
-- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
-
-## Key Features
-
-- **Multi-image Understanding**: Newly added support for multi-image inputs enables the model to analyze multiple images simultaneously and make more holistic and context-aware decisions.
-- **Korean Language Specialization**: The model is further specialized for Korean, with a deeper understanding of the Korean language, context, and culture. Korean text generation has been significantly improved, resulting in more natural, fluent, and accurate responses.
-- **OCR with Text Localization**: Unlike typical models that only recognize and generate text from images, VARCO-VISION-2.0 can also identify the position of the text and provide bounding boxes around it. This makes it especially useful for document understanding, signage interpretation, and structured visual data.
-- **Enhanced Safety**: Improved robustness and filtering to ensure safer handling of harmful or sexually explicit content.
-
-<div align="center">
-<img src="./Gimbap_Example-1-20250709-032708.png" width="100%" />
-</div>
-
-## VARCO-VISION-2.0 Family
-
-| Model Name | Base Models (Vision / Language) | HF Link |
-| :---: | :---: | :---: |
-| VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
-| VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
-| VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
-| GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
-
-## Evaluation
-
-We adopted benchmark scores directly from the [OpenVLM Leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) where available, and conducted our own evaluations for benchmarks not included there, comparing results against various open-source models to provide a fair and comprehensive evaluation.
-Please note that for certain benchmarks involving LLM-based evaluation (e.g., LLaVABench), results may not be exactly reproducible due to variations in the underlying LLM behavior.
-
-### English Benchmark
-
-| Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
-| :---: | :---: | :---: | :---: | :---: |
-| MMStar | **68.9** | *67.2* | 64.1 | 64.8 |
-| SEEDBench_IMG | 77.5 | *77.7* | 77.0 | **78.3** |
-| LLaVABench | 84.4 | **93.0** | *91.0* | 90.0 |
-| OCRBench | 877 | *879* | **888** | 863 |
-
-### Korean Benchmark
-
-| Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
-| :---: | :---: | :---: | :---: | :---: |
-| K-MMStar | **64.9** | 29.7 | 49.3 | *63.3* |
-| K-SEED | **78.2** | 73.2 | 75.7 | *77.4* |
-| K-LLaVABench | 80.9 | 86.3 | *94.1* | **95.1** |
-| K-DTCBench | **87.9** | 81.7 | *82.1* | 79.6 |
-
-### Korean Cultural Benchmark
-
-| Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
-| :---: | :---: | :---: | :---: | :---: |
-| K-Viscuit | 71.7 | **77.0** | 70.9 | *72.9* |
-| PangeaBench (ko) | **77.2** | *76.9* | 76.6 | 75.2 |
-
-### Text-only Benchmark
-
-| Benchmark | InternVL3-14B | Ovis2-16B | Qwen2.5-VL-7B | VARCO-VISION-2.0-14B |
-| :---: | :---: | :---: | :---: | :---: |
-| MMLU | **78.5** | *78.4* | 4.6 | 77.7 |
-| MT-Bench | **8.93** | 8.59 | 8.07 | *8.88* |
-| KMMLU | *51.4* | 49.3 | 39.6 | **57.4** |
-| KoMT-Bench | 7.01 | *7.91* | 6.84 | **7.95** |
-| LogicKor | 7.00 | **7.94** | 6.55 | *7.86* |
-
-### OCR Benchmark
-
-| Benchmark | … | VARCO-VISION-2.0-14B |
-| :---: | :---: | :---: |
-| CORD | *91.4* | **93.3** |
-| ICDAR2013 | *92.0* | **93.2** |
-| ICDAR2015 | *73.7* | **82.7** |
-
-To use this model, we recommend installing `transformers` version **4.53.1 or higher**. While it may work with earlier versions, using **4.53.1 or above is strongly recommended**, especially to ensure optimal performance for the **multi-image feature**.
-
-```python
-import torch
-from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
-
-model_name = "NCSOFT/VARCO-VISION-2.0-14B"
-model = LlavaOnevisionForConditionalGeneration.from_pretrained(
-    model_name,
-    torch_dtype=torch.float16,
-    attn_implementation="sdpa",
-    device_map="auto",
-)
-processor = AutoProcessor.from_pretrained(model_name)
-
-conversation_1 = [
-    {
-        "role": "user",
-        "content": [
-            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
-            {"type": "text", "text": "What is shown in this image?"},
-        ],
-    },
-    {
-        "role": "assistant",
-        "content": [
-            {"type": "text", "text": "There is a red stop sign in the image."},
-        ],
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
-            {"type": "text", "text": "What about this image? How many cats do you see?"},
-        ],
-    },
-]
-conversation_2 = [
-    {
-        "role": "user",
-        "content": [
-            {"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
-            {"type": "text", "text": "이 이미지에는 무엇이 보이나요?"},  # "What can you see in this image?"
-        ],
-    },
-]
-
-inputs = processor.apply_chat_template(
-    [conversation_1, conversation_2],
-    add_generation_prompt=True,
-    tokenize=True,
-    padding=True,
-    return_dict=True,
-    return_tensors="pt",
-).to(model.device, torch.float16)
-
-generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
-outputs = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
-print(outputs)
-```
-
-The following shows the input required for using OCR with text localization, along with the corresponding output:
-
-```python
-# INPUT
-from PIL import Image
-
-image_file = "./assets/ocr.jpg"
-raw_image = Image.open(image_file)
-conversation = [
-    {
-        "role": "user",
-        "content": [
-            {"type": "text", "text": ""},
-            {"type": "image"},
-        ],
-    },
-]
-
-# OUTPUT
-"""
-<char>백범로</char><bbox>0.172, 0.266, 0.328, 0.341</bbox>
-<char>124번길</char><bbox>0.347, 0.266, 0.512, 0.341</bbox>
-<char>Baekbeom-ro</char><bbox>0.171, 0.337, 0.433, 0.392</bbox>
-<char>124</char><bbox>0.444, 0.341, 0.508, 0.392</bbox>
-<char>만수주공아파트</char><bbox>0.109, 0.531, 0.335, 0.601</bbox>
-<char>시흥</char><bbox>0.443, 0.518, 0.522, 0.581</bbox>
-<char>시청</char><bbox>0.711, 0.521, 0.811, 0.594</bbox>
-<char>Mansu</char><bbox>0.102, 0.601, 0.181, 0.648</bbox>
-<char>Jugong</char><bbox>0.186, 0.601, 0.273, 0.658</bbox>
-<char>Apt</char><bbox>0.28, 0.601, 0.327, 0.651</bbox>
-<char>42</char><bbox>0.377, 0.601, 0.416, 0.648</bbox>
-<char>Shieung</char><bbox>0.445, 0.578, 0.53, 0.625</bbox>
-<char>인천대공원</char><bbox>0.43, 0.621, 0.609, 0.684</bbox>
-<char>모래내시장역</char><bbox>0.651, 0.59, 0.873, 0.665</bbox>
-<char>IncheonGrand</char><bbox>0.432, 0.681, 0.561, 0.723</bbox>
-<char>Park</char><bbox>0.564, 0.681, 0.611, 0.723</bbox>
-"""
-```
-
-<div align="center">
-<img src="./ocr.jpg" width="100%" />
-</div>
+
+# vv21_llava_qwen3_linear_250711_15
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+## Merge Details
+### Merge Method
+
+This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
+
+### Models Merged
+
+The following models were included in the merge:
+* /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1400_hf
+* /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1548_hf
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
+```yaml
+models:
+  - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1400_hf
+    parameters:
+      weight: 1.0
+  - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1548_hf
+    parameters:
+      weight: 4.0
+merge_method: linear
+dtype: float16
+```
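For reference, the [Linear](https://arxiv.org/abs/2203.05482) method is plain weighted parameter averaging ("model soups"): every tensor of the merged model is a weighted mean of the corresponding tensors of the input checkpoints. Assuming mergekit's default weight normalization for `linear`, the weights 1.0 and 4.0 above act as 0.2 and 0.8. A minimal sketch of that computation (not mergekit's actual code):

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors; weights are normalized to sum to 1."""
    total = sum(weights)
    merged = {}
    for name, ref in state_dicts[0].items():
        acc = torch.zeros_like(ref, dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += (w / total) * sd[name].float()
        merged[name] = acc.to(torch.float16)  # dtype: float16, as configured
    return merged
```

With the configuration above, checkpoint-1548 dominates the average, so the merge is essentially checkpoint-1548 nudged about 20% toward checkpoint-1400.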
chat_template.jinja
ADDED
@@ -0,0 +1 @@
+{% if messages[0]['role'] == 'system' %}{{'<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n'}}{% else %}{{'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n'}}{% endif %}{% for message in messages %}{% if message['role'] == 'user' or message['role'] == 'system' and not loop.first or message['role'] == 'assistant' %}{{'<|im_start|>' + message['role'] + '\n'}}{# Render all images first #}{% for content in message['content'] | selectattr('type', 'equalto', 'image') %}{{ '<image>\n' }}{% endfor %}{# Render all video then #}{% for content in message['content'] | selectattr('type', 'equalto', 'video') %}{{ '<video>\n' }}{% endfor %}{# Render all text next #}{% if message['role'] != 'assistant' %}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{{ content['text'] }}{% endfor %}{% else %}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{% generation %}{{ content['text'] }}{% endgeneration %}{% endfor %}{% endif %}{{'<|im_end|>' + '\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
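The template is ChatML-style: it injects a default system prompt when none is provided, renders `<image>`/`<video>` placeholders before each turn's text, and wraps assistant text in `{% generation %}` tags (used for masking assistant tokens during training). A quick sanity check of the rendering, assuming the processor is loaded from a local copy of this repo (hypothetical path):

```python
from transformers import AutoProcessor

# hypothetical local path to this repository's files
processor = AutoProcessor.from_pretrained("./vv21_llava_qwen3_linear_250711_15")

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What is shown in this image?"},
    ]},
]
print(processor.apply_chat_template(messages, add_generation_prompt=True))
# Expected output, per the template above:
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# <image>
# What is shown in this image?<|im_end|>
# <|im_start|>assistant
```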
config.json
ADDED
@@ -0,0 +1,248 @@
+{
+  "architectures": [
+    "LlavaOnevisionForConditionalGeneration"
+  ],
+  "image_grid_pinpoints": [
+    [384, 384], [384, 768], [384, 1152], [384, 1536], [384, 1920], [384, 2304],
+    [768, 384], [768, 768], [768, 1152], [768, 1536], [768, 1920], [768, 2304],
+    [1152, 384], [1152, 768], [1152, 1152], [1152, 1536], [1152, 1920], [1152, 2304],
+    [1536, 384], [1536, 768], [1536, 1152], [1536, 1536], [1536, 1920], [1536, 2304],
+    [1920, 384], [1920, 768], [1920, 1152], [1920, 1536], [1920, 1920], [1920, 2304],
+    [2304, 384], [2304, 768], [2304, 1152], [2304, 1536], [2304, 1920], [2304, 2304]
+  ],
+  "image_token_index": 151679,
+  "model_type": "llava_onevision",
+  "multimodal_projector_bias": true,
+  "projector_hidden_act": "gelu",
+  "text_config": {
+    "_name_or_path": "Qwen/Qwen3-14B",
+    "architectures": [
+      "Qwen3ForCausalLM"
+    ],
+    "attention_bias": false,
+    "attention_dropout": 0.0,
+    "bos_token_id": 151643,
+    "eos_token_id": 151645,
+    "head_dim": 128,
+    "hidden_act": "silu",
+    "hidden_size": 5120,
+    "initializer_range": 0.02,
+    "intermediate_size": 17408,
+    "layer_types": [
+      "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
+      "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
+      "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
+      "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
+      "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention"
+    ],
+    "max_position_embeddings": 40960,
+    "max_window_layers": 40,
+    "model_type": "qwen3",
+    "num_attention_heads": 40,
+    "num_hidden_layers": 40,
+    "num_key_value_heads": 8,
+    "rms_norm_eps": 1e-06,
+    "rope_scaling": null,
+    "rope_theta": 1000000,
+    "sliding_window": null,
+    "torch_dtype": "bfloat16",
+    "use_cache": true,
+    "use_sliding_window": false,
+    "vocab_size": 151681
+  },
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.53.1",
+  "use_image_newline_parameter": true,
+  "video_token_index": 151680,
+  "vision_aspect_ratio": "anyres_max_9",
+  "vision_config": {
+    "attention_dropout": 0.0,
+    "hidden_act": "gelu_pytorch_tanh",
+    "hidden_size": 1152,
+    "image_size": 384,
+    "intermediate_size": 4304,
+    "layer_norm_eps": 1e-06,
+    "model_type": "siglip_vision_model",
+    "num_attention_heads": 16,
+    "num_channels": 3,
+    "num_hidden_layers": 26,
+    "patch_size": 16,
+    "vision_use_head": false
+  },
+  "vision_feature_layer": -1,
+  "vision_feature_select_strategy": "full"
+}
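Two details worth noting in this config: the text tower is Qwen3-14B with an extended vocabulary (151681 entries, with indices 151679 and 151680 reserved for the image and video tokens), and `image_grid_pinpoints` drives LLaVA-OneVision's anyres tiling: each input image is matched to the candidate resolution that best preserves its native size, then cut into 384x384 tiles for the SigLIP tower. A sketch of that selection step, using a helper that recent `transformers` versions provide (the import path is an assumption):

```python
from transformers import AutoConfig
from transformers.image_processing_utils import select_best_resolution  # assumed location

config = AutoConfig.from_pretrained("./vv21_llava_qwen3_linear_250711_15")  # hypothetical local path

# (height, width) of an incoming image
best = select_best_resolution((720, 1280), config.image_grid_pinpoints)
print(best)  # expected (768, 1536): a 2x4 grid of 384x384 tiles
```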
generation_config.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "_from_model_config": true,
+  "bos_token_id": 151643,
+  "eos_token_id": 151645,
+  "transformers_version": "4.52.4",
+  "use_cache": false
+}
mergekit_config.yml
ADDED
@@ -0,0 +1,10 @@
+models:
+  - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1400_hf
+    parameters:
+      weight: 1.0
+  - model: /home/work/.varco_mllm/checkpoints-v2d1/training/vv2d1-llava-qwen3-14b-st4-250708/checkpoint-1548_hf
+    parameters:
+      weight: 4.0
+merge_method: linear
+dtype: float16
+
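This is the same configuration embedded in the README. To reproduce the merge, the checkpoint paths (local to the original training machine) would have to be replaced with checkpoints you actually have; then either the CLI (`mergekit-yaml mergekit_config.yml ./merged`) or the Python API can run it. A sketch with the Python API, following the usage documented in the mergekit repo:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# parse this repository's merge config
with open("mergekit_config.yml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged",  # output directory for the merged model
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```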
model-00001-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a940e57cae456a0bac7b3190b98a26bc9dd1645fb314a9216edd137815932b34
+size 4972969200
model-00002-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81eb0fc125f253200a5bfab214528a8cabde4db56326da27ce1ede44a43cdefe
+size 4917989656
model-00003-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21317351c7fb2118fa8f7cb3c6a1f2fe3cdd0a37b3def5e3fd99b04baa2d6631
+size 4991389856
model-00004-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0667c87945df19afac5a810536d63b8703a60dc13996809df75a5dc0b80aee6
+size 4917989648
model-00005-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ad8d93739b2f42ec501cd251c15daf478afac0e02fc7d088a914e48ca72b2f0
+size 4991389864
model-00006-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:449903cafa5ee84ba85a7ede77e2e438a3e772e59fd6af8da5a828d2fddac755
+size 4999901720
model-00007-of-00007.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7570475431596713193ac2affde6f7e3ccac654abe02582b761783ea0005a5da
+size 599690752
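The seven shard entries above are git-LFS pointers, so a download can be verified against the recorded digest and byte size. A small integrity check for the last shard, using only the values shown above:

```python
import hashlib

# sha256 and size from the LFS pointer for model-00007-of-00007.safetensors
EXPECTED_SHA256 = "7570475431596713193ac2affde6f7e3ccac654abe02582b761783ea0005a5da"
EXPECTED_SIZE = 599690752

digest = hashlib.sha256()
size = 0
with open("model-00007-of-00007.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size}"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
```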
model.safetensors.index.json
ADDED
@@ -0,0 +1 @@
{"metadata": {"mergekit_version": "0.1.1"}, "weight_map": {"image_newline": "model-00001-of-00007.safetensors", "language_model.lm_head.weight": "model-00001-of-00007.safetensors", "language_model.model.embed_tokens.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.k_norm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.q_norm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.k_norm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.q_norm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.input_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.self_attn.k_norm.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.10.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00007.safetensors", 
"language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.k_proj.weight": 
"model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.k_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.q_norm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", 
"language_model.model.layers.18.input_layernorm.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.18.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "language_model.model.layers.18.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.18.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.2.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", 
"language_model.model.layers.20.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.q_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.q_proj.weight": 
"model-00003-of-00007.safetensors", "language_model.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.self_attn.k_norm.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "language_model.model.layers.24.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.24.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.24.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.24.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.25.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.26.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.mlp.up_proj.weight": 
"model-00004-of-00007.safetensors", "language_model.model.layers.27.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.27.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.28.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.29.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", 
"language_model.model.layers.3.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.3.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.k_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.q_norm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.30.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.31.input_layernorm.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.31.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "language_model.model.layers.31.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.31.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.32.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.mlp.down_proj.weight": "model-00005-of-00007.safetensors", 
"language_model.model.layers.33.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.33.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.34.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.35.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.k_proj.weight": 
"model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.36.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.q_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.37.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.input_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.self_attn.k_norm.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "language_model.model.layers.38.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.38.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.38.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.38.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.39.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", 
"language_model.model.layers.4.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.4.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.5.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.6.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", 
"language_model.model.layers.7.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.7.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.8.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.input_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.k_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.q_norm.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.layers.9.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "language_model.model.norm.weight": "model-00006-of-00007.safetensors", "multi_modal_projector.linear_1.bias": "model-00006-of-00007.safetensors", "multi_modal_projector.linear_1.weight": "model-00006-of-00007.safetensors", "multi_modal_projector.linear_2.bias": "model-00006-of-00007.safetensors", "multi_modal_projector.linear_2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.embeddings.patch_embedding.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.embeddings.patch_embedding.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.embeddings.position_embedding.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.layer_norm2.bias": 
"model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.mlp.fc2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.mlp.fc1.weight": 
"model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.mlp.fc2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.mlp.fc2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.mlp.fc2.weight": "model-00006-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.mlp.fc2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.layer_norm1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.layer_norm1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.layer_norm2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.layer_norm2.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.mlp.fc1.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.mlp.fc1.weight": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.mlp.fc2.bias": "model-00006-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.layer_norm1.bias": 
"model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.layer_norm2.bias": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.22.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.mlp.fc1.weight": 
"model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.24.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.25.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.mlp.fc2.weight": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.bias": 
"model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", 
"vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.layer_norm1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.layer_norm1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.layer_norm2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.layer_norm2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.mlp.fc1.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.mlp.fc1.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.mlp.fc2.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.mlp.fc2.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.weight": 
"model-00007-of-00007.safetensors", "vision_tower.vision_model.post_layernorm.bias": "model-00007-of-00007.safetensors", "vision_tower.vision_model.post_layernorm.weight": "model-00007-of-00007.safetensors"}}
preprocessor_config.json
ADDED
@@ -0,0 +1,171 @@
+{
+  "do_convert_rgb": null,
+  "do_normalize": true,
+  "do_pad": true,
+  "do_rescale": true,
+  "do_resize": true,
+  "image_grid_pinpoints": [
+    [
+      384,
+      384
+    ],
+    [
+      384,
+      768
+    ],
+    [
+      384,
+      1152
+    ],
+    [
+      384,
+      1536
+    ],
+    [
+      384,
+      1920
+    ],
+    [
+      384,
+      2304
+    ],
+    [
+      768,
+      384
+    ],
+    [
+      768,
+      768
+    ],
+    [
+      768,
+      1152
+    ],
+    [
+      768,
+      1536
+    ],
+    [
+      768,
+      1920
+    ],
+    [
+      768,
+      2304
+    ],
+    [
+      1152,
+      384
+    ],
+    [
+      1152,
+      768
+    ],
+    [
+      1152,
+      1152
+    ],
+    [
+      1152,
+      1536
+    ],
+    [
+      1152,
+      1920
+    ],
+    [
+      1152,
+      2304
+    ],
+    [
+      1536,
+      384
+    ],
+    [
+      1536,
+      768
+    ],
+    [
+      1536,
+      1152
+    ],
+    [
+      1536,
+      1536
+    ],
+    [
+      1536,
+      1920
+    ],
+    [
+      1536,
+      2304
+    ],
+    [
+      1920,
+      384
+    ],
+    [
+      1920,
+      768
+    ],
+    [
+      1920,
+      1152
+    ],
+    [
+      1920,
+      1536
+    ],
+    [
+      1920,
+      1920
+    ],
+    [
+      1920,
+      2304
+    ],
+    [
+      2304,
+      384
+    ],
+    [
+      2304,
+      768
+    ],
+    [
+      2304,
+      1152
+    ],
+    [
+      2304,
+      1536
+    ],
+    [
+      2304,
+      1920
+    ],
+    [
+      2304,
+      2304
+    ]
+  ],
+  "image_mean": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "image_processor_type": "LlavaOnevisionImageProcessor",
+  "image_std": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "processor_class": "LlavaOnevisionProcessor",
+  "resample": 2,
+  "rescale_factor": 0.00392156862745098,
+  "size": {
+    "height": 384,
+    "width": 384
+  }
+}
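
These are standard `LlavaOnevisionImageProcessor` settings: inputs are rescaled by 1/255, normalized to mean/std 0.5, and resized to the best-fitting resolution among the `image_grid_pinpoints` (all multiples of the 384×384 base view) before tiling. A minimal sketch of exercising the config through `AutoProcessor` (the image path is an assumption, and exact tensor shapes depend on your `transformers` version):

```python
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("NCSOFT/VARCO-VISION-2.0-14B")
image = Image.open("ocr.jpg").convert("RGB")  # any local image

conversation = [
    {"role": "user",
     "content": [{"type": "image"},
                 {"type": "text", "text": "Describe this image."}]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")

# pixel_values stacks the 384x384 base view plus the anyres tiles picked
# from image_grid_pinpoints; image_sizes records the original resolution.
print(inputs["pixel_values"].shape)
print(inputs["image_sizes"])
```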
processor_config.json
ADDED
@@ -0,0 +1,8 @@
+{
+  "image_token": "<image>",
+  "num_image_tokens": 576,
+  "processor_class": "LlavaOnevisionProcessor",
+  "video_token": "<video>",
+  "vision_aspect_ratio": "anyres_max_9",
+  "vision_feature_select_strategy": "full"
+}
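
`num_image_tokens` is consistent with the SigLIP2 vision tower: a 384×384 view with patch size 16 yields 24×24 = 576 patch embeddings, and `anyres_max_9` caps the number of high-resolution tiles per image at nine. A back-of-the-envelope sketch (plain arithmetic, not library code; it ignores any token pooling or separator tokens the processor may add):

```python
patch_size = 16
view = 384
tokens_per_view = (view // patch_size) ** 2  # 24 * 24 = 576, matches num_image_tokens
max_tiles = 9                                # from "anyres_max_9"

# Rough upper bound on visual tokens per image: one base view plus up to
# nine anyres tiles, each contributing 576 embeddings.
print(tokens_per_view)                       # 576
print((1 + max_tiles) * tokens_per_view)     # 5760
```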
special_tokens_map.json
ADDED
@@ -0,0 +1,34 @@
+{
+  "additional_special_tokens": [
+    "<gro>",
+    "<ocr>",
+    "<char>",
+    "</char>",
+    "<obj>",
+    "</obj>",
+    "<bbox>",
+    "</bbox>",
+    "<delim>"
+  ],
+  "eos_token": {
+    "content": "<|im_end|>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "[UNK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "[UNK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
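
Beyond the usual Qwen-style `<|im_end|>` EOS, the map registers the grounding/OCR markers (`<gro>`, `<ocr>`, `<obj>`, `<bbox>`, `<delim>`, …) as additional special tokens, so the tokenizer should keep each marker atomic rather than splitting it into subwords. A quick sanity-check sketch (each marker is expected, though not guaranteed here, to encode to a single id):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NCSOFT/VARCO-VISION-2.0-14B")
for marker in ["<gro>", "<ocr>", "<char>", "</char>", "<bbox>", "</bbox>", "<delim>"]:
    ids = tok(marker, add_special_tokens=False)["input_ids"]
    print(marker, ids)  # expect exactly one token id per marker
```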
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be6a8990f1e9afb195f92b5408eb1ccc3c1a7baf263fe638b5a375b24b310524
+size 11424851
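
This is a Git LFS pointer, not the tokenizer itself: `oid` is the SHA-256 of the real ~11 MB file. A sketch of fetching the actual file via `huggingface_hub` and checking it against the pointer:

```python
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download("NCSOFT/VARCO-VISION-2.0-14B", "tokenizer.json")
data = open(path, "rb").read()

print(len(data))                         # expect 11424851
print(hashlib.sha256(data).hexdigest())  # expect be6a8990f1e9afb1...
```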
tokenizer_config.json
ADDED
@@ -0,0 +1,316 @@
+{
+  "add_bos_token": false,
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "151643": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151644": {
+      "content": "<|im_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151645": {
+      "content": "<|im_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151646": {
+      "content": "<|object_ref_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151647": {
+      "content": "<|object_ref_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151648": {
+      "content": "<|box_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151649": {
+      "content": "<|box_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151650": {
+      "content": "<|quad_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151651": {
+      "content": "<|quad_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151652": {
+      "content": "<|vision_start|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151653": {
+      "content": "<|vision_end|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151654": {
+      "content": "<|vision_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151655": {
+      "content": "<|image_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151656": {
+      "content": "<|video_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "151657": {
+      "content": "<tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151658": {
+      "content": "</tool_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151659": {
+      "content": "<|fim_prefix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151660": {
+      "content": "<|fim_middle|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151661": {
+      "content": "<|fim_suffix|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151662": {
+      "content": "<|fim_pad|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151663": {
+      "content": "<|repo_name|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
+    },
+    "151664": {
+      "content": "<|file_sep|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": false
180 |
+
},
|
181 |
+
"151665": {
|
182 |
+
"content": "<tool_response>",
|
183 |
+
"lstrip": false,
|
184 |
+
"normalized": false,
|
185 |
+
"rstrip": false,
|
186 |
+
"single_word": false,
|
187 |
+
"special": false
|
188 |
+
},
|
189 |
+
"151666": {
|
190 |
+
"content": "</tool_response>",
|
191 |
+
"lstrip": false,
|
192 |
+
"normalized": false,
|
193 |
+
"rstrip": false,
|
194 |
+
"single_word": false,
|
195 |
+
"special": false
|
196 |
+
},
|
197 |
+
"151667": {
|
198 |
+
"content": "<think>",
|
199 |
+
"lstrip": false,
|
200 |
+
"normalized": false,
|
201 |
+
"rstrip": false,
|
202 |
+
"single_word": false,
|
203 |
+
"special": false
|
204 |
+
},
|
205 |
+
"151668": {
|
206 |
+
"content": "</think>",
|
207 |
+
"lstrip": false,
|
208 |
+
"normalized": false,
|
209 |
+
"rstrip": false,
|
210 |
+
"single_word": false,
|
211 |
+
"special": false
|
212 |
+
},
|
213 |
+
"151669": {
|
214 |
+
"content": "[UNK]",
|
215 |
+
"lstrip": false,
|
216 |
+
"normalized": false,
|
217 |
+
"rstrip": false,
|
218 |
+
"single_word": false,
|
219 |
+
"special": true
|
220 |
+
},
|
221 |
+
"151670": {
|
222 |
+
"content": "<gro>",
|
223 |
+
"lstrip": false,
|
224 |
+
"normalized": false,
|
225 |
+
"rstrip": false,
|
226 |
+
"single_word": false,
|
227 |
+
"special": true
|
228 |
+
},
|
229 |
+
"151671": {
|
230 |
+
"content": "<ocr>",
|
231 |
+
"lstrip": false,
|
232 |
+
"normalized": false,
|
233 |
+
"rstrip": false,
|
234 |
+
"single_word": false,
|
235 |
+
"special": true
|
236 |
+
},
|
237 |
+
"151672": {
|
238 |
+
"content": "<char>",
|
239 |
+
"lstrip": false,
|
240 |
+
"normalized": false,
|
241 |
+
"rstrip": false,
|
242 |
+
"single_word": false,
|
243 |
+
"special": true
|
244 |
+
},
|
245 |
+
"151673": {
|
246 |
+
"content": "</char>",
|
247 |
+
"lstrip": false,
|
248 |
+
"normalized": false,
|
249 |
+
"rstrip": false,
|
250 |
+
"single_word": false,
|
251 |
+
"special": true
|
252 |
+
},
|
253 |
+
"151674": {
|
254 |
+
"content": "<obj>",
|
255 |
+
"lstrip": false,
|
256 |
+
"normalized": false,
|
257 |
+
"rstrip": false,
|
258 |
+
"single_word": false,
|
259 |
+
"special": true
|
260 |
+
},
|
261 |
+
"151675": {
|
262 |
+
"content": "</obj>",
|
263 |
+
"lstrip": false,
|
264 |
+
"normalized": false,
|
265 |
+
"rstrip": false,
|
266 |
+
"single_word": false,
|
267 |
+
"special": true
|
268 |
+
},
|
269 |
+
"151676": {
|
270 |
+
"content": "<bbox>",
|
271 |
+
"lstrip": false,
|
272 |
+
"normalized": false,
|
273 |
+
"rstrip": false,
|
274 |
+
"single_word": false,
|
275 |
+
"special": true
|
276 |
+
},
|
277 |
+
"151677": {
|
278 |
+
"content": "</bbox>",
|
279 |
+
"lstrip": false,
|
280 |
+
"normalized": false,
|
281 |
+
"rstrip": false,
|
282 |
+
"single_word": false,
|
283 |
+
"special": true
|
284 |
+
},
|
285 |
+
"151678": {
|
286 |
+
"content": "<delim>",
|
287 |
+
"lstrip": false,
|
288 |
+
"normalized": false,
|
289 |
+
"rstrip": false,
|
290 |
+
"single_word": false,
|
291 |
+
"special": true
|
292 |
+
}
|
293 |
+
},
|
294 |
+
"additional_special_tokens": [
|
295 |
+
"<gro>",
|
296 |
+
"<ocr>",
|
297 |
+
"<char>",
|
298 |
+
"</char>",
|
299 |
+
"<obj>",
|
300 |
+
"</obj>",
|
301 |
+
"<bbox>",
|
302 |
+
"</bbox>",
|
303 |
+
"<delim>"
|
304 |
+
],
|
305 |
+
"bos_token": null,
|
306 |
+
"clean_up_tokenization_spaces": false,
|
307 |
+
"eos_token": "<|im_end|>",
|
308 |
+
"errors": "replace",
|
309 |
+
"extra_special_tokens": {},
|
310 |
+
"model_max_length": 9216,
|
311 |
+
"pad_token": "[UNK]",
|
312 |
+
"padding_side": "right",
|
313 |
+
"split_special_tokens": false,
|
314 |
+
"tokenizer_class": "Qwen2Tokenizer",
|
315 |
+
"unk_token": "[UNK]"
|
316 |
+
}
|
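The decoder table assigns the nine VARCO-specific control tokens ids 151670-151678, immediately after Qwen's reserved special tokens and the newly added `[UNK]` (151669), and marks them special so BPE never splits them. A quick check, as a sketch against the published repo id:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NCSOFT/VARCO-VISION-2.0-14B")

# Each control tag should map to exactly one id in the 151670-151678 range.
for t in ["<gro>", "<ocr>", "<char>", "</char>", "<obj>",
          "</obj>", "<bbox>", "</bbox>", "<delim>"]:
    print(t, tok.convert_tokens_to_ids(t))

print(tok.model_max_length)  # 9216 per the config above
```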
video_preprocessor_config.json
ADDED
@@ -0,0 +1,37 @@
```json
{
  "crop_size": null,
  "data_format": "channels_first",
  "default_to_square": false,
  "device": null,
  "do_center_crop": null,
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_pad": null,
  "do_rescale": true,
  "do_resize": true,
  "do_sample_frames": false,
  "fps": null,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "SiglipImageProcessor",
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "input_data_format": null,
  "num_frames": null,
  "processor_class": "LlavaOnevisionProcessor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 384,
    "width": 384
  },
  "size_divisor": null,
  "video_metadata": null,
  "video_processor_type": "LlavaOnevisionVideoProcessor"
}
```
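The numbers in this config reduce to simple arithmetic: frames are resized to 384x384 (resample 2 is PIL's bilinear filter), scaled by 1/255 into [0, 1], then shifted by mean 0.5 and divided by std 0.5 into [-1, 1] (the SigLIP convention), in channels-first layout. A minimal numpy sketch of that pipeline on a dummy frame:

```python
import numpy as np

# Dummy uint8 RGB frame, assumed already resized to the configured 384x384.
frame = np.random.randint(0, 256, (384, 384, 3), dtype=np.uint8)

rescale_factor = 1 / 255              # 0.00392156862745098 in the config
mean = np.array([0.5, 0.5, 0.5])
std = np.array([0.5, 0.5, 0.5])

x = frame.astype(np.float32) * rescale_factor  # do_rescale  -> [0, 1]
x = (x - mean) / std                           # do_normalize -> [-1, 1]
x = x.transpose(2, 0, 1)                       # data_format: channels_first
print(x.shape, float(x.min()), float(x.max()))
```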