Image-Text-to-Text
Transformers
Safetensors
Cosmos
English
qwen2_5_vl
nvidia
unsloth
conversational
text-generation-inference
danielhanchen committed · verified
Commit 43d2fab · 1 Parent(s): 16c8639

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,262 @@
+ ---
+ license: other
+ license_name: nvidia-open-model-license
+ license_link: >-
+   https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
+ datasets:
+ - nvidia/Cosmos-Reason1-SFT-Dataset
+ - nvidia/Cosmos-Reason1-RL-Dataset
+ - nvidia/Cosmos-Reason1-Benchmark
+ library_name: transformers
+ language:
+ - en
+ base_model:
+ - nvidia/Cosmos-Reason1-7B
+ tags:
+ - nvidia
+ - unsloth
+ - cosmos
+ ---
+ <div>
+ <p style="margin-top: 0;margin-bottom: 0;">
+ <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
+ </p>
+ <div style="display: flex; gap: 5px; align-items: center; ">
+ <a href="https://github.com/unslothai/unsloth/">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+ </a>
+ <a href="https://discord.gg/unsloth">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+ </a>
+ <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+ </a>
+ </div>
+ </div>
+
+ # **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**
+
+ [**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)
+
+ # Model Overview
+
+ ## Description:
+
+ **Cosmos-Reason1 Models**: Physical AI models that understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning.
+
+ The Cosmos-Reason1 models are post-trained on physical common sense and embodied reasoning data with supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.
+
+ The models are ready for commercial use.
+
+ **Model Developer**: NVIDIA
+
+ ## Model Versions
+
+ Cosmos-Reason1 includes the following model:
+
+ - [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, think and generate the answer with respect to the input text prompt and video.
+
+ ### License:
+
+ This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [[email protected]](mailto:[email protected]).
+
+ Under the NVIDIA Open Model License, NVIDIA confirms:
+
+ * Models are commercially usable.
+ * You are free to create and distribute Derivative Models.
+ * NVIDIA does not claim ownership of any outputs generated using the Models or Derivative Models.
+
+ **Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively "Guardrail") contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.
+
+ ### Deployment Geography:
+
+ Global
+
+ ### Use Case:
+
+ Physical AI: space, time, and fundamental physics understanding and embodied reasoning, encompassing robotics and autonomous vehicles (AV).
+
+ ### Release Date:
+
+ * GitHub: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
+ * Hugging Face: [05/17/2025](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7)
+
+ ## Model Architecture:
+
+ Architecture Type: A multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM. <br>
+ Network Architecture: Qwen2.5-VL-7B-Instruct.
+
+ Cosmos-Reason1-7B is post-trained based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.
+
+ ## Input
+
+ **Input Type(s)**: Text + Video/Image
+
+ **Input Format(s)**:
+ * Text: String
+ * Video: mp4
+ * Image: jpg
+
+ **Input Parameters**:
+ * Text: One-dimensional (1D)
+ * Video: Three-dimensional (3D)
+ * Image: Two-dimensional (2D)
+
+ **Other Properties Related to Input**:
+ * Use `FPS=4` for input video to match the training setup.
+ * Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` to the system prompt to encourage a long chain-of-thought reasoning response.
+
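The recommended prompt setup above can be sketched as a chat payload in the multi-modal message convention used by Qwen2.5-VL-style models. `build_messages` is a hypothetical helper (not part of the model card); the question and video path are placeholders:

```python
# Sketch: assembling the recommended reasoning system prompt and an
# FPS=4 video input into a chat message list. Hypothetical helper;
# "demo.mp4" and the question are placeholders.
REASONING_SUFFIX = (
    "Answer the question in the following format: "
    "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
)

def build_messages(question: str, video_path: str) -> list[dict]:
    """Return a chat message list with the reasoning-format system prompt."""
    return [
        {"role": "system", "content": f"You are a helpful assistant. {REASONING_SUFFIX}"},
        {
            "role": "user",
            "content": [
                # fps=4 matches the training setup described above
                {"type": "video", "video": video_path, "fps": 4},
                {"type": "text", "text": question},
            ],
        },
    ]

messages = build_messages("What is the robot arm doing?", "demo.mp4")
print(messages[0]["content"])
```

A payload like this would then be passed through the model's chat template by an inference framework such as vLLM or Transformers.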
+ ## Output
+
+ **Output Type(s)**: Text
+
+ **Output Format**: String
+
+ **Output Parameters**: Text: One-dimensional (1D)
+
+ **Other Properties Related to Output**:
+ * We recommend using 4096 or more max output tokens to avoid truncation of long chain-of-thought responses.
+ * Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
+
+ ## Software Integration
+
+ **Runtime Engine(s):**
+
+ * [vLLM](https://github.com/vllm-project/vllm)
+
+ **Supported Hardware Microarchitecture Compatibility:**
+
+ * NVIDIA Blackwell
+ * NVIDIA Hopper
+
+ **Note**: We have only tested inference with BF16 precision.
+
+ **Operating System(s):**
+
+ * Linux (We have not tested on other operating systems.)
+
+ # Usage
+
+ See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
+ * Post Training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.
+
+ # Evaluation
+
+ Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), egocentric human demonstration (HoloAssist), and autonomous vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.
+ All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.
+
+ **Data Collection Method**:
+ * RoboVQA: Hybrid: Automatic/Sensors
+ * BridgeDataV2: Automatic/Sensors
+ * AgiBot: Automatic/Sensors
+ * RoboFail: Automatic/Sensors
+ * HoloAssist: Human
+ * AV: Automatic/Sensors
+
+ **Labeling Method**:
+ * RoboVQA: Hybrid: Human, Automated
+ * BridgeDataV2: Hybrid: Human, Automated
+ * AgiBot: Hybrid: Human, Automated
+ * RoboFail: Hybrid: Human, Automated
+ * HoloAssist: Hybrid: Human, Automated
+ * AV: Hybrid: Human, Automated
+
+ **Metrics**:
+ We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 of the paper due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.
+
+ | | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/) | [Agibot](https://github.com/OpenDriveLab/AgiBot-World) | [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Average |
+ |---|---|---|---|---|---|---|---|
+ | **Accuracy** | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |
+
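As a quick sanity check, the reported average is the unweighted mean of the six per-dataset accuracies:

```python
# Sanity check: the "Average" column equals the unweighted mean of the
# six per-dataset accuracies in the benchmark table.
scores = {
    "RoboVQA": 87.3, "AV": 70.8, "BridgeDataV2": 63.7,
    "Agibot": 48.9, "HoloAssist": 62.7, "RoboFail": 57.2,
}
average = round(sum(scores.values()) / len(scores), 1)
print(average)  # → 65.1
```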
+ ## Dataset Format
+ Modality: Video (mp4) and Text
+
+ ## Dataset Quantification
+ We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include the understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of video-text pairs is described in the table below.
+ **The AV data is currently unavailable and will be uploaded soon!**
+
+ | | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/) | [Agibot](https://github.com/OpenDriveLab/AgiBot-World) | [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Total Storage Size |
+ |---|---|---|---|---|---|---|---|
+ | **SFT Data** | 1.14m | 24.7k | 258k | 38.9k | 273k | N/A | **300.6GB** |
+ | **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
+ | **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |
+
+ We release text annotations for all embodied reasoning datasets, and videos for the RoboVQA and AV datasets. For the other datasets, users may download the source videos from the original data source and find the corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring generalization capability.
+
+ ## Inference:
+ **Acceleration Engine:** PyTorch, FlashAttention <br>
+ **Test Hardware:** H100, A100, GB200 <br>
+ * Minimum of 2 GPUs; multi-node setups require an InfiniBand / RoCE connection <br>
+
+ ## Ethical Considerations
+
+ NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements for the relevant industry and use case and addresses unforeseen product misuse.
+
+ Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
+
+ For more detailed information on ethical considerations for this model, please see the subcards on Explainability, Bias, Safety & Security, and Privacy below.
+
+ Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
+
+ ### Plus Plus (++) Promise
+
+ We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
+
+ * Verified to comply with current applicable disclosure laws, regulations, and industry standards.
+ * Verified to comply with applicable privacy labeling requirements.
+ * Annotated to describe the collector/source (NVIDIA or a third party).
+ * Characterized for technical limitations.
+ * Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
+ * Reviewed before release.
+ * Tagged for known restrictions and potential safety implications.
+
+ ### Bias
+
+ | Field | Response |
+ | :--- | :--- |
+ | Participation considerations from adversely impacted groups ([protected classes](https://www.senate.ca.gov/content/protected-classes)) in model design and testing: | None |
+ | Measures taken to mitigate against unwanted bias: | The training video sources contain multiple physical embodiments and environments, including humans, cars, single-arm robots, and bimanual robots in indoor and outdoor environments. By training on numerous and varied physical interactions and curated datasets, we strive to provide a model that does not possess biases towards certain embodiments or environments. |
+
+ ### Explainability
+
+ | Field | Response |
+ | :--- | :--- |
+ | Intended Application & Domain: | Physical AI Reasoning |
+ | Model Type: | Transformer |
+ | Intended Users: | Physical AI developers |
+ | Output: | Text |
+ | Describe how the model works: | Generates text answers based on the input text prompt and video |
+ | Technical Limitations: | The model may not follow the video or text input accurately in challenging cases where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
+ | Verified to have met prescribed NVIDIA quality standards: | Yes |
+ | Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes an embodied reasoning benchmark and a physical common sense benchmark to evaluate accuracy with visual question answering. |
+ | Potential Known Risks: | The model's output can generate all forms of text, including what may be considered toxic, offensive, or indecent. |
+ | Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
+
+ ### Privacy
+
+ | Field | Response |
+ | :--- | :--- |
+ | Generatable or reverse engineerable personal information? | None Known |
+ | Protected class data used to create this model? | None Known |
+ | Was consent obtained for any personal data used? | None Known |
+ | How often is dataset reviewed? | Before Release |
+ | Is there provenance for all datasets used in training? | Yes |
+ | Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
+ | Applicable Privacy Policy: | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |
+
+ ### Safety
+
+ | Field | Response |
+ | :--- | :--- |
+ | Model Application(s): | Physical AI common sense understanding and embodied reasoning |
+ | Describe the life critical impact (if present). | None Known |
+ | Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
+ | Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face and may become available on cloud providers' model catalogs. |
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "</tool_call>": 151658,
+ "<tool_call>": 151657,
+ "<|box_end|>": 151649,
+ "<|box_start|>": 151648,
+ "<|endoftext|>": 151643,
+ "<|file_sep|>": 151664,
+ "<|fim_middle|>": 151660,
+ "<|fim_pad|>": 151662,
+ "<|fim_prefix|>": 151659,
+ "<|fim_suffix|>": 151661,
+ "<|im_end|>": 151645,
+ "<|im_start|>": 151644,
+ "<|image_pad|>": 151655,
+ "<|object_ref_end|>": 151647,
+ "<|object_ref_start|>": 151646,
+ "<|quad_end|>": 151651,
+ "<|quad_start|>": 151650,
+ "<|repo_name|>": 151663,
+ "<|video_pad|>": 151656,
+ "<|vision_end|>": 151653,
+ "<|vision_pad|>": 151654,
+ "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,7 @@
+ {% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system
+ You are a helpful assistant.<|im_end|>
+ {% endif %}<|im_start|>{{ message['role'] }}
+ {% if message['content'] is string %}{{ message['content'] }}<|im_end|>
+ {% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>
+ {% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant
+ {% endif %}
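To illustrate what this template produces, here is a plain-Python re-implementation of its main path (no jinja2 dependency; a sketch covering the common cases, not the `add_vision_id` branches) applied to a single-turn video question:

```python
# Sketch: plain-Python rendering of the chat template's main path for a
# single-turn video question. Special-token names come from the template;
# the question text and video path are placeholders.
def render(messages: list[dict], add_generation_prompt: bool = True) -> str:
    parts = []
    # A default system turn is injected when the first message isn't system.
    if messages and messages[0]["role"] != "system":
        parts.append("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n")
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n")
        if isinstance(msg["content"], str):
            parts.append(msg["content"] + "<|im_end|>\n")
        else:
            for item in msg["content"]:
                if item.get("type") == "image":
                    parts.append("<|vision_start|><|image_pad|><|vision_end|>")
                elif item.get("type") == "video":
                    parts.append("<|vision_start|><|video_pad|><|vision_end|>")
                elif "text" in item:
                    parts.append(item["text"])
            parts.append("<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render([{"role": "user", "content": [
    {"type": "video", "video": "demo.mp4"},
    {"type": "text", "text": "What happens next?"},
]}])
print(prompt)
```

Each video or image placeholder is later expanded by the processor into the actual `<|video_pad|>`/`<|image_pad|>` vision tokens.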
config.json ADDED
@@ -0,0 +1,106 @@
+ {
+ "architectures": [
+ "Qwen2_5_VLForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 3584,
+ "image_token_id": 151655,
+ "initializer_range": 0.02,
+ "intermediate_size": 18944,
+ "max_position_embeddings": 128000,
+ "max_window_layers": 28,
+ "model_type": "qwen2_5_vl",
+ "num_attention_heads": 28,
+ "num_hidden_layers": 28,
+ "num_key_value_heads": 4,
+ "pad_token_id": 151654,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": {
+ "mrope_section": [
+ 16,
+ 24,
+ 24
+ ],
+ "rope_type": "default",
+ "type": "default"
+ },
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "text_config": {
+ "architectures": [
+ "Qwen2_5_VLForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 151643,
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 3584,
+ "image_token_id": null,
+ "initializer_range": 0.02,
+ "intermediate_size": 18944,
+ "max_position_embeddings": 128000,
+ "max_window_layers": 28,
+ "model_type": "qwen2_5_vl_text",
+ "num_attention_heads": 28,
+ "num_hidden_layers": 28,
+ "num_key_value_heads": 4,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": {
+ "mrope_section": [
+ 16,
+ 24,
+ 24
+ ],
+ "rope_type": "default",
+ "type": "default"
+ },
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "torch_dtype": "bfloat16",
+ "use_cache": true,
+ "use_sliding_window": false,
+ "video_token_id": null,
+ "vision_end_token_id": 151653,
+ "vision_start_token_id": 151652,
+ "vision_token_id": 151654,
+ "vocab_size": 152064
+ },
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.52.3",
+ "unsloth_fixed": true,
+ "use_cache": true,
+ "use_sliding_window": false,
+ "video_token_id": 151656,
+ "vision_config": {
+ "depth": 32,
+ "fullatt_block_indexes": [
+ 7,
+ 15,
+ 23,
+ 31
+ ],
+ "hidden_act": "silu",
+ "hidden_size": 1280,
+ "in_channels": 3,
+ "in_chans": 3,
+ "initializer_range": 0.02,
+ "intermediate_size": 3420,
+ "model_type": "qwen2_5_vl",
+ "num_heads": 16,
+ "out_hidden_size": 3584,
+ "patch_size": 14,
+ "spatial_merge_size": 2,
+ "spatial_patch_size": 14,
+ "temporal_patch_size": 2,
+ "tokens_per_second": 2,
+ "torch_dtype": "bfloat16",
+ "window_size": 112
+ },
+ "vision_end_token_id": 151653,
+ "vision_start_token_id": 151652,
+ "vision_token_id": 151654,
+ "vocab_size": 152064
+ }
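A couple of architecture facts follow directly from the config values above, sketched here as a quick derivation (values copied from the config; the interpretation as grouped-query attention follows the standard Qwen2.5 design):

```python
# Sketch: deriving per-head dimension and the grouped-query-attention
# (GQA) ratio from the text-backbone values in config.json above.
hidden_size = 3584
num_attention_heads = 28
num_key_value_heads = 4

head_dim = hidden_size // num_attention_heads          # dimension per attention head
gqa_group = num_attention_heads // num_key_value_heads # query heads sharing each KV head

print(head_dim, gqa_group)  # → 128 7
```

The 7:1 query-to-KV-head ratio shrinks the KV cache accordingly relative to full multi-head attention.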
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "bos_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
+ "max_length": 128000,
+ "pad_token_id": 151654,
+ "repetition_penalty": 1.05,
+ "temperature": 1e-06,
+ "transformers_version": "4.52.3"
+ }
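Note that `"do_sample": true` combined with `"temperature": 1e-06` is effectively greedy decoding: dividing logits by a near-zero temperature collapses the softmax onto the argmax token. A small sketch with made-up logits:

```python
import math

# Sketch: why temperature 1e-06 makes sampling effectively greedy.
# The logits here are made up for illustration.
def softmax(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.0, 1.2, 0.8]
probs = softmax(logits, temperature=1e-06)
print(round(probs[1], 6))  # → 1.0: all probability mass on the highest logit
```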
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e2251aa8f91e15bd345ad7cada85f83e4fe087d41f43d968f0c566a04696073
+ size 4968243304
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:503b87e7e15d34830d47bca272f734b08bfc6af3a36d489e500261807fb1e13e
+ size 4991495816
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a1d8b031246eee79a6a0a6fc99f526bad78f69e16a9555633fae0d0dc3fa999
+ size 4932751040
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc041a7ce8ead8b9a4e218e484d94164ce945dcb68799814737dd0f57d969f21
+ size 1691924384
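The four safetensors shards listed above sum to about 16.6 GB, which at 2 bytes per bfloat16 parameter corresponds to roughly 8.3B parameters (LLM backbone plus vision encoder; a rough estimate, since the file sizes also include safetensors headers):

```python
# Sketch: total checkpoint size from the four shard sizes listed above,
# and a rough bf16 parameter-count estimate (2 bytes per parameter).
shard_sizes = [4968243304, 4991495816, 4932751040, 1691924384]

total_bytes = sum(shard_sizes)
approx_params_billion = total_bytes / 2 / 1e9  # rough: ignores file headers

print(total_bytes, round(approx_params_billion, 2))  # → 16584414544 8.29
```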
model.safetensors.index.json ADDED
@@ -0,0 +1,736 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "metadata": {
3
+ "total_size": 16584333312
4
+ },
5
+ "weight_map": {
6
+ "visual.patch_embed.proj.weight": "model-00001-of-00004.safetensors",
7
+ "visual.blocks.0.norm1.weight": "model-00001-of-00004.safetensors",
8
+ "visual.blocks.0.norm2.weight": "model-00001-of-00004.safetensors",
9
+ "visual.blocks.0.attn.qkv.weight": "model-00001-of-00004.safetensors",
10
+ "visual.blocks.0.attn.qkv.bias": "model-00001-of-00004.safetensors",
11
+ "visual.blocks.0.attn.proj.weight": "model-00001-of-00004.safetensors",
12
+ "visual.blocks.0.attn.proj.bias": "model-00001-of-00004.safetensors",
13
+ "visual.blocks.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
14
+ "visual.blocks.0.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
15
+ "visual.blocks.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
16
+ "visual.blocks.0.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
17
+ "visual.blocks.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
18
+ "visual.blocks.0.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
19
+ "visual.blocks.1.norm1.weight": "model-00001-of-00004.safetensors",
20
+ "visual.blocks.1.norm2.weight": "model-00001-of-00004.safetensors",
21
+ "visual.blocks.1.attn.qkv.weight": "model-00001-of-00004.safetensors",
22
+ "visual.blocks.1.attn.qkv.bias": "model-00001-of-00004.safetensors",
23
+ "visual.blocks.1.attn.proj.weight": "model-00001-of-00004.safetensors",
24
+ "visual.blocks.1.attn.proj.bias": "model-00001-of-00004.safetensors",
25
+ "visual.blocks.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
26
+ "visual.blocks.1.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
27
+ "visual.blocks.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
28
+ "visual.blocks.1.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
29
+ "visual.blocks.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
30
+ "visual.blocks.1.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
31
+ "visual.blocks.2.norm1.weight": "model-00001-of-00004.safetensors",
32
+ "visual.blocks.2.norm2.weight": "model-00001-of-00004.safetensors",
33
+ "visual.blocks.2.attn.qkv.weight": "model-00001-of-00004.safetensors",
34
+ "visual.blocks.2.attn.qkv.bias": "model-00001-of-00004.safetensors",
35
+ "visual.blocks.2.attn.proj.weight": "model-00001-of-00004.safetensors",
36
+ "visual.blocks.2.attn.proj.bias": "model-00001-of-00004.safetensors",
37
+ "visual.blocks.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
38
+ "visual.blocks.2.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
39
+ "visual.blocks.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
40
+ "visual.blocks.2.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
41
+ "visual.blocks.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
42
+ "visual.blocks.2.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
43
+ "visual.blocks.3.norm1.weight": "model-00001-of-00004.safetensors",
44
+ "visual.blocks.3.norm2.weight": "model-00001-of-00004.safetensors",
45
+ "visual.blocks.3.attn.qkv.weight": "model-00001-of-00004.safetensors",
46
+ "visual.blocks.3.attn.qkv.bias": "model-00001-of-00004.safetensors",
47
+ "visual.blocks.3.attn.proj.weight": "model-00001-of-00004.safetensors",
48
+ "visual.blocks.3.attn.proj.bias": "model-00001-of-00004.safetensors",
49
+ "visual.blocks.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
50
+ "visual.blocks.3.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
51
+ "visual.blocks.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
52
+ "visual.blocks.3.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
53
+ "visual.blocks.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
54
+ "visual.blocks.3.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
55
+ "visual.blocks.4.norm1.weight": "model-00001-of-00004.safetensors",
56
+ "visual.blocks.4.norm2.weight": "model-00001-of-00004.safetensors",
57
+ "visual.blocks.4.attn.qkv.weight": "model-00001-of-00004.safetensors",
58
+ "visual.blocks.4.attn.qkv.bias": "model-00001-of-00004.safetensors",
59
+ "visual.blocks.4.attn.proj.weight": "model-00001-of-00004.safetensors",
60
+ "visual.blocks.4.attn.proj.bias": "model-00001-of-00004.safetensors",
61
+ "visual.blocks.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
62
+ "visual.blocks.4.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
63
+ "visual.blocks.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
64
+ "visual.blocks.4.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
65
+ "visual.blocks.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
66
+ "visual.blocks.4.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
67
+ "visual.blocks.5.norm1.weight": "model-00001-of-00004.safetensors",
68
+ "visual.blocks.5.norm2.weight": "model-00001-of-00004.safetensors",
69
+ "visual.blocks.5.attn.qkv.weight": "model-00001-of-00004.safetensors",
70
+ "visual.blocks.5.attn.qkv.bias": "model-00001-of-00004.safetensors",
71
+ "visual.blocks.5.attn.proj.weight": "model-00001-of-00004.safetensors",
72
+ "visual.blocks.5.attn.proj.bias": "model-00001-of-00004.safetensors",
73
+ "visual.blocks.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
74
+ "visual.blocks.5.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
75
+ "visual.blocks.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
76
+ "visual.blocks.5.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
77
+ "visual.blocks.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
78
+ "visual.blocks.5.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
79
+ "visual.blocks.6.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.6.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.7.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.8.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.9.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.10.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.11.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.12.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.13.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.14.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.15.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.16.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.17.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.18.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.19.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.20.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.21.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.22.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.23.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.24.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.25.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.26.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.27.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.28.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.29.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.30.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.norm1.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.norm2.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.attn.qkv.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.attn.qkv.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.attn.proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.attn.proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.gate_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.up_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "visual.blocks.31.mlp.down_proj.bias": "model-00001-of-00004.safetensors",
+ "visual.merger.ln_q.weight": "model-00001-of-00004.safetensors",
+ "visual.merger.mlp.0.weight": "model-00001-of-00004.safetensors",
+ "visual.merger.mlp.0.bias": "model-00001-of-00004.safetensors",
+ "visual.merger.mlp.2.weight": "model-00001-of-00004.safetensors",
+ "visual.merger.mlp.2.bias": "model-00001-of-00004.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
515
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
516
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
517
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
518
+ "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
519
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
520
+ "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
521
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
522
+ "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
523
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
524
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
525
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
526
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
527
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
528
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
529
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
530
+ "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
531
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
532
+ "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
533
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
534
+ "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
535
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
536
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
537
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
538
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
539
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
540
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
541
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
542
+ "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
543
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
544
+ "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
545
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
546
+ "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
547
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
548
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
549
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
550
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
551
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
552
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
553
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
554
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
555
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
556
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
557
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
558
+ "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
559
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
560
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
561
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
562
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
563
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
564
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
565
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
566
+ "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
567
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
568
+ "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
569
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
570
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
571
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
572
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
573
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
574
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
575
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
576
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
577
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
578
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
579
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
580
+ "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
581
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
582
+ "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
583
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
584
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
585
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
586
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
587
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
588
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
589
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
590
+ "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
591
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
592
+ "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
593
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
594
+ "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
595
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
596
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
597
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
598
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
599
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
600
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
601
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
602
+ "model.layers.17.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
603
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
604
+ "model.layers.17.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
605
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
606
+ "model.layers.17.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
607
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
608
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
609
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
610
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
611
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
612
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
613
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
614
+ "model.layers.18.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
615
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
616
+ "model.layers.18.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
617
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
618
+ "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
619
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
620
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
621
+ "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
622
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
623
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
624
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
625
+ "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
626
+ "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
627
+ "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
628
+ "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
629
+ "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
630
+ "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
631
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
632
+ "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
633
+ "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
634
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
635
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
636
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
637
+ "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
638
+ "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
639
+ "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
640
+ "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
641
+ "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
642
+ "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
643
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
644
+ "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
645
+ "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
646
+ "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
647
+ "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
648
+ "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
649
+ "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
650
+ "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
651
+ "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
652
+ "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
653
+ "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
654
+ "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
655
+ "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
656
+ "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
657
+ "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
658
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
659
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
660
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
661
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
662
+ "model.layers.22.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
663
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
664
+ "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
665
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
666
+ "model.layers.22.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
667
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
668
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
669
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
670
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
671
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
672
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
673
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
674
+ "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
675
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
676
+ "model.layers.23.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
677
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
678
+ "model.layers.23.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
679
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
680
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
681
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
682
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
683
+ "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
684
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
685
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
686
+ "model.layers.24.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
687
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
688
+ "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
689
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
690
+ "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
691
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
692
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
693
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
694
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
695
+ "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
696
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
697
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
698
+ "model.layers.25.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
699
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
700
+ "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
701
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
702
+ "model.layers.25.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
703
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
704
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
705
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
706
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
707
+ "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
708
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
709
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
710
+ "model.layers.26.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
711
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
712
+ "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
713
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
714
+ "model.layers.26.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
715
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
716
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
717
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
718
+ "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
719
+ "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
720
+ "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
721
+ "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
722
+ "model.layers.27.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
723
+ "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
724
+ "model.layers.27.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
725
+ "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
726
+ "model.layers.27.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
727
+ "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
728
+ "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
729
+ "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
730
+ "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
731
+ "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
732
+ "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
733
+ "model.norm.weight": "model-00004-of-00004.safetensors",
734
+ "lm_head.weight": "model-00004-of-00004.safetensors"
735
+ }
736
+ }
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "Qwen2VLImageProcessor",
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "max_pixels": 12845056,
+ "merge_size": 2,
+ "min_pixels": 3136,
+ "patch_size": 14,
+ "processor_class": "Qwen2_5_VLProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "longest_edge": 12845056,
+ "shortest_edge": 3136
+ },
+ "temporal_patch_size": 2
+ }
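The `min_pixels` / `max_pixels` / `patch_size` / `merge_size` values in this config determine how input images are resized before being split into vision patches. A minimal sketch of that behavior, modeled on the Qwen2-VL image processor's smart-resize logic (a simplified illustration, not the exact library code):

```python
import math

# Values from preprocessor_config.json above
PATCH_SIZE = 14
MERGE_SIZE = 2
MIN_PIXELS = 3136       # size.shortest_edge
MAX_PIXELS = 12845056   # size.longest_edge

def smart_resize(height, width, factor=PATCH_SIZE * MERGE_SIZE,
                 min_pixels=MIN_PIXELS, max_pixels=MAX_PIXELS):
    """Round (height, width) to multiples of `factor` while keeping the
    total pixel count within [min_pixels, max_pixels]."""
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        # Shrink proportionally so the patched image fits the pixel budget
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        # Grow proportionally so the image meets the minimum pixel count
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar
```

Both output dimensions are multiples of `patch_size * merge_size = 28`, so the resized image maps cleanly onto 14×14 patches merged in 2×2 groups.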
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+ size 11421896
tokenizer_config.json ADDED
@@ -0,0 +1,210 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 128000,
+ "pad_token": "<|vision_pad|>",
+ "padding_side": "left",
+ "processor_class": "Qwen2_5_VLProcessor",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null,
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
+ }
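For messages whose content is a plain string, the `chat_template` in this config reduces to the standard ChatML layout, injecting a default system prompt when none is given. A minimal sketch of that text-only path (hand-rolled for illustration; in practice the Jinja template is rendered via `tokenizer.apply_chat_template`):

```python
def render_chatml(messages, add_generation_prompt=False):
    """Render string-content messages the way the chat template above does:
    default system prompt, <|im_start|>/<|im_end|> framing per message,
    optional trailing assistant header."""
    out = ""
    # The template prepends a default system turn if the first message
    # is not already a system message.
    if messages and messages[0]["role"] != "system":
        out += "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out
```

Image and video content entries additionally wrap `<|image_pad|>` / `<|video_pad|>` in `<|vision_start|>` … `<|vision_end|>`, which this text-only sketch omits.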
video_preprocessor_config.json ADDED
@@ -0,0 +1,86 @@
+ {
+ "_valid_kwargs_names": [
+ "do_convert_rgb",
+ "do_resize",
+ "size",
+ "size_divisor",
+ "default_to_square",
+ "resample",
+ "do_rescale",
+ "rescale_factor",
+ "do_normalize",
+ "image_mean",
+ "image_std",
+ "do_pad",
+ "do_center_crop",
+ "crop_size",
+ "data_format",
+ "input_data_format",
+ "device",
+ "min_pixels",
+ "max_pixels",
+ "patch_size",
+ "temporal_patch_size",
+ "merge_size"
+ ],
+ "crop_size": null,
+ "data_format": "channels_first",
+ "default_to_square": true,
+ "device": null,
+ "do_center_crop": null,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_pad": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "Qwen2VLImageProcessor",
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "input_data_format": null,
+ "max_pixels": 12845056,
+ "merge_size": 2,
+ "min_pixels": 3136,
+ "model_valid_processing_keys": [
+ "do_convert_rgb",
+ "do_resize",
+ "size",
+ "size_divisor",
+ "default_to_square",
+ "resample",
+ "do_rescale",
+ "rescale_factor",
+ "do_normalize",
+ "image_mean",
+ "image_std",
+ "do_pad",
+ "do_center_crop",
+ "crop_size",
+ "data_format",
+ "input_data_format",
+ "device",
+ "min_pixels",
+ "max_pixels",
+ "patch_size",
+ "temporal_patch_size",
+ "merge_size"
+ ],
+ "patch_size": 14,
+ "processor_class": "Qwen2_5_VLProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "longest_edge": 12845056,
+ "shortest_edge": 3136
+ },
+ "size_divisor": null,
+ "temporal_patch_size": 2,
+ "video_processor_type": "Qwen2VLVideoProcessor"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff