W0219 19:59:24.363000 3191014 .local/lib/python3.10/site-packages/torch/distributed/run.py:793]
W0219 19:59:24.363000 3191014 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] *****************************************
W0219 19:59:24.363000 3191014 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0219 19:59:24.363000 3191014 .local/lib/python3.10/site-packages/torch/distributed/run.py:793] *****************************************
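The launcher pins OMP_NUM_THREADS=1 for every rank whenever the variable is left unset. A minimal sketch of choosing the value explicitly instead, assuming the training script can set it before torch is imported (the value 8 below is a placeholder, not taken from this run):

    import os

    # Assumed placeholder: roughly physical cores per node / ranks per node.
    os.environ.setdefault("OMP_NUM_THREADS", "8")

    import torch  # import after setting the env var so intra-op threading picks it up

Exporting OMP_NUM_THREADS in the job script before invoking torchrun has the same effect.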
PyTorch: setting up devices
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
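The type-mismatch warning above comes from loading this Qwen2.5-VL checkpoint's config.json into a config class whose model_type is llava_qwen. A minimal sketch of the pattern that produces it, assuming a stand-in config class (the real LlavaQwenConfig in this codebase defines many more fields):

    from transformers import PretrainedConfig

    class LlavaQwenConfig(PretrainedConfig):
        # Assumed stand-in for the project's real class; only model_type matters here.
        model_type = "llava_qwen"

    # The checkpoint's config.json declares model_type "qwen2_5_vl", so this logs:
    # "You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen."
    cfg = LlavaQwenConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

The warning is informational as long as the two architectures agree on the fields actually read.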
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
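The Flash Attention warning fires because the model is built on CPU in bf16 and FlashAttention-2 kernels only run on CUDA. A hedged sketch of the fix the warning suggests, using the stock Qwen2.5-VL class as a stand-in for this run's custom LlavaQwenForCausalLM:

    import torch
    from transformers import Qwen2_5_VLForConditionalGeneration

    # Build in bf16 with FA2 selected, then move to the GPU before the first
    # forward pass; the warning is harmless until then.
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",
    )
    model.to("cuda")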
Loading checkpoint shards:   0%|          | 0/5 [00:00<?, ?it/s]
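The 0/5 progress bar reflects the five shards listed in the safetensors index loaded earlier. A small sketch of inspecting that weight map directly, using the cache path copied from this log:

    import json

    index_path = ("/fsx_0/user/zhaojiang/models/hub/"
                  "models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/"
                  "6e6556e8ce728c7b3e438d75ebf04ec93403dc19/"
                  "model.safetensors.index.json")
    with open(index_path) as f:
        weight_map = json.load(f)["weight_map"]  # tensor name -> shard file
    print(sorted(set(weight_map.values())))  # the five shard files counted above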
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
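The Flash Attention 2.0 warning fires because the weights are materialized on CPU before being moved to the GPU. A minimal sketch of the loading pattern the warning recommends, using the stock Qwen2.5-VL class for illustration (this run actually loads through its custom llava_qwen class):

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

# Initialize under bf16 with FlashAttention-2, then move to GPU as the
# warning suggests; device_map="cuda" would avoid the CPU round-trip.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
model.to("cuda")  # resolves the "not initialized on GPU" condition
```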
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
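For reference, these generation defaults are Qwen's chat special tokens (151643 is <|endoftext|>, 151645 is <|im_end|>). A quick way to confirm, assuming the tokenizer for the same checkpoint:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(tok.convert_ids_to_tokens([151643, 151645]))
# ['<|endoftext|>', '<|im_end|>']
```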
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
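model.safetensors.index.json is the index file of a sharded checkpoint; the five shards behind the progress bar can be enumerated from its weight_map. A small sketch using the cache path from the log (the index layout is the standard Hugging Face sharded-checkpoint format):

```python
import json

# "weight_map" maps each parameter name to the shard file storing it.
index_path = (
    "/fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/"
    "snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/"
    "model.safetensors.index.json"
)
with open(index_path) as f:
    index = json.load(f)

shards = sorted(set(index["weight_map"].values()))
print(len(shards), "shards:", shards)  # expect 5 for this checkpoint
```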
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
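A few quantities derived from the dump above are worth sanity-checking when editing the config; this is plain arithmetic on the printed fields, nothing more:

# Grouped-query attention: 28 query heads share 4 KV heads (7 queries per KV head).
hidden_size = 3584
num_attention_heads = 28
num_key_value_heads = 4
head_dim = hidden_size // num_attention_heads                # 128
queries_per_kv = num_attention_heads // num_key_value_heads  # 7
# mrope_section [16, 24, 24] sums to 64 == head_dim // 2: it partitions the
# rotary half-dimension across the temporal/height/width axes.
assert sum([16, 24, 24]) == head_dim // 2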
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
Loading checkpoint shards:   0%|          | 0/5 [00:00<?, ?it/s]
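The 0/5 total comes from the sharded checkpoint's index file, whose cache path is printed above; a small sketch of reading it (standard safetensors index layout assumed):

    import json

    index_path = ("/fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/"
                  "snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json")
    with open(index_path) as f:
        index = json.load(f)

    # weight_map: parameter name -> shard file; the distinct files are the shards.
    shards = sorted(set(index["weight_map"].values()))
    print(len(shards), "shards")  # 5, matching "Loading checkpoint shards: 0/5"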
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Loading checkpoint shards:   0%|          | 0/5 [00:00<?, ?it/s]
Loading checkpoint shards:  20%|██        | 1/5 [00:00<00:01,  2.19it/s]
Loading checkpoint shards:  40%|████      | 2/5 [00:00<00:01,  2.76it/s]
Loading checkpoint shards:  60%|██████    | 3/5 [00:01<00:00,  2.97it/s]
PyTorch: setting up devices
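"PyTorch: setting up devices" is printed once per process while TrainingArguments resolves its device; under torchrun each rank binds to the GPU named by LOCAL_RANK. A rough sketch of the equivalent manual setup (an assumption for illustration, not this job's script):

import os
import torch

local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)          # one GPU per torchrun worker
device = torch.device("cuda", local_rank)
print(f"rank {local_rank} -> {device}")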
shards: 40%|████ | 2/5 [00:00<00:01, 2.96it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.96it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.80it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:01<00:01, 1.73it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.17it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.19it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.06it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.15it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.17it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.71it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.31it/s]
Loading checkpoint shards: 80%|██ shards: 40%|████ | 2/5 [00:00<00:01, 2.84it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.63it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.90it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:01, 2.67it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.54it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.00it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.26it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.95it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.93it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.11it/s]
Loading checkpoint shards: 80%|██ 2.84it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.78it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.81it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.16it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.74it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.09it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.65it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.93it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.05it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.05it/s]
Loading checkpoint shards: 80%|█████� 2.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.03it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.88it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.21it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.06it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.04it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.11it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.57it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.07it/s]
Loading checkpoint shards: 80%|█████� 2.22it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.92it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.80it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.84it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.18it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.20it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.09it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.13it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.55it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.06it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.92it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.81it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.90it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.89it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.04it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.61it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.07it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.06it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.31it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00, ?it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.12it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.52it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.41it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.90it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.82it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.90it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.93it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.71it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.08it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
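The three lines above are the standard transformers confirmation that every checkpoint shard was consumed and every tensor of LlavaQwenForCausalLM was initialized from Qwen/Qwen2.5-VL-7B-Instruct. For reference, a minimal sketch of the kind of call that produces this output; the llava.model import path and the bfloat16 dtype are assumptions, since the log shows only the class name and the repo id:

    import torch
    # Assumed import path for the class named in the log; adjust to the actual package.
    from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM

    model = LlavaQwenForCausalLM.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",            # repo id reported in the log
        torch_dtype=torch.bfloat16,               # assumption; the dtype is not logged
        attn_implementation="flash_attention_2",  # matches the generation config below
    )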
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.22it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.67it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.19it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.27it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.40it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.77it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.37it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.64it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.17it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.01it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.03it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.08it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.21it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.64it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.18it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.20it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.17it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.74it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.33it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.02it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.15it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.21it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.05it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.23it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.55it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.22it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.68it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.26it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.74it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.72it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.33it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.22it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.49it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.35it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.78it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.43it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.41it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.29it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.45it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.65it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.75it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.37it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.23it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 2.52it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.49it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.68it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.37it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.20it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.10it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.23it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.98it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.79it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.75it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.05it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.69it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.59it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.64it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.43it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.84it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.58it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.82it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.42it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.26it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.42it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.01it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.12it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.22it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.28it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.27it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.60it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.18it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.22it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.14it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.22it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.78it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.45it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.99it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.08it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.96it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.59it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.30it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.17it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.20it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.24it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.13it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.10it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.76it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.39it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.22it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.01it/s]
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.68it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.27it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.68it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.23it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.86it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.58it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.91it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 3.01it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.21it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.85it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.16it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.13it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.09it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.01it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.07it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.12it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.56it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.69it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.28it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.14it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.89it/s]
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.70it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.25it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.17it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.45it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.25it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.50it/s]
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.23it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.41it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.92it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.69it/s]
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.99it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.65it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.21it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.70it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.26it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.58it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.12it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.66it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.19it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.60it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.11it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.71it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.15it/s]
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.62it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.18it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.18it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.49it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.77it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.30it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.04it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.85it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.61it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.15it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.68it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.28it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.61it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.17it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.21it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 2.51it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.75it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:01<00:00, 2.94it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.55it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.15it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.07it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.01it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.97it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.09it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 2.97it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.10it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.12it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.40it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.67it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.27it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.88it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.44it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.47it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.02it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00:00, 3.15it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.56it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.15it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 3.15it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:02<00:00, 2.44it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
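The cache path above follows the standard huggingface_hub layout (models--<org>--<name>/snapshots/<commit>/...). A hedged sketch of resolving the same file programmatically; the cache_dir value is inferred from the logged path:

    from huggingface_hub import hf_hub_download

    # Resolves to .../models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/<commit>/generation_config.json
    path = hf_hub_download(
        repo_id="Qwen/Qwen2.5-VL-7B-Instruct",
        filename="generation_config.json",
        cache_dir="/fsx_0/user/zhaojiang/models/hub",  # inferred from the log
    )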
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
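The same config is printed by each process as it finishes loading. Note that top_k=1 keeps only the single highest-probability token, so even with "do_sample": true decoding is effectively greedy; temperature=0.1 and top_p=0.001 reinforce that. A sketch of the equivalent object built by hand (the run itself loads it from the cached generation_config.json rather than constructing it):

    from transformers import GenerationConfig

    gen_config = GenerationConfig(
        bos_token_id=151643,
        eos_token_id=[151645, 151643],
        pad_token_id=151643,
        do_sample=True,
        temperature=0.1,
        top_k=1,             # only the argmax token survives: effectively greedy
        top_p=0.001,
        repetition_penalty=1.05,
    )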
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.59it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.12it/s]
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.58it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.41it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.65it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.27it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.62it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.36it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.63it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.25it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.67it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.37it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.38it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 2.98it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.44it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 2.98it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
  "attn_implementation": "flash_attention_2",
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.1,
  "top_k": 1,
  "top_p": 0.001
}
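
The dump above shows the effective decoding settings: although do_sample is true, top_k of 1 leaves only the single highest-probability token after filtering, so decoding is effectively greedy, and temperature 0.1 and top_p 0.001 are redundant once that filter applies. A sketch reconstructing the same settings with the transformers GenerationConfig API; the attn_implementation entry in the dump is recorded from the model load rather than being a sampling knob, so it is left out here:

from transformers import GenerationConfig

gen_config = GenerationConfig(
    bos_token_id=151643,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
    do_sample=True,
    temperature=0.1,
    top_k=1,             # only the most likely token survives filtering -> effectively greedy
    top_p=0.001,         # redundant once top_k=1 has been applied
    repetition_penalty=1.05,
)

# Hypothetical usage with the model and some tokenized inputs from the loading step:
# outputs = model.generate(**inputs, generation_config=gen_config)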
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
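
After the weights, each rank resolves the image-processing configuration from the same cached snapshot. A minimal sketch of that step, assuming the standard AutoProcessor API; the cache_dir below is inferred from the /fsx_0/.../models/hub prefix in the log paths:

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    cache_dir="/fsx_0/user/zhaojiang/models/hub",  # inferred from the cache paths above
)
# The processor bundles the tokenizer with the image preprocessor configured by
# preprocessor_config.json (resizing, patching, and normalization for the vision tower).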
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
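The warning fires because use_fast was left unset when the processor was created. A sketch of making the choice explicit, which silences it (this is an assumption about how the run could be configured, not what its code actually did; use_fast=False matches the slow-processor behavior this log shows):

    from transformers import AutoProcessor

    # Pass use_fast explicitly: True opts in to the fast image processor,
    # False keeps the slow one the checkpoint was saved with.
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct", use_fast=False
    )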
Image processor Qwen2VLImageProcessor {
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "Qwen2VLImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Qwen2_5_VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "longest_edge": 12845056,
    "shortest_edge": 3136
  },
  "temporal_patch_size": 2
}
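These values fix the visual token budget: patch_size=14 with merge_size=2 means each 28x28 pixel block becomes one token after the 2x2 patch merge, so min_pixels=3136 (= 4 * 28^2) and max_pixels=12845056 (= 16384 * 28^2) bound an image to between 4 and 16384 visual tokens. A back-of-envelope sketch of that arithmetic (illustrative only; Qwen's actual preprocessing performs its own dimension rounding):

    # Illustrative arithmetic from the config above, not Qwen's exact resize code.
    patch_size = 14
    merge_size = 2
    unit = patch_size * merge_size        # 28 px per merged-token side
    min_pixels = 3136                     # = 4 * unit**2     -> at least 4 tokens
    max_pixels = 12845056                 # = 16384 * unit**2 -> at most 16384 tokens

    def approx_visual_tokens(height: int, width: int) -> int:
        # Assumes dimensions already rounded to multiples of `unit` (28).
        return (height // unit) * (width // unit)

    print(approx_visual_tokens(1120, 1120))  # 40 * 40 = 1600 tokens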
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
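For reference, the file lookups above all come from one tokenizer load; "from cache at None" just means the optional file (added_tokens.json, special_tokens_map.json, chat_template.jinja) is absent from this snapshot. A sketch, assuming the hub id and the cache root visible in the logged paths:

from transformers import AutoTokenizer

# Resolves vocab.json, merges.txt, tokenizer.json and tokenizer_config.json from
# the local hub cache; missing optional files resolve to None rather than erroring.
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    cache_dir="/fsx_0/user/zhaojiang/models/hub",  # cache root seen in this log
)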
loading file chat_template.jinja from cache at None
loading file special_tokens_map.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file added_tokens.json from cache at None
loading file chat_template.jinja from cache at None
loading file added_tokens.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file added_tokens.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file special_tokens_map.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file added_tokens.json from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file special_tokens_map.json from cache at None
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file chat_template.jinja from cache at None
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file added_tokens.json from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file special_tokens_map.json from cache at None
loading file added_tokens.json from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file special_tokens_map.json from cache at None
loading file added_tokens.json from cache at None
loading file chat_template.jinja from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
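Note: the cache paths above (under /fsx_0/user/zhaojiang/models/hub) indicate the Hugging Face cache was redirected to a shared FSx volume, so every rank resolves the same snapshot 6e6556e8ce728c7b3e438d75ebf04ec93403dc19 without re-downloading. A minimal sketch of reproducing that layout, assuming HF_HOME (whose hub/ subdirectory holds these snapshots) is the mechanism used here:

    import os

    # Assumption: point the HF cache at the shared volume before importing transformers,
    # which yields cache paths like the ones in this log.
    os.environ["HF_HOME"] = "/fsx_0/user/zhaojiang/models"

    from transformers import AutoProcessor, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")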
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file added_tokens.json from cache at None
loading file added_tokens.json from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file special_tokens_map.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file added_tokens.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file special_tokens_map.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file chat_template.jinja from cache at None
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file added_tokens.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file special_tokens_map.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file added_tokens.json from cache at None
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file special_tokens_map.json from cache at None
loading file added_tokens.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file special_tokens_map.json from cache at None
loading file chat_template.jinja from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
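The tokenizer side resolves vocab.json, merges.txt, and tokenizer.json from the same snapshot, while added_tokens.json, special_tokens_map.json, and chat_template.jinja resolve to None; those files are absent from this checkpoint, whose special tokens and chat template are presumably carried inside tokenizer_config.json/tokenizer.json. A quick way to reproduce this load on its own, assuming the standard AutoTokenizer entry point:

    # Sketch only: loads the same cached tokenizer files listed above.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    print(type(tok).__name__)  # fast tokenizer backed by tokenizer.json
    print(len(tok))            # BPE vocabulary defined by vocab.json + merges.txt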
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 3.03it/s]
Loading checkpoint shards: 20%|██ | 1/5 [00:00<00:01, 2.89it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.26it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.26it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 4.19it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.45it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.50it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.46it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.56it/s]
Loading checkpoint shards: 40%|████ | 2/5 [00:00<00:00, 3.41it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.56it/s]
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 4.49it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:01<00
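The five-shard progress bars above come from each rank loading the sharded safetensors weights of the 7B checkpoint (the last bar was clobbered mid-write by an interleaved log line). A minimal sketch of a load that produces this output, assuming the standard from_pretrained path rather than this run's actual training script:

    # Sketch only (this run's launcher is not shown in the log): loading the
    # cached 5-shard checkpoint prints the "Loading checkpoint shards" bars above.
    import torch
    from transformers import Qwen2_5_VLForConditionalGeneration

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,  # assumption; the dtype is not visible in the log
    )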
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
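This warning fires because tokens beyond the saved vocabulary were registered with the tokenizer; their embedding rows do not exist in the checkpoint and must be created, then trained, before they carry meaning. A minimal sketch of the usual pattern, assuming the generic transformers API; the added token name is hypothetical:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    num_added = tokenizer.add_special_tokens(
        {"additional_special_tokens": ["<my_new_token>"]}  # hypothetical token
    )
    # After loading the model (LlavaQwenForCausalLM in this run):
    # if num_added > 0:
    #     model.resize_token_embeddings(len(tokenizer))
    # The new rows start randomly initialized, which is why the log insists
    # the associated embeddings be fine-tuned before relying on them.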
Loading checkpoint shards: 60%|██████ | 3/5 [00:00<00:00, 3.87it/s]
Loading checkpoint shards: 80%|████████ | 4/5 [00:00<00:00, 4.53it/s]
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 3.63it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
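Note that this generation config is greedy decoding in all but name: do_sample is true, but top_k=1 always keeps only the single most probable token, and temperature=0.1 with top_p=0.001 only reinforces that. A minimal sketch of overriding it per call instead of editing the cached file, assuming a loaded `model` and tokenized `inputs` (both hypothetical here); per-call kwargs take precedence over the loaded GenerationConfig:

    output_ids = model.generate(
        **inputs,
        do_sample=False,       # make the effectively-greedy intent explicit
        max_new_tokens=256,    # hypothetical budget, not taken from this run
    )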
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
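The processor dump above pairs the image processor with the Qwen2 tokenizer and its vision special tokens (<|vision_start|>, <|image_pad|>, and so on). A minimal sketch of how the assembled Qwen2_5_VLProcessor is typically used, assuming the standard transformers API; the image path and prompt are hypothetical:

    from PIL import Image

    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    image = Image.open("example.jpg")  # hypothetical input image
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    # `inputs` now holds input_ids (with <|image_pad|> placeholders expanded to
    # match the number of visual patches) plus pixel_values and image_grid_thw.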
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
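In the image-processor block, min_pixels/max_pixels bound the dynamic-resolution resize: with patch_size 14 and merge_size 2, max_pixels 12845056 (= 3584 x 3584) caps an image at 12845056 / 14^2 = 65536 patches, i.e. 16384 visual tokens after 2x2 merging, while min_pixels 3136 (= 56 x 56) floors it at 4 tokens. Both are constructor kwargs, so the budget can be tightened at load time; a sketch with illustrative values (not this run's settings):

    from transformers import AutoProcessor

    # 28 * 28 pixels = one merged visual token (14px patches, 2x2 merge)
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        min_pixels=256 * 28 * 28,   # >= 256 visual tokens per image
        max_pixels=1280 * 28 * 28,  # <= 1280 visual tokens per image
    )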
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
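The tokenizer dump tops out at ID 151664 (151665 entries), so the training script has evidently appended three more tokens before calling resize_token_embeddings, landing on 151668 embedding rows; 151668 is not a multiple of 8, so the lm_head GEMM falls off the Tensor Core fast path, which is all this warning is about. A sketch of the padded variant (the three token strings are placeholders; the log does not show what was actually added):

    import torch
    from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16
    )

    # Placeholder names; this run adds 3 tokens (151665 -> 151668).
    tokenizer.add_tokens(["<extra_0>", "<extra_1>", "<extra_2>"], special_tokens=True)

    # Round the embedding rows up to 151680 (= 64 * 2370) so the matmul shapes
    # stay Tensor-Core friendly; this also silences the warning above.
    model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)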
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
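Everything in the block above can be reproduced from the hub checkpoint; a minimal sketch (assuming network access to the `Qwen/Qwen2.5-VL-7B-Instruct` repo) that reloads the processor and spot-checks two of the special-token ids printed in the dump:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
tok = processor.tokenizer

# Ids taken from the added_tokens_decoder dump above.
assert tok.convert_tokens_to_ids("<|vision_start|>") == 151652
assert tok.convert_tokens_to_ids("<|image_pad|>") == 151655
print(tok)  # should match the Qwen2TokenizerFast repr above
```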
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
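The warning above can be silenced by opting into the fast image processor explicitly; a hedged sketch (per the message itself, `use_fast=True` only introduces minor output differences):

```python
from transformers import AutoProcessor

# Opt in to the fast image processor now rather than waiting for the v4.48
# default flip; pass use_fast=False instead to pin the legacy slow path.
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    use_fast=True,
)
```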
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
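The numbers in this dump fully determine how many visual tokens an image costs: the pixel area is bounded to [min_pixels, max_pixels], each side snaps to a multiple of patch_size * merge_size = 28, and every merge_size x merge_size block of 14-px patches becomes one LLM-side token. A rough sketch that mirrors (but does not import) the processor's resize arithmetic, under those assumptions:

```python
import math

PATCH, MERGE = 14, 2                  # patch_size, merge_size from the dump
MIN_PIXELS, MAX_PIXELS = 3136, 12845056

def visual_token_count(height: int, width: int) -> int:
    """Approximate the number of LLM-side image tokens for an input image."""
    factor = PATCH * MERGE            # 28: each side is rounded to this grid
    area = height * width
    if area > MAX_PIXELS:             # shrink oversized images
        s = math.sqrt(MAX_PIXELS / area)
        height, width = int(height * s), int(width * s)
    elif area < MIN_PIXELS:           # grow tiny images
        s = math.sqrt(MIN_PIXELS / area)
        height, width = math.ceil(height * s), math.ceil(width * s)
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    # ViT patches first, then a MERGE x MERGE spatial merge down to LLM tokens.
    return (h // PATCH) * (w // PATCH) // (MERGE * MERGE)

print(visual_token_count(1080, 1920))  # e.g. one Full-HD frame
```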
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
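This warning points at a concrete fix: round the new vocabulary up when resizing. A hedged sketch (model class per the processor_class in these dumps; with a multiple of 64, the 151668-row table pads to 151680):

```python
from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# ... the training script adds its extra tokens to `tokenizer` here ...

# pad_to_multiple_of keeps the embedding row count Tensor Core friendly:
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
print(model.get_input_embeddings().weight.shape[0])  # 151680, not 151668
```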
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
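The tokenizer dump above also accounts for the 151668 in the resize warning: the base `vocab_size` is 151643 and the added ids run through 151664, i.e. 151665 entries in total, so the training script evidently registered three further tokens of its own before resizing. A hypothetical sketch of that pattern (the three token strings below are invented placeholders, not recovered from this log):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    print(len(tokenizer))  # 151665 = 151643 base ids + 22 added tokens
    # Three illustrative extra tokens bring the count to 151668, matching the warning.
    tokenizer.add_special_tokens(
        {"additional_special_tokens": ["<placeholder_a>", "<placeholder_b>", "<placeholder_c>"]}
    )
    print(len(tokenizer))  # 151668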
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
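The `use_fast` notice concerns the image-processor backend picked by `from_pretrained`; a minimal sketch of pinning the choice explicitly instead of relying on the changing default (only the model id is taken from this log):

    from transformers import AutoProcessor

    # Opt in to the fast image processor ahead of the v4.48 default flip;
    # use_fast=False would keep the slow (PIL-based) processor and today's exact outputs.
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        use_fast=True,
    )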
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
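In this dump, `min_pixels` and `max_pixels` bound the dynamic-resolution resize: 3136 = 56 x 56 and 12845056 = 3584 x 3584, with both image sides kept at multiples of `patch_size` x `merge_size` = 28. The Qwen2-VL processors expose this pixel budget as load-time kwargs; a sketch of overriding it (the two values below are illustrative, not from this run):

    from transformers import AutoProcessor

    # Shrink the visual token budget: each 28x28 region (one 2x2 merge of
    # 14x14 patches) becomes one token, so max_pixels = 1280 * 28 * 28 caps
    # an image at roughly 1280 visual tokens.
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        min_pixels=256 * 28 * 28,
        max_pixels=1280 * 28 * 28,
    )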
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
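The block above is the Qwen2_5_VLProcessor repr that transformers prints on each rank once the processor is assembled from its image processor and fast tokenizer. A sketch of loading the same processor directly; the min_pixels/max_pixels values mirror the logged size dict, and passing them is optional:

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=3136,       # shortest_edge from the logged config
    max_pixels=12845056,   # longest_edge from the logged config
)
print(processor)  # prints the same Qwen2_5_VLProcessor summary seen in this log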
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
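These "loading file ... from cache at ..." lines come from the tokenizer loader resolving each expected file against the Hugging Face hub cache; files reported as None simply do not ship with this checkpoint. A sketch of how the same cache hits arise; the HF_HOME value is an assumption inferred from the logged /fsx_0 paths:

import os
# Must be set before importing transformers/huggingface_hub; or pass cache_dir instead.
os.environ["HF_HOME"] = "/fsx_0/user/zhaojiang/models"  # assumed cache root from the logged paths

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# Equivalent explicit form:
# tok = AutoTokenizer.from_pretrained(
#     "Qwen/Qwen2.5-VL-7B-Instruct",
#     cache_dir="/fsx_0/user/zhaojiang/models/hub",
# )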
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
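The geometry above pins down the visual token budget: with patch_size 14 and merge_size 2, one merged token covers a 28x28-pixel area, so min_pixels=3136 (56^2) is 4 tokens and max_pixels=12845056 (3584^2) is 16384 tokens; resample=3 is PIL's BICUBIC and rescale_factor is 1/255. A small sketch of that arithmetic, assuming the min_pixels/max_pixels overrides documented for this processor (the override values are illustrative, not from this run):

patch_size, merge_size = 14, 2   # values read from the dump above
unit = patch_size * merge_size   # each merged token covers 28x28 px
print(3136 // unit**2)           # min_pixels  ->     4 merged tokens
print(12845056 // unit**2)       # max_pixels  -> 16384 merged tokens

# the pixel budget is tunable when loading the processor:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    min_pixels=256 * 28 * 28,    # example values, not from this log
    max_pixels=1280 * 28 * 28,
)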
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
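The warning refers to model.resize_token_embeddings: the new vocabulary size of 151668 is three entries beyond the 151665 the stock tokenizer ships with (151643 base + 22 added), so the training script evidently registers a few extra tokens, and 151668 is not a Tensor-Core-friendly size. A minimal sketch of the fix the message suggests, assuming the standard Transformers API; padding to a multiple of 64 is one common choice, not something the log prescribes:

from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# (hypothetical) the training script would first register its extra tokens,
# e.g.: processor.tokenizer.add_tokens(["<extra_0>", "<extra_1>", "<extra_2>"])

# Rounding the embedding matrix up to a multiple of 64 keeps the matmul
# dimensions Tensor-Core friendly instead of landing on the raw 151668.
model.resize_token_embeddings(
    len(processor.tokenizer),
    pad_to_multiple_of=64,
)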
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
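
The fix this warning suggests is to pass pad_to_multiple_of when calling resize_token_embeddings; a minimal sketch, assuming the stock checkpoint and model class (this run actually uses a custom llava_qwen wrapper that is not part of stock transformers):

    import torch
    from transformers import AutoTokenizer, Qwen2_5_VLForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16
    )
    # Rounding the vocabulary up to a multiple of 64 keeps the embedding and
    # lm_head matmul shapes Tensor Core friendly (151668 would round to 151680).
    model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
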
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
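
Two quick arithmetic checks on the config above, as a sketch (values copied from the dump; the mrope_section/head_dim relationship reflects how Qwen2-VL's multimodal RoPE splits rotary frequency pairs across time, height and width):

    # Values copied verbatim from the LlavaQwenConfig dump above.
    hidden_size = 3584
    num_attention_heads = 28
    head_dim = hidden_size // num_attention_heads    # 128
    mrope_section = [16, 24, 24]                     # temporal, height, width
    assert sum(mrope_section) == head_dim // 2       # 64 rotary frequency pairs
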
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
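
The tokens behind this warning are the three ids above the stock table (151665-151667, given the resize target of 151668); their embedding rows were never trained (in the original checkpoint they sit in the padded region of the 152064-row matrix). A common mitigation, sketched below on the assumption that `model` is the resized model from this run, is to initialize the new rows to the mean of the pre-trained embeddings before fine-tuning:

    import torch

    # Assumes `model` is the resized model from the run above; the slice
    # bound 151665 comes from the logged old/new vocabulary sizes.
    with torch.no_grad():
        embeddings = model.get_input_embeddings().weight
        embeddings[151665:] = embeddings[:151643].mean(dim=0, keepdim=True)
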
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
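
Once the shards are loaded, the processor/model pair can be exercised end to end; a minimal sketch, assuming the `processor` and `model` objects from the earlier sketches and a placeholder image path:

    from PIL import Image

    image = Image.open("example.png")   # placeholder path
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(processor.batch_decode(out, skip_special_tokens=True)[0])
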
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
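The image-processor block above is worth unpacking: min_pixels (3136 = 4 x 28 x 28) and max_pixels (12845056 = 16384 x 28 x 28) bound the dynamic-resolution resize, where 28 = patch_size (14) x merge_size (2) is the pixel footprint of one merged visual token. If the defaults are too coarse or too expensive, they can be overridden when the processor is loaded; a minimal sketch (the override values are illustrative, not what this run used):

    from transformers import AutoProcessor

    # Each merged visual token covers a 28x28 pixel area (patch_size 14 x merge_size 2),
    # so min_pixels/max_pixels effectively bound the visual sequence length per image.
    processor = AutoProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        min_pixels=256 * 28 * 28,    # illustrative: at least ~256 visual tokens
        max_pixels=1280 * 28 * 28,   # illustrative: at most ~1280 visual tokens
    )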
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
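The remedy this warning points at is a one-line change where the training script resizes the embeddings. A minimal sketch, assuming model and tokenizer are the objects this run constructs (the multiple of 64 follows the linked NVIDIA guidance for bf16 Tensor Cores):

    # Pad the resized vocabulary so the embedding/LM-head matmuls keep
    # Tensor-Core-friendly dimensions: 151668 -> 151680 (= 2370 * 64).
    model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
    # The padding rows are inert: the tokenizer never emits ids >= len(tokenizer).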
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
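This warning fires because FlashAttention-2 was requested while the weights were still materialized on CPU; it is benign as long as the model is moved to GPU before the first forward pass. A generic sketch of the pattern it asks for (using the stock transformers class; the class actually instantiated in this run is the training repo's LlavaQwenForCausalLM):

    import torch
    from transformers import Qwen2_5_VLForConditionalGeneration

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,              # FA2 requires fp16/bf16
        attn_implementation="flash_attention_2",
    )
    model.to("cuda")  # move off CPU before running, as the warning suggests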
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
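One common way to act on this warning (a sketch of a standard trick, not necessarily what this script does) is to initialize the freshly appended embedding rows from the mean of the pre-trained rows, so the new special tokens start near the embedding distribution instead of at random; num_added below is a hypothetical name for whatever tokenizer.add_tokens(...) returned earlier in the script:

    import torch

    with torch.no_grad():
        emb = model.get_input_embeddings().weight
        # Overwrite the num_added rows appended by resize_token_embeddings
        # with the mean of all pre-existing embedding rows.
        emb[-num_added:] = emb[:-num_added].mean(dim=0, keepdim=True)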
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
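The ids here line up with the tokenizer dump above: 151643 is <|endoftext|> and 151645 is <|im_end|>. A quick sanity check (assuming tokenizer is the Qwen2TokenizerFast instance shown earlier):

    assert tokenizer.convert_tokens_to_ids("<|endoftext|>") == 151643  # bos_token_id
    assert tokenizer.convert_tokens_to_ids("<|im_end|>") == 151645     # eos_token_id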
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
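The "from cache at /fsx_0/..." paths show the Hugging Face cache redirected to shared storage, and "cache at None" just means the repo ships no added_tokens.json, special_tokens_map.json, or chat_template.jinja. A sketch, assuming HF_HOME is how the cache root was set in this run:

import os
os.environ["HF_HOME"] = "/fsx_0/user/zhaojiang/models"  # assumption; must be set before importing transformers

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# Files then resolve under $HF_HOME/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/<commit>/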
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
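The stock tokenizer ends at id 151664 (151665 entries), while the resize target above is 151668, so the training script added three tokens of its own before resizing. A sketch with a hypothetical token name, since the real names do not appear in this log:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<my_new_token>"]}  # hypothetical placeholder
)
print(len(tokenizer))  # grows by num_added; the embedding matrix must be resized to match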
Loading checkpoint shards:  20%|██        | 1/5 [00:00<00:01,  2.33it/s]
loading configuration file config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/config.json
You are using a model of type qwen2_5_vl to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Model config LlavaQwenConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "llava_qwen",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.49.0.dev0",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"hidden_size": 1280,
"in_chans": 3,
"model_type": "qwen2_5_vl",
"spatial_patch_size": 14,
"tokens_per_second": 2
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
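llava_qwen is not a model type that ships with transformers, so the class has to be registered by the training code; a sketch of one way that can look (the class body here is an assumption, only the names come from the log):

from transformers import AutoConfig
from transformers.models.qwen2_5_vl.configuration_qwen2_5_vl import Qwen2_5_VLConfig

class LlavaQwenConfig(Qwen2_5_VLConfig):
    model_type = "llava_qwen"  # assumed LLaVA-style subclassing of the Qwen2.5-VL config

AutoConfig.register("llava_qwen", LlavaQwenConfig)
cfg = LlavaQwenConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# The checkpoint's config.json declares model_type "qwen2_5_vl", so this load emits the
# "not supported for all configurations" warning seen above; matching weights still load.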
loading weights file model.safetensors from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/model.safetensors.index.json
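model.safetensors.index.json is not a weights file itself; it maps every parameter name to one of the five shards. A quick way to inspect it from the snapshot directory shown above:

import json

with open("model.safetensors.index.json") as f:
    index = json.load(f)
print(index["metadata"]["total_size"])         # total bytes across all shards
name, shard = next(iter(index["weight_map"].items()))
print(name, "->", shard)                       # e.g. some parameter -> "model-0000X-of-00005.safetensors"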
Instantiating LlavaQwenForCausalLM model under default dtype torch.bfloat16.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
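This warning is benign as long as the model is moved to GPU before its first forward pass; a sketch of the load pattern it refers to:

import torch
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,               # matches the default dtype in this log
    attn_implementation="flash_attention_2",  # triggers the warning while weights sit on CPU
)
model.to("cuda")  # Flash Attention 2 kernels only run on GPU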
Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645
}
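The generate config printed here carries only the bos/eos ids taken from the model config; a minimal reconstruction:

from transformers import GenerationConfig

gen_cfg = GenerationConfig(bos_token_id=151643, eos_token_id=151645)
print(gen_cfg)  # matches the dump above; the hub repo's generation_config.json may add sampling defaults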
Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 151668. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
{
"processor_class": "Qwen2_5_VLProcessor"
}
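The image-processor settings above fix how many visual tokens an image yields: pixel counts are clamped to [min_pixels, max_pixels] (56x56 up to 3584x3584), the image is cut into 14x14 patches, and 2x2 patch groups are merged into one token. A back-of-envelope sketch of that arithmetic (the helper below is illustrative, not part of the processor API):

    # Illustrative only: token count implied by the config above.
    # Assumes height/width are already multiples of
    # patch_size * merge_size = 28, as Qwen2-VL's smart resize ensures.
    def num_visual_tokens(height, width, patch_size=14, merge_size=2):
        patches = (height // patch_size) * (width // patch_size)
        return patches // (merge_size ** 2)

    print(num_visual_tokens(1120, 1120))  # 1600 tokens for a 1120x1120 image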
Loading checkpoint shards: 100%|██████████| 5/5 [00:01<00:00, 4.16it/s]
All model checkpoint weights were used when initializing LlavaQwenForCausalLM.
All the weights of LlavaQwenForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlavaQwenForCausalLM for predictions without further training.
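A load like the one logged above is typically issued as below; LlavaQwenForCausalLM is the training repo's LLaVA-style wrapper around the Qwen backbone, and the import path here is an assumption, not something this log confirms:

    import torch
    # Assumed import path (LLaVA-NeXT-style codebase); adjust to the repo in use.
    from llava.model.language_model.llava_qwen import LlavaQwenForCausalLM

    model = LlavaQwenForCausalLM.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",
    )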
loading configuration file generation_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/generation_config.json
Generate config GenerationConfig {
"attn_implementation": "flash_attention_2",
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.1,
"top_k": 1,
"top_p": 0.001
}
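The generation settings above can be reproduced explicitly; note that with top_k=1 (and top_p=0.001) sampling collapses to greedy decoding even though do_sample is true. A sketch with the values copied from the dump:

    from transformers import GenerationConfig

    gen_cfg = GenerationConfig(
        do_sample=True,
        temperature=0.1,
        top_k=1,                      # keeps only the argmax token
        top_p=0.001,
        repetition_penalty=1.05,
        bos_token_id=151643,
        eos_token_id=[151645, 151643],
        pad_token_id=151643,
    )
    # outputs = model.generate(**inputs, generation_config=gen_cfg)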
loading configuration file preprocessor_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/preprocessor_config.json
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Image processor Qwen2VLImageProcessor {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
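Per the deprecation notice above, the fast image processor becomes the default in v4.48; opting in explicitly silences the warning. A minimal sketch:

    from transformers import AutoImageProcessor

    image_processor = AutoImageProcessor.from_pretrained(
        "Qwen/Qwen2.5-VL-7B-Instruct", use_fast=True
    )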
loading file vocab.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/vocab.json
loading file merges.txt from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/merges.txt
loading file tokenizer.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /fsx_0/user/zhaojiang/models/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/6e6556e8ce728c7b3e438d75ebf04ec93403dc19/tokenizer_config.json
loading file chat_template.jinja from cache at None
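The cache paths above (/fsx_0/user/zhaojiang/models/hub/...) indicate the Hugging Face cache was redirected to shared storage, presumably via HF_HOME, though the log does not show how it was set. A sketch of the two usual ways:

    import os
    # Must be set before transformers/huggingface_hub are imported.
    os.environ["HF_HOME"] = "/fsx_0/user/zhaojiang/models"

    # Or per call:
    # AutoTokenizer.from_pretrained(model_id, cache_dir="/fsx_0/user/zhaojiang/models/hub")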
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
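The warning above means the embedding rows appended by the resize start out randomly initialized. A common remedy (not taken from this log) is to re-initialize the new rows to the mean of the existing ones before fine-tuning:

    import torch

    # Assumes the model was just resized; 151665 = vocabulary size
    # before the extra tokens were appended (an assumption here).
    old_vocab = 151665
    emb = model.get_input_embeddings().weight.data
    with torch.no_grad():
        emb[old_vocab:] = emb[:old_vocab].mean(dim=0, keepdim=True)
    # If the LM head is untied, do the same for model.get_output_embeddings().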
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
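The FutureWarning above targets the torch.load call at eva_vit.py:622. A minimal sketch of the change it recommends (assuming the EVA-CLIP checkpoint holds only tensors and plain containers; anything pickled as a custom class would additionally need to be allowlisted via torch.serialization.add_safe_globals):

    # Hypothetical patch for eva_vit.py:622: opt in to the stricter loader now
    # instead of waiting for PyTorch to flip the default to weights_only=True.
    checkpoint = torch.load(checkpoint_path, map_location=map_location, weights_only=True)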
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=map_location)
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/model/multimodal_encoder/eva_clip/eva_vit.py:622: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(checkpoint_path, map_location=map_location)
(last two lines repeated 36 times, once per worker process)
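The FutureWarning above is emitted by the `torch.load` call at eva_vit.py:622. A minimal fix sketch, following the warning's own recommendation and assuming the checkpoint contains only tensors and standard containers (the `load_checkpoint` wrapper name is illustrative, not from the source):

    import torch

    def load_checkpoint(checkpoint_path, map_location="cpu"):
        # weights_only=True switches to the restricted unpickler, which
        # blocks arbitrary code execution during unpickling and silences
        # the FutureWarning shown above.
        return torch.load(checkpoint_path,
                          map_location=map_location,
                          weights_only=True)

    # If a trusted checkpoint stores custom classes, allowlist them first
    # via torch.serialization.add_safe_globals([SomeClass]) (hypothetical
    # class name) before calling torch.load with weights_only=True.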
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
(the messages above repeated, interleaved across worker processes, for the remainder of this block)
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Using custom data configuration default-5e4e9de28fd39dca
Loading Dataset Infos from /home/zhaojiang/.local/lib/python3.10/site-packages/datasets/packaged_modules/webdataset
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Overwrite dataset info from restored data version if exists.
Loading Dataset info from /fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f
Found cached dataset webdataset (/fsx_0/user/zhaojiang/wb/webdataset/default-5e4e9de28fd39dca/0.0.0/e9ef0843eead451e800ef3bd9a9ee86b731520f88aa20be2d598ddfeef5b3f7f)
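These cache messages come from the `datasets` library: the webdataset was materialized once under the fingerprinted cache directory, and every rank then reloads it from there instead of rebuilding it. A minimal sketch of a load that would hit this cache; the shard pattern is an assumption, only the cache root comes from the log:

from datasets import load_dataset

ds = load_dataset(
    "webdataset",
    data_files={"train": "/fsx_0/user/zhaojiang/shards/{00000..00999}.tar"},  # assumed shard pattern
    cache_dir="/fsx_0/user/zhaojiang/wb",  # cache root seen in the messages above
)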
/opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/llava/train/train.py:1637: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `LLaVATrainer.__init__`. Use `processing_class` instead.
trainer = LLaVATrainer(
Using auto half precision backend
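The FutureWarning carries its own fix: pass the tokenizer through `processing_class` instead of the deprecated `tokenizer` argument. A call-site sketch, assuming `LLaVATrainer` subclasses `transformers.Trainer` (>= 4.46) and forwards keyword arguments to it; `model`, `training_args`, and `train_dataset` are assumed to be built earlier in train.py:

# Sketch of the suggested migration; not the repo's actual call site.
trainer = LLaVATrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # replaces the deprecated tokenizer=tokenizer
)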
Attempting to resume from /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-48000
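This resume message corresponds to the standard Trainer entry point. A minimal sketch, where only the checkpoint path comes from the log and the `trainer` object is assumed from the call above:

trainer.train(resume_from_checkpoint="/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-48000")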
***** Running training *****
Num examples = 194,420,624
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 1,024
Gradient Accumulation steps = 1
Total optimization steps = 569,592
Number of trainable parameters = 1,365,239,712
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 48000
Will skip the first 0 epochs then the first 48000 batches in the first epoch.
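A quick consistency check on the numbers above, assuming 16 nodes of 8 GPUs each (inferred from the run name that appears later in the log):

import math

per_device = 8
world_size = 16 * 8                                        # assumed topology: 16 nodes x 8 GPUs
global_batch = per_device * world_size                     # = 1,024, matches the log
steps_per_epoch = math.ceil(194_420_624 / global_batch)    # = 189,864
total_steps = steps_per_epoch * 3                          # = 569,592, matches the log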
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: jchen169 to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Tracking run with wandb version 0.19.6
wandb: Run data is saved locally in /opt/hpcaas/.mounts/fs-036153e63d56f4dc2/home/zhaojiang/interleaved-llava/wandb/run-20250219_202153-dr1ryi02
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run qwen-vl-diff-clip-16-nodes_early_pool2d_4
wandb: ⭐️ View project at https://wandb.ai/jchen169/huggingface
wandb: 🚀 View run at https://wandb.ai/jchen169/huggingface/runs/dr1ryi02
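As the logging notice above says, W&B reporting can be switched off with an environment variable; it has to be set before the trainer is constructed so the callback is never registered:

import os
os.environ["WANDB_DISABLED"] = "true"  # disable Weights & Biases logging for this run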
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
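The warning's recommended fix is to opt in to the safer default now; a sketch (the file name is illustrative — Trainer writes one rng_state file per process inside the checkpoint directory):

    import torch

    # Restrict unpickling to tensors and allowlisted types rather than
    # arbitrary pickled objects.
    rng_file = "checkpoint-48000/rng_state_0.pth"  # illustrative path
    checkpoint_rng_state = torch.load(rng_file, weights_only=True)

    # If this raises because the file holds non-tensor objects (e.g. the
    # numpy RNG state), allowlist exactly the globals named in the error:
    #     torch.serialization.add_safe_globals([...])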
0%| | 0/569592 [00:00, ?it/s]
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
/home/zhaojiang/.local/lib/python3.10/site-packages/transformers/trainer.py:3119: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint_rng_state = torch.load(rng_file)
8%|▊ | 48001/569592 [00:44<08:07, 1069.28it/s]
8%|▊ | 48046/569592 [02:46<461:16:35, 3.18s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
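PIL raises DecompressionBombWarning when an image's pixel count exceeds Image.MAX_IMAGE_PIXELS (89,478,485 by default, sized so a 3-byte-per-pixel decode stays under 256 MiB). If the oversized images here are trusted training data, the limit can be raised before the dataloader opens them; a minimal sketch in which the 200-megapixel cap is an illustrative choice, not a value from this log:

    import warnings
    from PIL import Image

    # Raise the bomb-check threshold for trusted data (None disables the check).
    Image.MAX_IMAGE_PIXELS = 200_000_000

    # Alternatively, keep the default limit but escalate the warning to an
    # error so oversized samples can be caught and skipped in the loader.
    warnings.simplefilter("error", Image.DecompressionBombWarning)

For untrusted inputs the default is the safer choice, since a small compressed file can expand into a decode buffer large enough to exhaust memory.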
8%|▊ | 48102/569592 [05:07<439:23:26, 3.03s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90326016 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48103/569592 [05:08<347:02:32, 2.40s/it]
8%|▊ | 48158/569592 [07:32<309:04:37, 2.13s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (94040804 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48159/569592 [07:36<365:41:26, 2.52s/it]
8%|▊ | 48160/569592 [07:40<451:08:14, 3.11s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (103012940 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48170/569592 [08:02<284:32:55, 1.96s/it]
8%|▊ | 48171/569592 [08:05<350:54:48, 2.42s/it]
8%|▊ | 48226/569592 [10:33<601:58:04, 4.16s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48259/569592 [11:55<308:09:58, 2.13s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48260/569592 [11:56<295:38:11, 2.04s/it]
8%|▊ | 48261/569592 [12:02<467:54:19, 3.23s/it]
8%|▊ | 48282/569592 [12:55<403:01:57, 2.78s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (119209328 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48283/569592 [12:56<322:43:32, 2.23s/it]
8%|▊ | 48389/569592 [17:35<324:43:13, 2.24s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (97680000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
8%|▊ | 48390/569592 [17:36<268:24:48, 1.85s/it]
9%|▊ | 48423/569592 [19:05<510:30:25, 3.53s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48468/569592 [21:23<548:43:06, 3.79s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (92315488 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48513/569592 [23:09<375:40:15, 2.60s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48569/569592 [25:35<488:57:40, 3.38s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48572/569592 [25:44<483:01:27, 3.34s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95153872 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48893/569592 [40:12<262:55:54, 1.82s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100663296 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48899/569592 [40:29<346:42:24, 2.40s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48955/569592 [43:24<321:57:12, 2.23s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48966/569592 [43:53<318:12:01, 2.20s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (103149431 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 48983/569592 [44:36<314:26:56, 2.17s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (115022592 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49000/569592 [45:24<499:32:33, 3.45s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-48000] due to args.save_total_limit
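The save-and-rotate behavior above (a checkpoint at step 49000, shards kept under the 5 GB safetensors cap, the older checkpoint-48000 deleted, and an LFS upload following the save) is consistent with Hugging Face TrainingArguments along these lines; the values are assumptions reconstructed from the log, not the run's actual configuration:

from transformers import TrainingArguments

# Assumed configuration reconstructed from the log above, not the run's actual config.
args = TrainingArguments(
    output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
    save_strategy="steps",
    save_steps=1000,     # checkpoints appear at steps 48000 and 49000
    save_total_limit=1,  # checkpoint-48000 is deleted once checkpoint-49000 is saved
    push_to_hub=True,    # assumed: the "Upload 132 LFS files" lines suggest a hub push after saving
)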
Upload 132 LFS files: 0%| | 0/132 [00:00, ?it/s]
rng_state_0.pth through rng_state_127.pth: 100%|██████████| 16.0k/16.0k each
scheduler.pt: 100%|██████████| 1.06k/1.06k [00:00<00:00, 31.5kB/s]
training_args.bin: 100%|██████████| 7.29k/7.29k [00:00<00:00, 164kB/s]
model-00001-of-00006.safetensors: 78%|███████▊ | 3.89G/4.97G [01:12<00:16, 66.0MB/s]
model-00004-of-00006.safetensors: 85%|████████▌ | 4.26G/5.00G [01:12<00:13, 54.1MB/s]
[~780 interleaved tqdm redraw lines collapsed; the "[A" fragments were ANSI cursor-up escapes from concurrent progress bars. The two shard transfers above show the last state each bar reported in this capture; both were still in flight when it ends.]
model-00001-of-00006.safetensors: 79%|███████▊ | 3.90G/4.97G [01:13<00:18, 58.9MB/s][A
model-00001-of-00006.safetensors: 79%|███████▉ | 3.92G/4.97G [01:13<00:17, 61.2MB/s][A
model-00004-of-00006.safetensors: 85%|████████▌ | 4.27G/5.00G [01:13<00:18, 38.8MB/s][A[A
model-00001-of-00006.safetensors: 79%|███████▉ | 3.94G/4.97G [01:13<00:15, 67.2MB/s][A
model-00004-of-00006.safetensors: 86%|████████▌ | 4.29G/5.00G [01:13<00:16, 44.1MB/s][A[A
model-00004-of-00006.safetensors: 86%|████████▌ | 4.30G/5.00G [01:13<00:13, 52.6MB/s][A[A
model-00001-of-00006.safetensors: 80%|███████▉ | 3.95G/4.97G [01:13<00:18, 53.9MB/s][A
model-00001-of-00006.safetensors: 80%|███████▉ | 3.97G/4.97G [01:14<00:16, 60.0MB/s][A
model-00004-of-00006.safetensors: 86%|████████▋ | 4.32G/5.00G [01:14<00:12, 56.6MB/s][A[A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.34G/5.00G [01:14<00:11, 59.5MB/s][A[A
model-00001-of-00006.safetensors: 80%|████████ | 3.98G/4.97G [01:14<00:16, 57.8MB/s][A
model-00001-of-00006.safetensors: 81%|████████ | 4.00G/4.97G [01:14<00:15, 60.8MB/s][A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.35G/5.00G [01:14<00:11, 55.4MB/s][A[A
model-00001-of-00006.safetensors: 81%|████████ | 4.02G/4.97G [01:14<00:15, 62.9MB/s][A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.37G/5.00G [01:14<00:11, 56.5MB/s][A[A
model-00001-of-00006.safetensors: 81%|████████ | 4.03G/4.97G [01:15<00:14, 66.1MB/s][A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.38G/5.00G [01:15<00:10, 60.8MB/s][A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.05G/4.97G [01:15<00:14, 65.0MB/s][A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.40G/5.00G [01:15<00:09, 63.5MB/s][A[A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.42G/5.00G [01:15<00:08, 68.5MB/s][A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.06G/4.97G [01:15<00:17, 51.0MB/s][A
model-00004-of-00006.safetensors: 89%|████████▊ | 4.43G/5.00G [01:15<00:08, 67.8MB/s][A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.08G/4.97G [01:15<00:16, 55.1MB/s][A
model-00004-of-00006.safetensors: 89%|████████▉ | 4.45G/5.00G [01:16<00:08, 67.2MB/s][A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.10G/4.97G [01:16<00:14, 59.6MB/s][A
model-00004-of-00006.safetensors: 89%|████████▉ | 4.46G/5.00G [01:16<00:07, 67.2MB/s][A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.11G/4.97G [01:16<00:13, 62.8MB/s][A
model-00004-of-00006.safetensors: 90%|████████▉ | 4.48G/5.00G [01:16<00:07, 67.4MB/s][A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.13G/4.97G [01:16<00:12, 66.9MB/s][A
model-00004-of-00006.safetensors: 90%|████████▉ | 4.50G/5.00G [01:16<00:07, 71.6MB/s][A[A
model-00004-of-00006.safetensors: 90%|█████████ | 4.51G/5.00G [01:16<00:06, 72.1MB/s][A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.14G/4.97G [01:16<00:13, 60.0MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.53G/5.00G [01:17<00:06, 73.5MB/s][A[A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.16G/4.97G [01:17<00:13, 61.2MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.54G/5.00G [01:17<00:06, 67.1MB/s][A[A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.18G/4.97G [01:17<00:13, 60.6MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.56G/5.00G [01:17<00:06, 63.0MB/s][A[A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.19G/4.97G [01:17<00:12, 61.8MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.58G/5.00G [01:17<00:06, 66.3MB/s][A[A
model-00001-of-00006.safetensors: 85%|████████▍ | 4.21G/4.97G [01:17<00:12, 63.0MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.59G/5.00G [01:18<00:06, 66.5MB/s][A[A
model-00001-of-00006.safetensors: 85%|████████▌ | 4.22G/4.97G [01:18<00:11, 67.2MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.61G/5.00G [01:18<00:05, 71.2MB/s][A[A
model-00001-of-00006.safetensors: 85%|████████▌ | 4.24G/4.97G [01:18<00:10, 69.8MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.62G/5.00G [01:18<00:05, 75.0MB/s][A[A
model-00001-of-00006.safetensors: 86%|████████▌ | 4.26G/4.97G [01:18<00:10, 69.0MB/s][A
model-00001-of-00006.safetensors: 86%|████████▌ | 4.27G/4.97G [01:18<00:09, 71.4MB/s][A
model-00004-of-00006.safetensors: 93%|█████████▎| 4.64G/5.00G [01:18<00:06, 53.5MB/s][A[A
model-00001-of-00006.safetensors: 86%|████████▋ | 4.29G/4.97G [01:19<00:09, 71.6MB/s][A
model-00004-of-00006.safetensors: 93%|█████████▎| 4.66G/5.00G [01:19<00:05, 57.5MB/s][A[A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.30G/4.97G [01:19<00:10, 63.9MB/s][A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.32G/4.97G [01:19<00:10, 61.3MB/s][A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.34G/4.97G [01:19<00:09, 64.6MB/s][A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.35G/4.97G [01:20<00:09, 67.1MB/s][A
model-00004-of-00006.safetensors: 93%|█████████▎| 4.67G/5.00G [01:20<00:09, 35.5MB/s][A[A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.37G/4.97G [01:20<00:08, 68.1MB/s][A
model-00004-of-00006.safetensors: 94%|█████████▍| 4.69G/5.00G [01:20<00:07, 41.1MB/s][A[A
model-00004-of-00006.safetensors: 94%|█████████▍| 4.70G/5.00G [01:20<00:06, 47.1MB/s][A[A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.38G/4.97G [01:20<00:09, 64.3MB/s][A
model-00001-of-00006.safetensors: 89%|████████▊ | 4.40G/4.97G [01:20<00:08, 68.4MB/s][A
model-00004-of-00006.safetensors: 94%|█████████▍| 4.72G/5.00G [01:20<00:05, 50.0MB/s][A[A
model-00004-of-00006.safetensors: 95%|█████████▍| 4.74G/5.00G [01:21<00:04, 53.0MB/s][A[A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.42G/4.97G [01:21<00:08, 62.2MB/s][A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.43G/4.97G [01:21<00:08, 64.5MB/s][A
model-00004-of-00006.safetensors: 95%|█████████▌| 4.75G/5.00G [01:21<00:04, 56.9MB/s][A[A
model-00004-of-00006.safetensors: 95%|█████████▌| 4.77G/5.00G [01:21<00:03, 60.6MB/s][A[A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.45G/4.97G [01:21<00:08, 64.6MB/s][A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.46G/4.97G [01:21<00:08, 62.0MB/s][A
model-00001-of-00006.safetensors: 90%|█████████ | 4.48G/4.97G [01:22<00:07, 65.7MB/s][A
model-00004-of-00006.safetensors: 96%|█████████▌| 4.78G/5.00G [01:22<00:05, 42.1MB/s][A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.50G/4.97G [01:22<00:07, 61.6MB/s][A
model-00004-of-00006.safetensors: 96%|█████████▌| 4.80G/5.00G [01:22<00:04, 48.7MB/s][A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.51G/4.97G [01:22<00:06, 65.8MB/s][A
model-00004-of-00006.safetensors: 96%|█████████▋| 4.82G/5.00G [01:22<00:03, 55.5MB/s][A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.53G/4.97G [01:22<00:06, 68.9MB/s][A
model-00004-of-00006.safetensors: 97%|█████████▋| 4.83G/5.00G [01:22<00:03, 55.3MB/s][A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.54G/4.97G [01:22<00:05, 74.1MB/s][A
model-00004-of-00006.safetensors: 97%|█████████▋| 4.85G/5.00G [01:23<00:02, 61.2MB/s][A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.56G/4.97G [01:23<00:05, 78.9MB/s][A
model-00004-of-00006.safetensors: 97%|█████████▋| 4.86G/5.00G [01:23<00:02, 64.6MB/s][A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.58G/4.97G [01:23<00:05, 73.9MB/s][A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.88G/5.00G [01:23<00:01, 67.1MB/s][A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.59G/4.97G [01:23<00:04, 75.5MB/s][A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.90G/5.00G [01:23<00:01, 70.5MB/s][A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.61G/4.97G [01:23<00:05, 69.2MB/s][A
model-00004-of-00006.safetensors: 98%|█████████▊| 4.91G/5.00G [01:23<00:01, 69.7MB/s][A[A
model-00004-of-00006.safetensors: 99%|█████████▊| 4.93G/5.00G [01:24<00:01, 68.6MB/s][A[A
model-00004-of-00006.safetensors: 99%|█████████▉| 4.94G/5.00G [01:24<00:00, 65.9MB/s][A[A
model-00004-of-00006.safetensors: 99%|█████████▉| 4.96G/5.00G [01:24<00:00, 68.2MB/s][A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.62G/4.97G [01:24<00:09, 37.5MB/s][A
model-00004-of-00006.safetensors: 100%|█████████▉| 4.98G/5.00G [01:24<00:00, 70.5MB/s][A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.64G/4.97G [01:24<00:07, 43.1MB/s][A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.66G/4.97G [01:25<00:06, 47.1MB/s][A
model-00004-of-00006.safetensors: 100%|█████████▉| 4.99G/5.00G [01:25<00:00, 57.6MB/s][A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.67G/4.97G [01:25<00:05, 55.0MB/s][A
model-00004-of-00006.safetensors: 100%|██████████| 5.00G/5.00G [01:25<00:00, 58.5MB/s]
model-00001-of-00006.safetensors: 94%|█████████▍| 4.69G/4.97G [01:25<00:04, 59.6MB/s][A
model-00001-of-00006.safetensors: 95%|█████████▍| 4.70G/4.97G [01:25<00:03, 66.3MB/s][A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.72G/4.97G [01:26<00:03, 70.5MB/s][A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.74G/4.97G [01:26<00:03, 71.9MB/s][A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.75G/4.97G [01:26<00:03, 66.2MB/s][A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.77G/4.97G [01:26<00:02, 66.8MB/s][A
model-00001-of-00006.safetensors: 96%|█████████▋| 4.78G/4.97G [01:26<00:02, 71.9MB/s][A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.80G/4.97G [01:27<00:02, 72.2MB/s][A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.82G/4.97G [01:27<00:02, 67.8MB/s][A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.83G/4.97G [01:27<00:02, 66.3MB/s][A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.85G/4.97G [01:27<00:01, 67.4MB/s][A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.86G/4.97G [01:28<00:01, 66.4MB/s][A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.88G/4.97G [01:28<00:01, 69.7MB/s][A
model-00001-of-00006.safetensors: 99%|█████████▊| 4.90G/4.97G [01:29<00:02, 32.9MB/s][A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.91G/4.97G [01:29<00:01, 39.2MB/s][A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.93G/4.97G [01:29<00:00, 45.0MB/s][A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.94G/4.97G [01:30<00:00, 51.2MB/s][A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.96G/4.97G [01:30<00:00, 56.5MB/s][A
model-00001-of-00006.safetensors: 100%|██████████| 4.97G/4.97G [01:30<00:00, 54.9MB/s]
Upload 132 LFS files: 100%|██████████| 132/132 [01:30<00:00, 1.46it/s]
9%|▊ | 49001/569592 [49:43<11579:31:23, 80.07s/it]
9%|▊ | 49028/569592 [50:47<529:04:11, 3.66s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
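The DecompressionBombWarning above is Pillow's guard against oversized images: it fires whenever a decoded image exceeds Image.MAX_IMAGE_PIXELS. A minimal Python sketch of how a run like this could acknowledge known-large inputs, assuming the ~101 MP images are trusted training data (the 200_000_000 cap is an illustrative choice, not taken from this log):

# A minimal sketch, assuming the oversized images are trusted data rather than
# untrusted uploads. Pillow warns once a decoded image exceeds
# Image.MAX_IMAGE_PIXELS (default 89478485) and raises DecompressionBombError
# beyond twice that limit; raising the cap before images are opened (e.g. in a
# dataloader worker init) silences the warning for known-large inputs.
from PIL import Image

Image.MAX_IMAGE_PIXELS = 200_000_000  # assumed headroom above the ~100,920,000-pixel images logged here

Setting Image.MAX_IMAGE_PIXELS to None disables the check entirely, which is only safe when every input image is trusted.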
9%|▊ | 49029/569592 [50:48<414:02:30, 2.86s/it]
9%|▊ | 49085/569592 [53:48<389:31:46, 2.69s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49086/569592 [53:50<370:07:35, 2.56s/it]
9%|▊ | 49175/569592 [58:18<282:01:59, 1.95s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98911692 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49176/569592 [58:19<239:13:53, 1.65s/it]
9%|▊ | 49214/569592 [59:52<370:16:30, 2.56s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49215/569592 [59:54<348:52:17, 2.41s/it]
9%|▊ | 49296/569592 [1:04:05<179:05:47, 1.24s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49297/569592 [1:04:09<307:01:08, 2.12s/it]
9%|▊ | 49309/569592 [1:04:39<401:08:56, 2.78s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90750000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49310/569592 [1:04:41<396:03:29, 2.74s/it]
9%|▊ | 49320/569592 [1:05:05<311:10:19, 2.15s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49321/569592 [1:05:10<405:36:30, 2.81s/it]
9%|▊ | 49386/569592 [1:08:14<558:41:02, 3.87s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49387/569592 [1:08:18<544:46:27, 3.77s/it]
9%|▊ | 49424/569592 [1:10:13<339:31:56, 2.35s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(
9%|▊ | 49425/569592 [1:10:14<296:42:33, 2.05s/it]
9%|▊ | 49425/569592 [1:10:14<296:42:33, 2.05s/it]
9%|▊ | 49426/569592 [1:10:19<422:59:49, 2.93s/it]
9%|▊ | 49426/569592 [1:10:19<422:59:49, 2.93s/it]
9%|▊ | 49427/569592 [1:10:21<393:12:31, 2.72s/it]
9%|▊ | 49427/569592 [1:10:21<393:12:31, 2.72s/it]
9%|▊ | 49428/569592 [1:10:22<324:25:41, 2.25s/it]
9%|▊ | 49428/569592 [1:10:22<324:25:41, 2.25s/it]
9%|▊ | 49429/569592 [1:10:26<386:00:45, 2.67s/it]
9%|▊ | 49429/569592 [1:10:26<386:00:45, 2.67s/it]
9%|▊ | 49430/569592 [1:10:28<364:57:43, 2.53s/it]
9%|▊ | 49430/569592 [1:10:28<364:57:43, 2.53s/it]
9%|▊ | 49431/569592 [1:10:31<373:01:42, 2.58s/it]
9%|▊ | 49431/569592 [1:10:31<373:01:42, 2.58s/it]
9%|▊ | 49432/569592 [1:10:32<321:29:37, 2.23s/it]
9%|▊ | 49432/569592 [1:10:32<321:29:37, 2.23s/it]
9%|▊ | 49433/569592 [1:10:36<366:47:16, 2.54s/it]
9%|▊ | 49433/569592 [1:10:36<366:47:16, 2.54s/it]
9%|▊ | 49434/569592 [1:10:39<412:02:07, 2.85s/it]
9%|▊ | 49434/569592 [1:10:39<412:02:07, 2.85s/it]
9%|▊ | 49435/569592 [1:10:42<407:29:27, 2.82s/it]
9%|▊ | 49435/569592 [1:10:42<407:29:27, 2.82s/it]
9%|▊ | 49436/569592 [1:10:43<325:00:34, 2.25s/it]
9%|▊ | 49436/569592 [1:10:43<325:00:34, 2.25s/it]
9%|▊ | 49437/569592 [1:10:46<376:10:00, 2.60s/it]
9%|▊ | 49437/569592 [1:10:46<376:10:00, 2.60s/it]
9%|▊ | 49438/569592 [1:10:49<381:53:54, 2.64s/it]
9%|▊ | 49438/569592 [1:10:49<381:53:54, 2.64s/it]
9%|▊ | 49439/569592 [1:10:53<427:12:39, 2.96s/it]
9%|▊ | 49439/569592 [1:10:53<427:12:39, 2.96s/it]
9%|▊ | 49440/569592 [1:10:54<340:02:23, 2.35s/it]
9%|▊ | 49440/569592 [1:10:54<340:02:23, 2.35s/it]
9%|▊ | 49441/569592 [1:10:55<315:16:53, 2.18s/it]
9%|▊ | 49441/569592 [1:10:55<315:16:53, 2.18s/it]
9%|▊ | 49442/569592 [1:11:00<420:00:07, 2.91s/it]
9%|▊ | 49442/569592 [1:11:00<420:00:07, 2.91s/it]
9%|▊ | 49443/569592 [1:11:03<419:04:49, 2.90s/it]
9%|▊ | 49443/569592 [1:11:03<419:04:49, 2.90s/it]
9%|▊ | 49444/569592 [1:11:04<333:37:40, 2.31s/it]
9%|▊ | 49444/569592 [1:11:04<333:37:40, 2.31s/it]
9%|▊ | 49445/569592 [1:11:05<294:17:24, 2.04s/it]
9%|▊ | 49445/569592 [1:11:05<294:17:24, 2.04s/it]
9%|▊ | 49446/569592 [1:11:11<473:12:56, 3.28s/it]
9%|▊ | 49446/569592 [1:11:12<473:12:56, 3.28s/it]
9%|▊ | 49447/569592 [1:11:14<462:58:08, 3.20s/it]
9%|▊ | 49447/569592 [1:11:14<462:58:08, 3.20s/it]
9%|▊ | 49448/569592 [1:11:15<365:48:51, 2.53s/it]
9%|▊ | 49448/569592 [1:11:15<365:48:51, 2.53s/it]
9%|▊ | 49449/569592 [1:11:16<299:20:16, 2.07s/it]
9%|▊ | 49449/569592 [1:11:16<299:20:16, 2.07s/it]
9%|▊ | 49450/569592 [1:11:22<431:31:54, 2.99s/it]
9%|▊ | 49450/569592 [1:11:22<431:31:54, 2.99s/it]
9%|▊ | 49451/569592 [1:11:24<428:20:15, 2.96s/it]
9%|▊ | 49451/569592 [1:11:24<428:20:15, 2.96s/it]
9%|▊ | 49452/569592 [1:11:25<340:49:30, 2.36s/it]
9%|▊ | 49452/569592 [1:11:25<340:49:30, 2.36s/it]
9%|▊ | 49453/569592 [1:11:26<279:43:24, 1.94s/it]
9%|▊ | 49453/569592 [1:11:26<279:43:24, 1.94s/it]
9%|▊ | 49454/569592 [1:11:31<384:20:23, 2.66s/it]
9%|▊ | 49454/569592 [1:11:31<384:20:23, 2.66s/it]
9%|▊ | 49455/569592 [1:11:33<388:27:40, 2.69s/it]
9%|▊ | 49455/569592 [1:11:33<388:27:40, 2.69s/it]
9%|▊ | 49456/569592 [1:11:34<313:30:40, 2.17s/it]
9%|▊ | 49456/569592 [1:11:34<313:30:40, 2.17s/it]
9%|▊ | 49457/569592 [1:11:36<272:10:19, 1.88s/it]
9%|▊ | 49457/569592 [1:11:36<272:10:19, 1.88s/it]
9%|▊ | 49458/569592 [1:11:41<416:03:28, 2.88s/it]
9%|▊ | 49458/569592 [1:11:41<416:03:28, 2.88s/it]
9%|▊ | 49459/569592 [1:11:45<488:21:54, 3.38s/it]
9%|▊ | 49459/569592 [1:11:45<488:21:54, 3.38s/it]
9%|▊ | 49460/569592 [1:11:46<382:11:03, 2.65s/it]
9%|▊ | 49460/569592 [1:11:46<382:11:03, 2.65s/it]
9%|▊ | 49461/569592 [1:11:47<309:52:42, 2.14s/it]
9%|▊ | 49461/569592 [1:11:47<309:52:42, 2.14s/it]
9%|▊ | 49462/569592 [1:11:52<431:07:36, 2.98s/it]
9%|▊ | 49462/569592 [1:11:52<431:07:36, 2.98s/it]
9%|▊ | 49463/569592 [1:11:56<457:31:06, 3.17s/it]
9%|▊ | 49463/569592 [1:11:56<457:31:06, 3.17s/it]
9%|▊ | 49464/569592 [1:11:57<359:57:38, 2.49s/it]
9%|▊ | 49464/569592 [1:11:57<359:57:38, 2.49s/it]
9%|▊ | 49465/569592 [1:11:58<292:49:16, 2.03s/it]
9%|▊ | 49465/569592 [1:11:58<292:49:16, 2.03s/it]
9%|▊ | 49466/569592 [1:12:01<360:33:40, 2.50s/it]
9%|▊ | 49466/569592 [1:12:01<360:33:40, 2.50s/it]
9%|▊ | 49467/569592 [1:12:05<402:35:01, 2.79s/it]
9%|▊ | 49467/569592 [1:12:05<402:35:01, 2.79s/it]
9%|▊ | 49468/569592 [1:12:06<321:55:06, 2.23s/it]
9%|▊ | 49468/569592 [1:12:06<321:55:06, 2.23s/it]
9%|▊ | 49469/569592 [1:12:07<282:31:08, 1.96s/it]
9%|▊ | 49469/569592 [1:12:07<282:31:08, 1.96s/it]
9%|▊ | 49470/569592 [1:12:12<403:10:29, 2.79s/it]
9%|▊ | 49470/569592 [1:12:12<403:10:29, 2.79s/it]
9%|▊ | 49471/569592 [1:12:16<473:16:23, 3.28s/it]
9%|▊ | 49471/569592 [1:12:16<473:16:23, 3.28s/it]
9%|▊ | 49472/569592 [1:12:17<372:55:18, 2.58s/it]
9%|▊ | 49472/569592 [1:12:17<372:55:18, 2.58s/it]
9%|▊ | 49473/569592 [1:12:18<306:11:48, 2.12s/it]
9%|▊ | 49473/569592 [1:12:18<306:11:48, 2.12s/it]
9%|▊ | 49474/569592 [1:12:22<376:23:02, 2.61s/it]
9%|▊ | 49474/569592 [1:12:22<376:23:02, 2.61s/it]
9%|▊ | 49475/569592 [1:12:24<364:21:59, 2.52s/it]
9%|▊ | 49475/569592 [1:12:24<364:21:59, 2.52s/it]
9%|▊ | 49476/569592 [1:12:26<334:56:19, 2.32s/it]
9%|▊ | 49476/569592 [1:12:26<334:56:19, 2.32s/it]
9%|▊ | 49477/569592 [1:12:28<331:18:33, 2.29s/it]
9%|▊ | 49477/569592 [1:12:28<331:18:33, 2.29s/it]
9%|▊ | 49478/569592 [1:12:33<451:26:58, 3.12s/it]
9%|▊ | 49478/569592 [1:12:33<451:26:58, 3.12s/it]
9%|▊ | 49479/569592 [1:12:37<464:21:20, 3.21s/it]
9%|▊ | 49479/569592 [1:12:37<464:21:20, 3.21s/it]
9%|▊ | 49480/569592 [1:12:38<365:13:18, 2.53s/it]
9%|▊ | 49480/569592 [1:12:38<365:13:18, 2.53s/it]
9%|▊ | 49481/569592 [1:12:41<391:46:06, 2.71s/it]
9%|▊ | 49481/569592 [1:12:41<391:46:06, 2.71s/it]
9%|▊ | 49482/569592 [1:12:46<492:27:35, 3.41s/it]
9%|▊ | 49482/569592 [1:12:46<492:27:35, 3.41s/it]
9%|▊ | 49483/569592 [1:12:49<491:51:57, 3.40s/it]
9%|▊ | 49483/569592 [1:12:49<491:51:57, 3.40s/it]
9%|▊ | 49484/569592 [1:12:52<480:48:20, 3.33s/it]
9%|▊ | 49484/569592 [1:12:52<480:48:20, 3.33s/it]
9%|▊ | 49485/569592 [1:12:57<540:20:53, 3.74s/it]
9%|▊ | 49485/569592 [1:12:57<540:20:53, 3.74s/it]
9%|▊ | 49486/569592 [1:12:58<416:55:14, 2.89s/it]
9%|▊ | 49486/569592 [1:12:58<416:55:14, 2.89s/it]
9%|▊ | 49487/569592 [1:13:02<460:55:16, 3.19s/it]
9%|▊ | 49487/569592 [1:13:02<460:55:16, 3.19s/it]
9%|▊ | 49488/569592 [1:13:03<361:12:14, 2.50s/it]
9%|▊ | 49488/569592 [1:13:03<361:12:14, 2.50s/it]
9%|▊ | 49489/569592 [1:13:08<482:00:26, 3.34s/it]
9%|▊ | 49489/569592 [1:13:08<482:00:26, 3.34s/it]
9%|▊ | 49490/569592 [1:13:14<583:34:58, 4.04s/it]
9%|▊ | 49490/569592 [1:13:14<583:34:58, 4.04s/it]
9%|▊ | 49491/569592 [1:13:15<447:14:00, 3.10s/it]
9%|▊ | 49491/569592 [1:13:15<447:14:00, 3.10s/it]
9%|▊ | 49492/569592 [1:13:16<351:48:39, 2.44s/it]
9%|▊ | 49492/569592 [1:13:16<351:48:39, 2.44s/it]
9%|▊ | 49493/569592 [1:13:16<287:56:07, 1.99s/it]
9%|▊ | 49493/569592 [1:13:16<287:56:07, 1.99s/it]
9%|▊ | 49494/569592 [1:13:22<436:52:17, 3.02s/it]
9%|▊ | 49494/569592 [1:13:22<436:52:17, 3.02s/it]
9%|▊ | 49495/569592 [1:13:26<469:13:27, 3.25s/it]
9%|▊ | 49495/569592 [1:13:26<469:13:27, 3.25s/it]
9%|▊ | 49496/569592 [1:13:30<533:15:13, 3.69s/it]
9%|▊ | 49496/569592 [1:13:30<533:15:13, 3.69s/it]
9%|▊ | 49497/569592 [1:13:35<552:51:06, 3.83s/it]
9%|▊ | 49497/569592 [1:13:35<552:51:06, 3.83s/it]
9%|▊ | 49498/569592 [1:13:39<599:56:01, 4.15s/it]
9%|▊ | 49498/569592 [1:13:39<599:56:01, 4.15s/it]
9%|▊ | 49499/569592 [1:13:43<558:21:26, 3.86s/it]
9%|▊ | 49499/569592 [1:13:43<558:21:26, 3.86s/it]
9%|▊ | 49500/569592 [1:13:46<525:43:10, 3.64s/it]
9%|▊ | 49500/569592 [1:13:46<525:43:10, 3.64s/it]
9%|▊ | 49501/569592 [1:13:50<557:48:43, 3.86s/it]
9%|▊ | 49501/569592 [1:13:50<557:48:43, 3.86s/it]
9%|▊ | 49502/569592 [1:13:55<621:01:10, 4.30s/it]
9%|▊ | 49502/569592 [1:13:55<621:01:10, 4.30s/it]
9%|▊ | 49503/569592 [1:14:00<632:03:02, 4.37s/it]
9%|▊ | 49503/569592 [1:14:00<632:03:02, 4.37s/it]
9%|▊ | 49504/569592 [1:14:05<637:16:17, 4.41s/it]
9%|▊ | 49504/569592 [1:14:05<637:16:17, 4.41s/it]
9%|▊ | 49505/569592 [1:14:10<674:25:46, 4.67s/it]
9%|▊ | 49505/569592 [1:14:10<674:25:46, 4.67s/it]
9%|▊ | 49506/569592 [1:14:17<767:52:57, 5.32s/it]
9%|▊ | 49506/569592 [1:14:17<767:52:57, 5.32s/it]
9%|▊ | 49507/569592 [1:14:21<733:51:37, 5.08s/it]
9%|▊ | 49507/569592 [1:14:21<733:51:37, 5.08s/it]
9%|▊ | 49508/569592 [1:14:26<722:52:10, 5.00s/it]
9%|▊ | 49508/569592 [1:14:26<722:52:10, 5.00s/it]
9%|▊ | 49509/569592 [1:14:30<674:53:21, 4.67s/it]
9%|▊ | 49509/569592 [1:14:30<674:53:21, 4.67s/it]
9%|▊ | 49510/569592 [1:14:34<630:37:58, 4.37s/it]
9%|▊ | 49510/569592 [1:14:34<630:37:58, 4.37s/it]
9%|▊ | 49511/569592 [1:14:37<609:09:18, 4.22s/it]
9%|▊ | 49511/569592 [1:14:37<609:09:18, 4.22s/it]
9%|▊ | 49512/569592 [1:14:42<631:24:00, 4.37s/it]
9%|▊ | 49512/569592 [1:14:42<631:24:00, 4.37s/it]
9%|▊ | 49513/569592 [1:14:47<643:09:29, 4.45s/it]
9%|▊ | 49513/569592 [1:14:47<643:09:29, 4.45s/it]
9%|▊ | 49514/569592 [1:14:48<498:41:44, 3.45s/it]
9%|▊ | 49514/569592 [1:14:48<498:41:44, 3.45s/it]
9%|▊ | 49515/569592 [1:14:53<558:49:04, 3.87s/it]
9%|▊ | 49515/569592 [1:14:53<558:49:04, 3.87s/it]
9%|▊ | 49516/569592 [1:14:57<591:50:15, 4.10s/it]
9%|▊ | 49516/569592 [1:14:57<591:50:15, 4.10s/it]
9%|▊ | 49517/569592 [1:15:01<577:19:37, 4.00s/it]
9%|▊ | 49517/569592 [1:15:01<577:19:37, 4.00s/it]
9%|▊ | 49518/569592 [1:15:06<608:09:11, 4.21s/it]
9%|▊ | 49518/569592 [1:15:06<608:09:11, 4.21s/it]
9%|▊ | 49519/569592 [1:15:10<620:37:21, 4.30s/it]
9%|▊ | 49519/569592 [1:15:10<620:37:21, 4.30s/it]
9%|▊ | 49520/569592 [1:15:15<628:47:16, 4.35s/it]
9%|▊ | 49520/569592 [1:15:15<628:47:16, 4.35s/it]
9%|▊ | 49521/569592 [1:15:19<638:21:25, 4.42s/it]
9%|▊ | 49521/569592 [1:15:19<638:21:25, 4.42s/it]
9%|▊ | 49522/569592 [1:15:23<589:23:27, 4.08s/it]
9%|▊ | 49522/569592 [1:15:23<589:23:27, 4.08s/it]
9%|▊ | 49523/569592 [1:15:24<450:24:54, 3.12s/it]
9%|▊ | 49523/569592 [1:15:24<450:24:54, 3.12s/it]
9%|▊ | 49524/569592 [1:15:28<520:36:21, 3.60s/it]
9%|▊ | 49524/569592 [1:15:28<520:36:21, 3.60s/it]
9%|▊ | 49525/569592 [1:15:29<405:09:50, 2.80s/it]
9%|▊ | 49525/569592 [1:15:29<405:09:50, 2.80s/it]
9%|▊ | 49526/569592 [1:15:33<426:48:15, 2.95s/it]
9%|▊ | 49526/569592 [1:15:33<426:48:15, 2.95s/it]
9%|▊ | 49527/569592 [1:15:33<339:14:20, 2.35s/it]
9%|▊ | 49527/569592 [1:15:33<339:14:20, 2.35s/it]
9%|▊ | 49528/569592 [1:15:34<279:18:12, 1.93s/it]
9%|▊ | 49528/569592 [1:15:34<279:18:12, 1.93s/it]
9%|▊ | 49529/569592 [1:15:35<238:17:08, 1.65s/it]
9%|▊ | 49529/569592 [1:15:36<238:17:08, 1.65s/it]
9%|▊ | 49530/569592 [1:15:37<218:38:36, 1.51s/it]
9%|▊ | 49530/569592 [1:15:37<218:38:36, 1.51s/it]
9%|▊ | 49531/569592 [1:15:38<193:07:34, 1.34s/it]
9%|▊ | 49531/569592 [1:15:38<193:07:34, 1.34s/it]
9%|▊ | 49532/569592 [1:15:39<181:54:12, 1.26s/it]
9%|▊ | 49532/569592 [1:15:39<181:54:12, 1.26s/it]
9%|▊ | 49533/569592 [1:15:42<262:17:57, 1.82s/it]
9%|▊ | 49533/569592 [1:15:42<262:17:57, 1.82s/it]
9%|▊ | 49534/569592 [1:15:43<226:54:47, 1.57s/it]
9%|▊ | 49534/569592 [1:15:43<226:54:47, 1.57s/it]
9%|▊ | 49535/569592 [1:15:46<304:31:14, 2.11s/it]
9%|▊ | 49535/569592 [1:15:46<304:31:14, 2.11s/it]
9%|▊ | 49536/569592 [1:15:47<261:31:25, 1.81s/it]
9%|▊ | 49536/569592 [1:15:47<261:31:25, 1.81s/it]
9%|▊ | 49537/569592 [1:15:52<372:58:47, 2.58s/it]
9%|▊ | 49537/569592 [1:15:52<372:58:47, 2.58s/it]
9%|▊ | 49538/569592 [1:15:53<312:55:03, 2.17s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49538/569592 [1:15:53<312:55:03, 2.17s/it]
9%|▊ | 49539/569592 [1:15:57<414:48:20, 2.87s/it]
9%|▊ | 49539/569592 [1:15:57<414:48:20, 2.87s/it]
9%|▊ | 49540/569592 [1:15:58<330:57:16, 2.29s/it]
9%|▊ | 49540/569592 [1:15:58<330:57:16, 2.29s/it]
9%|▊ | 49541/569592 [1:16:01<373:26:12, 2.59s/it]
9%|▊ | 49541/569592 [1:16:02<373:26:12, 2.59s/it]
9%|▊ | 49542/569592 [1:16:04<363:26:59, 2.52s/it]
9%|▊ | 49542/569592 [1:16:04<363:26:59, 2.52s/it]
9%|▊ | 49543/569592 [1:16:07<408:31:01, 2.83s/it]
9%|▊ | 49543/569592 [1:16:07<408:31:01, 2.83s/it]
9%|▊ | 49544/569592 [1:16:08<328:17:12, 2.27s/it]
9%|▊ | 49544/569592 [1:16:08<328:17:12, 2.27s/it]
9%|▊ | 49545/569592 [1:16:13<418:56:40, 2.90s/it]
9%|▊ | 49545/569592 [1:16:13<418:56:40, 2.90s/it]
9%|▊ | 49546/569592 [1:16:14<336:31:27, 2.33s/it]
9%|▊ | 49546/569592 [1:16:14<336:31:27, 2.33s/it]
9%|▊ | 49547/569592 [1:16:17<396:53:41, 2.75s/it]
9%|▊ | 49547/569592 [1:16:17<396:53:41, 2.75s/it]
9%|▊ | 49548/569592 [1:16:18<320:55:32, 2.22s/it]
9%|▊ | 49548/569592 [1:16:18<320:55:32, 2.22s/it]
9%|▊ | 49549/569592 [1:16:23<407:44:07, 2.82s/it]
9%|▊ | 49549/569592 [1:16:23<407:44:07, 2.82s/it]
9%|▊ | 49550/569592 [1:16:26<410:03:05, 2.84s/it]
9%|▊ | 49550/569592 [1:16:26<410:03:05, 2.84s/it]
9%|▊ | 49551/569592 [1:16:28<395:30:08, 2.74s/it]
9%|▊ | 49551/569592 [1:16:28<395:30:08, 2.74s/it]
9%|▊ | 49552/569592 [1:16:29<317:51:50, 2.20s/it]
9%|▊ | 49552/569592 [1:16:29<317:51:50, 2.20s/it]
9%|▊ | 49553/569592 [1:16:34<444:24:28, 3.08s/it]
9%|▊ | 49553/569592 [1:16:34<444:24:28, 3.08s/it]
9%|▊ | 49554/569592 [1:16:36<409:35:01, 2.84s/it]
9%|▊ | 49554/569592 [1:16:36<409:35:01, 2.84s/it]
9%|▊ | 49555/569592 [1:16:38<364:25:10, 2.52s/it]
9%|▊ | 49555/569592 [1:16:38<364:25:10, 2.52s/it]
9%|▊ | 49556/569592 [1:16:39<296:43:54, 2.05s/it]
9%|▊ | 49556/569592 [1:16:39<296:43:54, 2.05s/it]
9%|▊ | 49557/569592 [1:16:43<357:37:33, 2.48s/it]
9%|▊ | 49557/569592 [1:16:43<357:37:33, 2.48s/it]
9%|▊ | 49558/569592 [1:16:46<415:22:00, 2.88s/it]
9%|▊ | 49558/569592 [1:16:46<415:22:00, 2.88s/it]
9%|▊ | 49559/569592 [1:16:49<403:20:09, 2.79s/it]
9%|▊ | 49559/569592 [1:16:49<403:20:09, 2.79s/it]
9%|▊ | 49560/569592 [1:16:50<322:43:34, 2.23s/it]
9%|▊ | 49560/569592 [1:16:50<322:43:34, 2.23s/it]
9%|▊ | 49561/569592 [1:16:54<394:17:40, 2.73s/it]
9%|▊ | 49561/569592 [1:16:54<394:17:40, 2.73s/it]
9%|▊ | 49562/569592 [1:16:56<387:26:19, 2.68s/it]
9%|▊ | 49562/569592 [1:16:56<387:26:19, 2.68s/it]
9%|▊ | 49563/569592 [1:17:00<413:39:16, 2.86s/it]
9%|▊ | 49563/569592 [1:17:00<413:39:16, 2.86s/it]
9%|▊ | 49564/569592 [1:17:01<336:15:08, 2.33s/it]
9%|▊ | 49564/569592 [1:17:01<336:15:08, 2.33s/it]
9%|▊ | 49565/569592 [1:17:05<405:02:52, 2.80s/it]
9%|▊ | 49565/569592 [1:17:05<405:02:52, 2.80s/it]
9%|▊ | 49566/569592 [1:17:07<364:00:10, 2.52s/it]
9%|▊ | 49566/569592 [1:17:07<364:00:10, 2.52s/it]
9%|▊ | 49567/569592 [1:17:10<386:19:51, 2.67s/it]
9%|▊ | 49567/569592 [1:17:10<386:19:51, 2.67s/it]
9%|▊ | 49568/569592 [1:17:11<310:53:59, 2.15s/it]
9%|▊ | 49568/569592 [1:17:11<310:53:59, 2.15s/it]
9%|▊ | 49569/569592 [1:17:14<371:09:52, 2.57s/it]
9%|▊ | 49569/569592 [1:17:14<371:09:52, 2.57s/it]
9%|▊ | 49570/569592 [1:17:17<378:21:46, 2.62s/it]
9%|▊ | 49570/569592 [1:17:17<378:21:46, 2.62s/it]
9%|▊ | 49571/569592 [1:17:19<343:54:30, 2.38s/it]
9%|▊ | 49571/569592 [1:17:19<343:54:30, 2.38s/it]
9%|▊ | 49572/569592 [1:17:20<292:30:47, 2.03s/it]
9%|▊ | 49572/569592 [1:17:20<292:30:47, 2.03s/it]
9%|▊ | 49573/569592 [1:17:26<468:39:14, 3.24s/it]
9%|▊ | 49573/569592 [1:17:26<468:39:14, 3.24s/it]
9%|▊ | 49574/569592 [1:17:27<368:37:32, 2.55s/it]
9%|▊ | 49574/569592 [1:17:27<368:37:32, 2.55s/it]
9%|▊ | 49575/569592 [1:17:29<365:47:53, 2.53s/it]
9%|▊ | 49575/569592 [1:17:29<365:47:53, 2.53s/it]
9%|▊ | 49576/569592 [1:17:30<296:26:48, 2.05s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (95699712 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49576/569592 [1:17:30<296:26:48, 2.05s/it]
9%|▊ | 49577/569592 [1:17:36<452:40:20, 3.13s/it]
9%|▊ | 49577/569592 [1:17:36<452:40:20, 3.13s/it]
9%|▊ | 49578/569592 [1:17:37<358:47:32, 2.48s/it]
9%|▊ | 49578/569592 [1:17:37<358:47:32, 2.48s/it]
9%|▊ | 49579/569592 [1:17:39<325:46:35, 2.26s/it]
9%|▊ | 49579/569592 [1:17:39<325:46:35, 2.26s/it]
9%|▊ | 49580/569592 [1:17:40<268:19:18, 1.86s/it]
9%|▊ | 49580/569592 [1:17:40<268:19:18, 1.86s/it]
9%|▊ | 49581/569592 [1:17:46<456:30:29, 3.16s/it]
9%|▊ | 49581/569592 [1:17:46<456:30:29, 3.16s/it]
9%|▊ | 49582/569592 [1:17:47<367:53:14, 2.55s/it]
9%|▊ | 49582/569592 [1:17:47<367:53:14, 2.55s/it]
9%|▊ | 49583/569592 [1:17:48<306:07:03, 2.12s/it]
9%|▊ | 49583/569592 [1:17:48<306:07:03, 2.12s/it]
9%|▊ | 49584/569592 [1:17:49<255:05:10, 1.77s/it]
9%|▊ | 49584/569592 [1:17:49<255:05:10, 1.77s/it]
9%|▊ | 49585/569592 [1:17:56<470:04:43, 3.25s/it]
9%|▊ | 49585/569592 [1:17:56<470:04:43, 3.25s/it]
9%|▊ | 49586/569592 [1:17:57<390:00:31, 2.70s/it]
9%|▊ | 49586/569592 [1:17:57<390:00:31, 2.70s/it]
9%|▊ | 49587/569592 [1:17:59<337:54:03, 2.34s/it]
9%|▊ | 49587/569592 [1:17:59<337:54:03, 2.34s/it]
9%|▊ | 49588/569592 [1:18:00<281:34:41, 1.95s/it]
9%|▊ | 49588/569592 [1:18:00<281:34:41, 1.95s/it]
9%|▊ | 49589/569592 [1:18:04<402:13:05, 2.78s/it]
9%|▊ | 49589/569592 [1:18:04<402:13:05, 2.78s/it]
9%|▊ | 49590/569592 [1:18:07<386:00:29, 2.67s/it]
9%|▊ | 49590/569592 [1:18:07<386:00:29, 2.67s/it]
9%|▊ | 49591/569592 [1:18:11<468:41:51, 3.24s/it]
9%|▊ | 49591/569592 [1:18:11<468:41:51, 3.24s/it]
9%|▊ | 49592/569592 [1:18:12<368:47:54, 2.55s/it]
9%|▊ | 49592/569592 [1:18:12<368:47:54, 2.55s/it]
9%|▊ | 49593/569592 [1:18:16<402:41:44, 2.79s/it]
9%|▊ | 49593/569592 [1:18:16<402:41:44, 2.79s/it]
9%|▊ | 49594/569592 [1:18:21<498:05:16, 3.45s/it]
9%|▊ | 49594/569592 [1:18:21<498:05:16, 3.45s/it]
9%|▊ | 49595/569592 [1:18:24<480:36:40, 3.33s/it]
9%|▊ | 49595/569592 [1:18:24<480:36:40, 3.33s/it]
9%|▊ | 49596/569592 [1:18:28<538:01:00, 3.72s/it]
9%|▊ | 49596/569592 [1:18:28<538:01:00, 3.72s/it]
9%|▊ | 49597/569592 [1:18:31<512:18:41, 3.55s/it]
9%|▊ | 49597/569592 [1:18:31<512:18:41, 3.55s/it]
9%|▊ | 49598/569592 [1:18:34<486:24:34, 3.37s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (93641436 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49598/569592 [1:18:34<486:24:34, 3.37s/it]
9%|▊ | 49599/569592 [1:18:39<547:17:53, 3.79s/it]
9%|▊ | 49599/569592 [1:18:39<547:17:53, 3.79s/it]
9%|▊ | 49600/569592 [1:18:43<530:19:12, 3.67s/it]
9%|▊ | 49600/569592 [1:18:43<530:19:12, 3.67s/it]
9%|▊ | 49601/569592 [1:18:43<409:16:02, 2.83s/it]
9%|▊ | 49601/569592 [1:18:43<409:16:02, 2.83s/it]
9%|▊ | 49602/569592 [1:18:47<433:57:08, 3.00s/it]
9%|▊ | 49602/569592 [1:18:47<433:57:08, 3.00s/it]
9%|▊ | 49603/569592 [1:18:52<516:02:08, 3.57s/it]
9%|▊ | 49603/569592 [1:18:52<516:02:08, 3.57s/it]
9%|▊ | 49604/569592 [1:18:53<399:36:32, 2.77s/it]
9%|▊ | 49604/569592 [1:18:53<399:36:32, 2.77s/it]
9%|▊ | 49605/569592 [1:18:53<318:51:45, 2.21s/it]
9%|▊ | 49605/569592 [1:18:53<318:51:45, 2.21s/it]
9%|▊ | 49606/569592 [1:18:55<271:36:00, 1.88s/it]
9%|▊ | 49606/569592 [1:18:55<271:36:00, 1.88s/it]
9%|▊ | 49607/569592 [1:18:58<334:15:19, 2.31s/it]
9%|▊ | 49607/569592 [1:18:58<334:15:19, 2.31s/it]
9%|▊ | 49608/569592 [1:19:02<395:54:05, 2.74s/it]
9%|▊ | 49608/569592 [1:19:02<395:54:05, 2.74s/it]
9%|▊ | 49609/569592 [1:19:06<483:03:31, 3.34s/it]
9%|▊ | 49609/569592 [1:19:06<483:03:31, 3.34s/it]
9%|▊ | 49610/569592 [1:19:11<551:14:07, 3.82s/it]
9%|▊ | 49610/569592 [1:19:11<551:14:07, 3.82s/it]
9%|▊ | 49611/569592 [1:19:15<537:18:09, 3.72s/it]
9%|▊ | 49611/569592 [1:19:15<537:18:09, 3.72s/it]
9%|▊ | 49612/569592 [1:19:18<521:03:55, 3.61s/it]
9%|▊ | 49612/569592 [1:19:18<521:03:55, 3.61s/it]
9%|▊ | 49613/569592 [1:19:19<403:24:58, 2.79s/it]
9%|▊ | 49613/569592 [1:19:19<403:24:58, 2.79s/it]
9%|▊ | 49614/569592 [1:19:25<518:24:16, 3.59s/it]
9%|▊ | 49614/569592 [1:19:25<518:24:16, 3.59s/it]
9%|▊ | 49615/569592 [1:19:30<590:57:27, 4.09s/it]
9%|▊ | 49615/569592 [1:19:30<590:57:27, 4.09s/it]
9%|▊ | 49616/569592 [1:19:34<596:28:13, 4.13s/it]
9%|▊ | 49616/569592 [1:19:34<596:28:13, 4.13s/it]
9%|▊ | 49617/569592 [1:19:38<575:51:46, 3.99s/it]
9%|▊ | 49617/569592 [1:19:38<575:51:46, 3.99s/it]
9%|▊ | 49618/569592 [1:19:42<607:55:02, 4.21s/it]
9%|▊ | 49618/569592 [1:19:42<607:55:02, 4.21s/it]
9%|▊ | 49619/569592 [1:19:47<605:35:16, 4.19s/it]
9%|▊ | 49619/569592 [1:19:47<605:35:16, 4.19s/it]
9%|▊ | 49620/569592 [1:19:47<463:57:23, 3.21s/it]
9%|▊ | 49620/569592 [1:19:47<463:57:23, 3.21s/it]
9%|▊ | 49621/569592 [1:19:52<532:08:59, 3.68s/it]
9%|▊ | 49621/569592 [1:19:52<532:08:59, 3.68s/it]
9%|▊ | 49622/569592 [1:19:57<581:17:04, 4.02s/it]
9%|▊ | 49622/569592 [1:19:57<581:17:04, 4.02s/it]
9%|▊ | 49623/569592 [1:20:02<602:29:11, 4.17s/it]
9%|▊ | 49623/569592 [1:20:02<602:29:11, 4.17s/it]
9%|▊ | 49624/569592 [1:20:06<631:40:15, 4.37s/it]
9%|▊ | 49624/569592 [1:20:06<631:40:15, 4.37s/it]
9%|▊ | 49625/569592 [1:20:11<637:00:03, 4.41s/it]
9%|▊ | 49625/569592 [1:20:11<637:00:03, 4.41s/it]
9%|▊ | 49626/569592 [1:20:16<658:59:21, 4.56s/it]
9%|▊ | 49626/569592 [1:20:16<658:59:21, 4.56s/it]
9%|▊ | 49627/569592 [1:20:20<659:47:46, 4.57s/it]
9%|▊ | 49627/569592 [1:20:20<659:47:46, 4.57s/it]
9%|▊ | 49628/569592 [1:20:24<621:00:03, 4.30s/it]
9%|▊ | 49628/569592 [1:20:24<621:00:03, 4.30s/it]
9%|▊ | 49629/569592 [1:20:29<648:05:01, 4.49s/it]
9%|▊ | 49629/569592 [1:20:29<648:05:01, 4.49s/it]
9%|▊ | 49630/569592 [1:20:34<648:24:40, 4.49s/it]
9%|▊ | 49630/569592 [1:20:34<648:24:40, 4.49s/it]
9%|▊ | 49631/569592 [1:20:37<586:18:58, 4.06s/it]
9%|▊ | 49631/569592 [1:20:37<586:18:58, 4.06s/it]
9%|▊ | 49632/569592 [1:20:37<449:26:24, 3.11s/it]
9%|▊ | 49632/569592 [1:20:37<449:26:24, 3.11s/it]
9%|▊ | 49633/569592 [1:20:42<520:54:01, 3.61s/it]
9%|▊ | 49633/569592 [1:20:42<520:54:01, 3.61s/it]
9%|▊ | 49634/569592 [1:20:47<578:22:20, 4.00s/it]
9%|▊ | 49634/569592 [1:20:47<578:22:20, 4.00s/it]
9%|▊ | 49635/569592 [1:20:52<606:57:00, 4.20s/it]
9%|▊ | 49635/569592 [1:20:52<606:57:00, 4.20s/it]
9%|▊ | 49636/569592 [1:20:56<626:09:51, 4.34s/it]
9%|▊ | 49636/569592 [1:20:56<626:09:51, 4.34s/it]
9%|▊ | 49637/569592 [1:21:00<575:01:35, 3.98s/it]
9%|▊ | 49637/569592 [1:21:00<575:01:35, 3.98s/it]
9%|▊ | 49638/569592 [1:21:04<600:32:42, 4.16s/it]
9%|▊ | 49638/569592 [1:21:04<600:32:42, 4.16s/it]
9%|▊ | 49639/569592 [1:21:09<644:43:46, 4.46s/it]
9%|▊ | 49639/569592 [1:21:09<644:43:46, 4.46s/it]
9%|▊ | 49640/569592 [1:21:13<590:09:42, 4.09s/it]
9%|▊ | 49640/569592 [1:21:13<590:09:42, 4.09s/it]
9%|▊ | 49641/569592 [1:21:16<541:28:08, 3.75s/it]
9%|▊ | 49641/569592 [1:21:16<541:28:08, 3.75s/it]
9%|▊ | 49642/569592 [1:21:16<419:36:34, 2.91s/it]
9%|▊ | 49642/569592 [1:21:16<419:36:34, 2.91s/it]
9%|▊ | 49643/569592 [1:21:17<334:34:23, 2.32s/it]
9%|▊ | 49643/569592 [1:21:17<334:34:23, 2.32s/it]
9%|▊ | 49644/569592 [1:21:21<371:44:32, 2.57s/it]
9%|▊ | 49644/569592 [1:21:21<371:44:32, 2.57s/it]
9%|▊ | 49645/569592 [1:21:22<302:30:44, 2.09s/it]
9%|▊ | 49645/569592 [1:21:22<302:30:44, 2.09s/it]
9%|▊ | 49646/569592 [1:21:23<252:57:47, 1.75s/it]
9%|▊ | 49646/569592 [1:21:23<252:57:47, 1.75s/it]
9%|▊ | 49647/569592 [1:21:24<221:27:08, 1.53s/it]
9%|▊ | 49647/569592 [1:21:24<221:27:08, 1.53s/it]
9%|▊ | 49648/569592 [1:21:25<197:22:27, 1.37s/it]
9%|▊ | 49648/569592 [1:21:25<197:22:27, 1.37s/it]
9%|▊ | 49649/569592 [1:21:26<196:03:02, 1.36s/it]
9%|▊ | 49649/569592 [1:21:26<196:03:02, 1.36s/it]
9%|▊ | 49650/569592 [1:21:29<281:16:24, 1.95s/it]
9%|▊ | 49650/569592 [1:21:29<281:16:24, 1.95s/it]
9%|▊ | 49651/569592 [1:21:31<259:38:28, 1.80s/it]
9%|▊ | 49651/569592 [1:21:31<259:38:28, 1.80s/it]
9%|▊ | 49652/569592 [1:21:32<227:48:35, 1.58s/it]
9%|▊ | 49652/569592 [1:21:32<227:48:35, 1.58s/it]
9%|▊ | 49653/569592 [1:21:37<376:43:25, 2.61s/it]
9%|▊ | 49653/569592 [1:21:37<376:43:25, 2.61s/it]
9%|▊ | 49654/569592 [1:21:39<373:01:43, 2.58s/it]
9%|▊ | 49654/569592 [1:21:39<373:01:43, 2.58s/it]
9%|▊ | 49655/569592 [1:21:41<319:26:10, 2.21s/it]
9%|▊ | 49655/569592 [1:21:41<319:26:10, 2.21s/it]
9%|▊ | 49656/569592 [1:21:44<368:10:08, 2.55s/it]
9%|▊ | 49656/569592 [1:21:44<368:10:08, 2.55s/it]
9%|▊ | 49657/569592 [1:21:45<324:54:10, 2.25s/it]
9%|▊ | 49657/569592 [1:21:45<324:54:10, 2.25s/it]
9%|▊ | 49658/569592 [1:21:50<435:18:06, 3.01s/it]
9%|▊ | 49658/569592 [1:21:50<435:18:06, 3.01s/it]
9%|▊ | 49659/569592 [1:21:51<345:32:49, 2.39s/it]
9%|▊ | 49659/569592 [1:21:51<345:32:49, 2.39s/it]
9%|▊ | 49660/569592 [1:21:55<385:01:45, 2.67s/it]
9%|▊ | 49660/569592 [1:21:55<385:01:45, 2.67s/it]
9%|▊ | 49661/569592 [1:21:57<358:50:58, 2.48s/it]
9%|▊ | 49661/569592 [1:21:57<358:50:58, 2.48s/it]
9%|▊ | 49662/569592 [1:22:00<414:37:37, 2.87s/it]
9%|▊ | 49662/569592 [1:22:00<414:37:37, 2.87s/it]
9%|▊ | 49663/569592 [1:22:01<336:39:43, 2.33s/it]
9%|▊ | 49663/569592 [1:22:01<336:39:43, 2.33s/it]
9%|▊ | 49664/569592 [1:22:03<298:22:13, 2.07s/it]
9%|▊ | 49664/569592 [1:22:03<298:22:13, 2.07s/it]
9%|▊ | 49665/569592 [1:22:07<369:37:12, 2.56s/it]
9%|▊ | 49665/569592 [1:22:07<369:37:12, 2.56s/it]
9%|▊ | 49666/569592 [1:22:11<436:04:16, 3.02s/it]
9%|▊ | 49666/569592 [1:22:11<436:04:16, 3.02s/it]
9%|▊ | 49667/569592 [1:22:12<347:46:39, 2.41s/it]
9%|▊ | 49667/569592 [1:22:12<347:46:39, 2.41s/it]
9%|▊ | 49668/569592 [1:22:14<337:12:08, 2.33s/it]
9%|▊ | 49668/569592 [1:22:14<337:12:08, 2.33s/it]
9%|▊ | 49669/569592 [1:22:17<366:05:40, 2.53s/it]
9%|▊ | 49669/569592 [1:22:17<366:05:40, 2.53s/it]
9%|▊ | 49670/569592 [1:22:21<454:51:23, 3.15s/it]
9%|▊ | 49670/569592 [1:22:21<454:51:23, 3.15s/it]
9%|▊ | 49671/569592 [1:22:22<362:05:08, 2.51s/it]
9%|▊ | 49671/569592 [1:22:22<362:05:08, 2.51s/it]
9%|▊ | 49672/569592 [1:22:24<335:00:49, 2.32s/it]
9%|▊ | 49672/569592 [1:22:24<335:00:49, 2.32s/it]
9%|▊ | 49673/569592 [1:22:26<326:10:28, 2.26s/it]
9%|▊ | 49673/569592 [1:22:26<326:10:28, 2.26s/it]
9%|▊ | 49674/569592 [1:22:31<413:11:05, 2.86s/it]
9%|▊ | 49674/569592 [1:22:31<413:11:05, 2.86s/it]
9%|▊ | 49675/569592 [1:22:32<335:08:12, 2.32s/it]
9%|▊ | 49675/569592 [1:22:32<335:08:12, 2.32s/it]
9%|▊ | 49676/569592 [1:22:35<367:00:30, 2.54s/it]
9%|▊ | 49676/569592 [1:22:35<367:00:30, 2.54s/it]
9%|▊ | 49677/569592 [1:22:36<330:17:50, 2.29s/it]
9%|▊ | 49677/569592 [1:22:36<330:17:50, 2.29s/it]
9%|▊ | 49678/569592 [1:22:41<421:44:08, 2.92s/it]
9%|▊ | 49678/569592 [1:22:41<421:44:08, 2.92s/it]
9%|▊ | 49679/569592 [1:22:42<338:31:52, 2.34s/it]
9%|▊ | 49679/569592 [1:22:42<338:31:52, 2.34s/it]
9%|▊ | 49680/569592 [1:22:46<400:47:57, 2.78s/it]
9%|▊ | 49680/569592 [1:22:46<400:47:57, 2.78s/it]
9%|▊ | 49681/569592 [1:22:47<324:20:23, 2.25s/it]
9%|▊ | 49681/569592 [1:22:47<324:20:23, 2.25s/it]
9%|▊ | 49682/569592 [1:22:52<471:28:49, 3.26s/it]
9%|▊ | 49682/569592 [1:22:52<471:28:49, 3.26s/it]
9%|▊ | 49683/569592 [1:22:53<372:20:58, 2.58s/it]
9%|▊ | 49683/569592 [1:22:53<372:20:58, 2.58s/it]
9%|▊ | 49684/569592 [1:22:56<369:42:44, 2.56s/it]
9%|▊ | 49684/569592 [1:22:56<369:42:44, 2.56s/it]
9%|▊ | 49685/569592 [1:22:58<334:54:00, 2.32s/it]
9%|▊ | 49685/569592 [1:22:58<334:54:00, 2.32s/it]
9%|▊ | 49686/569592 [1:23:01<372:41:31, 2.58s/it]
9%|▊ | 49686/569592 [1:23:01<372:41:31, 2.58s/it]
9%|▊ | 49687/569592 [1:23:02<318:36:53, 2.21s/it]
9%|▊ | 49687/569592 [1:23:02<318:36:53, 2.21s/it]
9%|▊ | 49688/569592 [1:23:06<382:16:28, 2.65s/it]
9%|▊ | 49688/569592 [1:23:06<382:16:28, 2.65s/it]
9%|▊ | 49689/569592 [1:23:07<340:45:20, 2.36s/it]
9%|▊ | 49689/569592 [1:23:07<340:45:20, 2.36s/it]
9%|▊ | 49690/569592 [1:23:11<386:34:02, 2.68s/it]
9%|▊ | 49690/569592 [1:23:11<386:34:02, 2.68s/it]
9%|▊ | 49691/569592 [1:23:12<314:17:34, 2.18s/it]
9%|▊ | 49691/569592 [1:23:12<314:17:34, 2.18s/it]
9%|▊ | 49692/569592 [1:23:16<377:38:08, 2.61s/it]
9%|▊ | 49692/569592 [1:23:16<377:38:08, 2.61s/it]
9%|▊ | 49693/569592 [1:23:19<405:46:33, 2.81s/it]
9%|▊ | 49693/569592 [1:23:19<405:46:33, 2.81s/it]
9%|▊ | 49694/569592 [1:23:23<447:39:58, 3.10s/it]
9%|▊ | 49694/569592 [1:23:23<447:39:58, 3.10s/it]
9%|▊ | 49695/569592 [1:23:23<353:06:27, 2.45s/it]
9%|▊ | 49695/569592 [1:23:23<353:06:27, 2.45s/it]
9%|▊ | 49696/569592 [1:23:26<341:31:06, 2.36s/it]
9%|▊ | 49696/569592 [1:23:26<341:31:06, 2.36s/it]
9%|▊ | 49697/569592 [1:23:27<318:13:50, 2.20s/it]
9%|▊ | 49697/569592 [1:23:27<318:13:50, 2.20s/it]
9%|▊ | 49698/569592 [1:23:31<391:16:12, 2.71s/it]
9%|▊ | 49698/569592 [1:23:31<391:16:12, 2.71s/it]
9%|▊ | 49699/569592 [1:23:33<324:11:08, 2.24s/it]
9%|▊ | 49699/569592 [1:23:33<324:11:08, 2.24s/it]
9%|▊ | 49700/569592 [1:23:35<331:28:53, 2.30s/it]
9%|▊ | 49700/569592 [1:23:35<331:28:53, 2.30s/it]
9%|▊ | 49701/569592 [1:23:38<369:42:00, 2.56s/it]
9%|▊ | 49701/569592 [1:23:38<369:42:00, 2.56s/it]
9%|▊ | 49702/569592 [1:23:42<415:35:53, 2.88s/it]
9%|▊ | 49702/569592 [1:23:42<415:35:53, 2.88s/it]
9%|▊ | 49703/569592 [1:23:43<331:31:55, 2.30s/it]
9%|▊ | 49703/569592 [1:23:43<331:31:55, 2.30s/it]
9%|▊ | 49704/569592 [1:23:46<381:53:20, 2.64s/it]
9%|▊ | 49704/569592 [1:23:46<381:53:20, 2.64s/it]
9%|▊ | 49705/569592 [1:23:49<402:28:07, 2.79s/it]
9%|▊ | 49705/569592 [1:23:49<402:28:07, 2.79s/it]
9%|▊ | 49706/569592 [1:23:51<358:09:26, 2.48s/it]
9%|▊ | 49706/569592 [1:23:51<358:09:26, 2.48s/it]
9%|▊ | 49707/569592 [1:23:52<305:02:58, 2.11s/it]
9%|▊ | 49707/569592 [1:23:52<305:02:58, 2.11s/it]
9%|▊ | 49708/569592 [1:23:57<433:00:22, 3.00s/it]
9%|▊ | 49708/569592 [1:23:57<433:00:22, 3.00s/it]
9%|▊ | 49709/569592 [1:24:01<466:42:49, 3.23s/it]
9%|▊ | 49709/569592 [1:24:01<466:42:49, 3.23s/it]
9%|▊ | 49710/569592 [1:24:05<515:23:17, 3.57s/it]
9%|▊ | 49710/569592 [1:24:05<515:23:17, 3.57s/it]
9%|▊ | 49711/569592 [1:24:10<575:38:39, 3.99s/it]
9%|▊ | 49711/569592 [1:24:10<575:38:39, 3.99s/it]
9%|▊ | 49712/569592 [1:24:15<617:01:56, 4.27s/it]
9%|▊ | 49712/569592 [1:24:15<617:01:56, 4.27s/it]
9%|▊ | 49713/569592 [1:24:20<633:07:50, 4.38s/it]
9%|▊ | 49713/569592 [1:24:20<633:07:50, 4.38s/it]
9%|▊ | 49714/569592 [1:24:21<482:00:18, 3.34s/it]
9%|▊ | 49714/569592 [1:24:21<482:00:18, 3.34s/it]
9%|▊ | 49715/569592 [1:24:22<375:57:18, 2.60s/it]
9%|▊ | 49715/569592 [1:24:22<375:57:18, 2.60s/it]
9%|▊ | 49716/569592 [1:24:25<408:25:45, 2.83s/it]
9%|▊ | 49716/569592 [1:24:25<408:25:45, 2.83s/it]
9%|▊ | 49717/569592 [1:24:29<470:04:50, 3.26s/it]
9%|▊ | 49717/569592 [1:24:29<470:04:50, 3.26s/it]
9%|▊ | 49718/569592 [1:24:30<373:04:46, 2.58s/it]
9%|▊ | 49718/569592 [1:24:30<373:04:46, 2.58s/it]
9%|▊ | 49719/569592 [1:24:34<413:33:03, 2.86s/it]
9%|▊ | 49719/569592 [1:24:34<413:33:03, 2.86s/it]
9%|▊ | 49720/569592 [1:24:39<498:11:29, 3.45s/it]
9%|▊ | 49720/569592 [1:24:39<498:11:29, 3.45s/it]
9%|▊ | 49721/569592 [1:24:43<546:50:53, 3.79s/it]
9%|▊ | 49721/569592 [1:24:43<546:50:53, 3.79s/it]
9%|▊ | 49722/569592 [1:24:46<510:09:54, 3.53s/it]
9%|▊ | 49722/569592 [1:24:46<510:09:54, 3.53s/it]
9%|▊ | 49723/569592 [1:24:51<563:06:18, 3.90s/it]
9%|▊ | 49723/569592 [1:24:51<563:06:18, 3.90s/it]
9%|▊ | 49724/569592 [1:24:55<567:09:50, 3.93s/it]
9%|▊ | 49724/569592 [1:24:55<567:09:50, 3.93s/it]
9%|▊ | 49725/569592 [1:24:58<540:17:01, 3.74s/it]
9%|▊ | 49725/569592 [1:24:58<540:17:01, 3.74s/it]
9%|▊ | 49726/569592 [1:24:59<416:58:53, 2.89s/it]
9%|▊ | 49726/569592 [1:24:59<416:58:53, 2.89s/it]
9%|▊ | 49727/569592 [1:25:03<446:50:06, 3.09s/it]
9%|▊ | 49727/569592 [1:25:03<446:50:06, 3.09s/it]
9%|▊ | 49728/569592 [1:25:08<540:02:23, 3.74s/it]
9%|▊ | 49728/569592 [1:25:08<540:02:23, 3.74s/it]
9%|▊ | 49729/569592 [1:25:15<675:44:20, 4.68s/it]
9%|▊ | 49729/569592 [1:25:15<675:44:20, 4.68s/it]
9%|▊ | 49730/569592 [1:25:20<678:08:53, 4.70s/it]
9%|▊ | 49730/569592 [1:25:20<678:08:53, 4.70s/it]
9%|▊ | 49731/569592 [1:25:24<683:32:23, 4.73s/it]
9%|▊ | 49731/569592 [1:25:24<683:32:23, 4.73s/it]
9%|▊ | 49732/569592 [1:25:28<627:03:07, 4.34s/it]
9%|▊ | 49732/569592 [1:25:28<627:03:07, 4.34s/it]
9%|▊ | 49733/569592 [1:25:29<476:06:53, 3.30s/it]
9%|▊ | 49733/569592 [1:25:29<476:06:53, 3.30s/it]
9%|▊ | 49734/569592 [1:25:34<550:25:47, 3.81s/it]
9%|▊ | 49734/569592 [1:25:34<550:25:47, 3.81s/it]
9%|▊ | 49735/569592 [1:25:38<565:24:32, 3.92s/it]
9%|▊ | 49735/569592 [1:25:38<565:24:32, 3.92s/it]
9%|▊ | 49736/569592 [1:25:42<570:52:38, 3.95s/it]
9%|▊ | 49736/569592 [1:25:42<570:52:38, 3.95s/it]
9%|▊ | 49737/569592 [1:25:46<586:07:38, 4.06s/it]
9%|▊ | 49737/569592 [1:25:46<586:07:38, 4.06s/it]
9%|▊ | 49738/569592 [1:25:50<570:36:52, 3.95s/it]
9%|▊ | 49738/569592 [1:25:50<570:36:52, 3.95s/it]
9%|▊ | 49739/569592 [1:25:53<532:49:23, 3.69s/it]
9%|▊ | 49739/569592 [1:25:53<532:49:23, 3.69s/it]
9%|▊ | 49740/569592 [1:25:58<577:35:30, 4.00s/it]
9%|▊ | 49740/569592 [1:25:58<577:35:30, 4.00s/it]
9%|▊ | 49741/569592 [1:26:03<609:28:36, 4.22s/it]
9%|▊ | 49741/569592 [1:26:03<609:28:36, 4.22s/it]
9%|▊ | 49742/569592 [1:26:07<608:23:58, 4.21s/it]
9%|▊ | 49742/569592 [1:26:07<608:23:58, 4.21s/it]
9%|▊ | 49743/569592 [1:26:11<632:08:22, 4.38s/it]
9%|▊ | 49743/569592 [1:26:11<632:08:22, 4.38s/it]
9%|▊ | 49744/569592 [1:26:16<651:35:16, 4.51s/it]
9%|▊ | 49744/569592 [1:26:16<651:35:16, 4.51s/it]
9%|▊ | 49745/569592 [1:26:20<614:42:10, 4.26s/it]
9%|▊ | 49745/569592 [1:26:20<614:42:10, 4.26s/it]
9%|▊ | 49746/569592 [1:26:23<580:55:37, 4.02s/it]
9%|▊ | 49746/569592 [1:26:23<580:55:37, 4.02s/it]
9%|▊ | 49747/569592 [1:26:29<630:53:31, 4.37s/it]
9%|▊ | 49747/569592 [1:26:29<630:53:31, 4.37s/it]
9%|▊ | 49748/569592 [1:26:33<641:59:00, 4.45s/it]
9%|▊ | 49748/569592 [1:26:33<641:59:00, 4.45s/it]
9%|▊ | 49749/569592 [1:26:38<647:34:46, 4.48s/it]
9%|▊ | 49749/569592 [1:26:38<647:34:46, 4.48s/it]
9%|▊ | 49750/569592 [1:26:41<604:56:13, 4.19s/it]
9%|▊ | 49750/569592 [1:26:41<604:56:13, 4.19s/it]
9%|▊ | 49751/569592 [1:26:46<641:33:15, 4.44s/it]
9%|▊ | 49751/569592 [1:26:46<641:33:15, 4.44s/it]
9%|▊ | 49752/569592 [1:26:51<656:59:50, 4.55s/it]
9%|▊ | 49752/569592 [1:26:51<656:59:50, 4.55s/it]
9%|▊ | 49753/569592 [1:26:55<621:47:25, 4.31s/it]
9%|▊ | 49753/569592 [1:26:55<621:47:25, 4.31s/it]
9%|▊ | 49754/569592 [1:26:58<571:34:05, 3.96s/it]
9%|▊ | 49754/569592 [1:26:58<571:34:05, 3.96s/it]
9%|▊ | 49755/569592 [1:27:03<619:39:20, 4.29s/it]
9%|▊ | 49755/569592 [1:27:03<619:39:20, 4.29s/it]
9%|▊ | 49756/569592 [1:27:07<592:19:14, 4.10s/it]
9%|▊ | 49756/569592 [1:27:07<592:19:14, 4.10s/it]
9%|▊ | 49757/569592 [1:27:11<618:13:50, 4.28s/it]
9%|▊ | 49757/569592 [1:27:11<618:13:50, 4.28s/it]
9%|▊ | 49758/569592 [1:27:16<622:51:17, 4.31s/it]
9%|▊ | 49758/569592 [1:27:16<622:51:17, 4.31s/it]
9%|▊ | 49759/569592 [1:27:19<583:44:24, 4.04s/it]
9%|▊ | 49759/569592 [1:27:19<583:44:24, 4.04s/it]
9%|▊ | 49760/569592 [1:27:20<447:52:56, 3.10s/it]
9%|▊ | 49760/569592 [1:27:20<447:52:56, 3.10s/it]
9%|▊ | 49761/569592 [1:27:21<353:14:59, 2.45s/it]
9%|▊ | 49761/569592 [1:27:21<353:14:59, 2.45s/it]
9%|▊ | 49762/569592 [1:27:24<393:00:46, 2.72s/it]
9%|▊ | 49762/569592 [1:27:24<393:00:46, 2.72s/it]
9%|▊ | 49763/569592 [1:27:25<319:13:09, 2.21s/it]
9%|▊ | 49763/569592 [1:27:25<319:13:09, 2.21s/it]
9%|▊ | 49764/569592 [1:27:26<264:31:11, 1.83s/it]
9%|▊ | 49764/569592 [1:27:26<264:31:11, 1.83s/it]
9%|▊ | 49765/569592 [1:27:27<225:49:28, 1.56s/it]
9%|▊ | 49765/569592 [1:27:27<225:49:28, 1.56s/it]
9%|▊ | 49766/569592 [1:27:28<199:30:13, 1.38s/it]
9%|▊ | 49766/569592 [1:27:28<199:30:13, 1.38s/it]
9%|▊ | 49767/569592 [1:27:29<179:47:15, 1.25s/it]
9%|▊ | 49767/569592 [1:27:29<179:47:15, 1.25s/it]
9%|▊ | 49768/569592 [1:27:33<304:34:53, 2.11s/it]
9%|▊ | 49768/569592 [1:27:33<304:34:53, 2.11s/it]
9%|▊ | 49769/569592 [1:27:34<258:02:34, 1.79s/it]
9%|▊ | 49769/569592 [1:27:34<258:02:34, 1.79s/it]
9%|▊ | 49770/569592 [1:27:35<221:08:47, 1.53s/it]
9%|▊ | 49770/569592 [1:27:35<221:08:47, 1.53s/it]
9%|▊ | 49771/569592 [1:27:39<304:41:19, 2.11s/it]
9%|▊ | 49771/569592 [1:27:39<304:41:19, 2.11s/it]
9%|▊ | 49772/569592 [1:27:42<373:56:13, 2.59s/it]
9%|▊ | 49772/569592 [1:27:43<373:56:13, 2.59s/it]
9%|▊ | 49773/569592 [1:27:43<303:36:21, 2.10s/it]
9%|▊ | 49773/569592 [1:27:43<303:36:21, 2.10s/it]
9%|▊ | 49774/569592 [1:27:45<257:41:44, 1.78s/it]
9%|▊ | 49774/569592 [1:27:45<257:41:44, 1.78s/it]
9%|▊ | 49775/569592 [1:27:49<395:28:09, 2.74s/it]
9%|▊ | 49775/569592 [1:27:49<395:28:09, 2.74s/it]
9%|▊ | 49776/569592 [1:27:53<449:49:16, 3.12s/it]
9%|▊ | 49776/569592 [1:27:53<449:49:16, 3.12s/it]
9%|▊ | 49777/569592 [1:27:55<361:14:04, 2.50s/it]
9%|▊ | 49777/569592 [1:27:55<361:14:04, 2.50s/it]
9%|▊ | 49778/569592 [1:27:55<294:08:53, 2.04s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (90481664 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49778/569592 [1:27:56<294:08:53, 2.04s/it]
9%|▊ | 49779/569592 [1:28:00<398:43:54, 2.76s/it]
9%|▊ | 49779/569592 [1:28:00<398:43:54, 2.76s/it]
9%|▊ | 49780/569592 [1:28:04<462:20:50, 3.20s/it]
9%|▊ | 49780/569592 [1:28:04<462:20:50, 3.20s/it]
9%|▊ | 49781/569592 [1:28:05<364:42:43, 2.53s/it]
9%|▊ | 49781/569592 [1:28:05<364:42:43, 2.53s/it]
9%|▊ | 49782/569592 [1:28:06<306:40:12, 2.12s/it]
9%|▊ | 49782/569592 [1:28:06<306:40:12, 2.12s/it]
9%|▊ | 49783/569592 [1:28:11<396:44:44, 2.75s/it]
9%|▊ | 49783/569592 [1:28:11<396:44:44, 2.75s/it]
9%|▊ | 49784/569592 [1:28:13<397:43:52, 2.75s/it]
9%|▊ | 49784/569592 [1:28:13<397:43:52, 2.75s/it]
9%|▊ | 49785/569592 [1:28:14<318:45:33, 2.21s/it]
9%|▊ | 49785/569592 [1:28:14<318:45:33, 2.21s/it]
9%|▊ | 49786/569592 [1:28:15<266:37:20, 1.85s/it]
9%|▊ | 49786/569592 [1:28:15<266:37:20, 1.85s/it]
9%|▊ | 49787/569592 [1:28:21<438:24:07, 3.04s/it]
9%|▊ | 49787/569592 [1:28:21<438:24:07, 3.04s/it]
9%|▊ | 49788/569592 [1:28:24<423:42:33, 2.93s/it]
9%|▊ | 49788/569592 [1:28:24<423:42:33, 2.93s/it]
9%|▊ | 49789/569592 [1:28:25<353:02:37, 2.45s/it]
9%|▊ | 49789/569592 [1:28:25<353:02:37, 2.45s/it]
9%|▊ | 49790/569592 [1:28:26<288:39:56, 2.00s/it]
9%|▊ | 49790/569592 [1:28:26<288:39:56, 2.00s/it]
9%|▊ | 49791/569592 [1:28:31<433:10:32, 3.00s/it]
9%|▊ | 49791/569592 [1:28:31<433:10:32, 3.00s/it]
9%|▊ | 49792/569592 [1:28:34<423:43:57, 2.93s/it]
9%|▊ | 49792/569592 [1:28:34<423:43:57, 2.93s/it]
9%|▊ | 49793/569592 [1:28:35<337:44:18, 2.34s/it]
9%|▊ | 49793/569592 [1:28:35<337:44:18, 2.34s/it]
9%|▊ | 49794/569592 [1:28:36<277:18:33, 1.92s/it]
9%|▊ | 49794/569592 [1:28:36<277:18:33, 1.92s/it]
9%|▊ | 49795/569592 [1:28:43<515:20:38, 3.57s/it]
9%|▊ | 49795/569592 [1:28:43<515:20:38, 3.57s/it]
9%|▊ | 49796/569592 [1:28:45<428:25:43, 2.97s/it]
9%|▊ | 49796/569592 [1:28:45<428:25:43, 2.97s/it]
9%|▊ | 49797/569592 [1:28:46<340:29:37, 2.36s/it]
9%|▊ | 49797/569592 [1:28:46<340:29:37, 2.36s/it]
9%|▊ | 49798/569592 [1:28:47<280:14:48, 1.94s/it]
9%|▊ | 49798/569592 [1:28:47<280:14:48, 1.94s/it]
9%|▊ | 49799/569592 [1:28:53<475:25:17, 3.29s/it]
9%|▊ | 49799/569592 [1:28:53<475:25:17, 3.29s/it]
9%|▊ | 49800/569592 [1:28:55<391:40:03, 2.71s/it]
9%|▊ | 49800/569592 [1:28:55<391:40:03, 2.71s/it]
9%|▊ | 49801/569592 [1:28:56<315:59:59, 2.19s/it]
9%|▊ | 49801/569592 [1:28:56<315:59:59, 2.19s/it]
9%|▊ | 49802/569592 [1:28:57<262:20:09, 1.82s/it]
9%|▊ | 49802/569592 [1:28:57<262:20:09, 1.82s/it]
9%|▊ | 49803/569592 [1:29:03<476:18:11, 3.30s/it]
9%|▊ | 49803/569592 [1:29:03<476:18:11, 3.30s/it]
9%|▊ | 49804/569592 [1:29:05<392:33:13, 2.72s/it]
9%|▊ | 49804/569592 [1:29:05<392:33:13, 2.72s/it]
9%|▊ | 49805/569592 [1:29:07<353:16:44, 2.45s/it]
9%|▊ | 49805/569592 [1:29:07<353:16:44, 2.45s/it]
9%|▊ | 49806/569592 [1:29:08<289:35:34, 2.01s/it]
9%|▊ | 49806/569592 [1:29:08<289:35:34, 2.01s/it]
9%|▊ | 49807/569592 [1:29:13<444:34:23, 3.08s/it]
9%|▊ | 49807/569592 [1:29:13<444:34:23, 3.08s/it]
9%|▊ | 49808/569592 [1:29:15<397:13:22, 2.75s/it]
9%|▊ | 49808/569592 [1:29:15<397:13:22, 2.75s/it]
9%|▊ | 49809/569592 [1:29:16<322:24:56, 2.23s/it]
9%|▊ | 49809/569592 [1:29:16<322:24:56, 2.23s/it]
9%|▊ | 49810/569592 [1:29:17<268:02:03, 1.86s/it]
9%|▊ | 49810/569592 [1:29:17<268:02:03, 1.86s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▊ | 49811/569592 [1:29:22<384:48:10, 2.67s/it]
9%|▊ | 49811/569592 [1:29:22<384:48:10, 2.67s/it]
9%|▊ | 49812/569592 [1:29:24<385:02:18, 2.67s/it]
9%|▊ | 49812/569592 [1:29:24<385:02:18, 2.67s/it]
9%|▊ | 49813/569592 [1:29:26<364:03:06, 2.52s/it]
9%|▊ | 49813/569592 [1:29:27<364:03:06, 2.52s/it]
9%|▊ | 49814/569592 [1:29:27<297:36:45, 2.06s/it]
9%|▊ | 49814/569592 [1:29:27<297:36:45, 2.06s/it]
9%|▊ | 49815/569592 [1:29:33<456:56:11, 3.16s/it]
9%|▊ | 49815/569592 [1:29:33<456:56:11, 3.16s/it]
9%|▊ | 49816/569592 [1:29:35<411:53:43, 2.85s/it]
9%|▊ | 49816/569592 [1:29:35<411:53:43, 2.85s/it]
9%|▊ | 49817/569592 [1:29:37<339:25:15, 2.35s/it]
9%|▊ | 49817/569592 [1:29:37<339:25:15, 2.35s/it]
9%|▊ | 49818/569592 [1:29:37<278:21:59, 1.93s/it]
9%|▊ | 49818/569592 [1:29:37<278:21:59, 1.93s/it]
9%|▊ | 49819/569592 [1:29:43<454:22:37, 3.15s/it]
9%|▊ | 49819/569592 [1:29:43<454:22:37, 3.15s/it]
9%|▊ | 49820/569592 [1:29:45<405:49:34, 2.81s/it]
9%|▊ | 49820/569592 [1:29:45<405:49:34, 2.81s/it]
9%|▊ | 49821/569592 [1:29:47<344:09:22, 2.38s/it]
9%|▊ | 49821/569592 [1:29:47<344:09:22, 2.38s/it]
9%|▊ | 49822/569592 [1:29:52<448:50:32, 3.11s/it]
9%|▊ | 49822/569592 [1:29:52<448:50:32, 3.11s/it]
9%|▊ | 49823/569592 [1:29:57<527:42:25, 3.65s/it]
9%|▊ | 49823/569592 [1:29:57<527:42:25, 3.65s/it]
9%|▊ | 49824/569592 [1:29:57<407:59:19, 2.83s/it]
9%|▊ | 49824/569592 [1:29:57<407:59:19, 2.83s/it]
9%|▊ | 49825/569592 [1:30:02<487:48:46, 3.38s/it]
9%|▊ | 49825/569592 [1:30:02<487:48:46, 3.38s/it]
9%|▊ | 49826/569592 [1:30:07<549:27:56, 3.81s/it]
9%|▊ | 49826/569592 [1:30:07<549:27:56, 3.81s/it]
9%|▊ | 49827/569592 [1:30:08<424:52:29, 2.94s/it]
9%|▊ | 49827/569592 [1:30:08<424:52:29, 2.94s/it]
9%|▊ | 49828/569592 [1:30:11<451:42:29, 3.13s/it]
9%|▊ | 49828/569592 [1:30:11<451:42:29, 3.13s/it]
9%|▊ | 49829/569592 [1:30:15<451:41:05, 3.13s/it]
9%|▊ | 49829/569592 [1:30:15<451:41:05, 3.13s/it]
9%|▊ | 49830/569592 [1:30:16<358:03:48, 2.48s/it]
9%|▊ | 49830/569592 [1:30:16<358:03:48, 2.48s/it]
9%|▊ | 49831/569592 [1:30:17<292:31:57, 2.03s/it]
9%|▊ | 49831/569592 [1:30:17<292:31:57, 2.03s/it]
9%|▊ | 49832/569592 [1:30:22<427:22:10, 2.96s/it]
9%|▊ | 49832/569592 [1:30:22<427:22:10, 2.96s/it]
9%|▊ | 49833/569592 [1:30:23<340:16:02, 2.36s/it]
9%|▊ | 49833/569592 [1:30:23<340:16:02, 2.36s/it]
9%|▊ | 49834/569592 [1:30:28<455:25:27, 3.15s/it]
9%|▊ | 49834/569592 [1:30:28<455:25:27, 3.15s/it]
9%|▊ | 49835/569592 [1:30:32<529:59:53, 3.67s/it]
9%|▊ | 49835/569592 [1:30:32<529:59:53, 3.67s/it]
9%|▊ | 49836/569592 [1:30:33<410:46:02, 2.85s/it]
9%|▊ | 49836/569592 [1:30:33<410:46:02, 2.85s/it]
9%|▊ | 49837/569592 [1:30:38<494:42:01, 3.43s/it]
9%|▊ | 49837/569592 [1:30:38<494:42:01, 3.43s/it]
9%|▊ | 49838/569592 [1:30:43<544:53:28, 3.77s/it]
9%|▊ | 49838/569592 [1:30:43<544:53:28, 3.77s/it]
9%|▊ | 49839/569592 [1:30:48<594:35:22, 4.12s/it]
9%|▊ | 49839/569592 [1:30:48<594:35:22, 4.12s/it]
9%|▉ | 49840/569592 [1:30:49<455:18:41, 3.15s/it]
9%|▉ | 49840/569592 [1:30:49<455:18:41, 3.15s/it]
9%|▉ | 49841/569592 [1:30:52<461:32:15, 3.20s/it]
9%|▉ | 49841/569592 [1:30:52<461:32:15, 3.20s/it]
9%|▉ | 49842/569592 [1:30:55<467:28:48, 3.24s/it]
9%|▉ | 49842/569592 [1:30:55<467:28:48, 3.24s/it]
9%|▉ | 49843/569592 [1:31:00<516:15:16, 3.58s/it]
9%|▉ | 49843/569592 [1:31:00<516:15:16, 3.58s/it]
9%|▉ | 49844/569592 [1:31:03<510:51:33, 3.54s/it]
9%|▉ | 49844/569592 [1:31:03<510:51:33, 3.54s/it]
9%|▉ | 49845/569592 [1:31:08<582:16:51, 4.03s/it]
9%|▉ | 49845/569592 [1:31:08<582:16:51, 4.03s/it]
9%|▉ | 49846/569592 [1:31:11<542:31:48, 3.76s/it]
9%|▉ | 49846/569592 [1:31:11<542:31:48, 3.76s/it]
9%|▉ | 49847/569592 [1:31:16<582:34:34, 4.04s/it]
9%|▉ | 49847/569592 [1:31:16<582:34:34, 4.04s/it]
9%|▉ | 49848/569592 [1:31:21<621:06:33, 4.30s/it]
9%|▉ | 49848/569592 [1:31:21<621:06:33, 4.30s/it]
9%|▉ | 49849/569592 [1:31:24<575:46:35, 3.99s/it]
9%|▉ | 49849/569592 [1:31:24<575:46:35, 3.99s/it]
9%|▉ | 49850/569592 [1:31:29<631:32:13, 4.37s/it]
9%|▉ | 49850/569592 [1:31:29<631:32:13, 4.37s/it]
9%|▉ | 49851/569592 [1:31:33<584:34:34, 4.05s/it]
9%|▉ | 49851/569592 [1:31:33<584:34:34, 4.05s/it]
9%|▉ | 49852/569592 [1:31:36<558:42:09, 3.87s/it]
9%|▉ | 49852/569592 [1:31:36<558:42:09, 3.87s/it]
9%|▉ | 49853/569592 [1:31:39<525:01:00, 3.64s/it]
9%|▉ | 49853/569592 [1:31:39<525:01:00, 3.64s/it]
9%|▉ | 49854/569592 [1:31:44<572:30:07, 3.97s/it]
9%|▉ | 49854/569592 [1:31:44<572:30:07, 3.97s/it]
9%|▉ | 49855/569592 [1:31:47<544:48:29, 3.77s/it]
9%|▉ | 49855/569592 [1:31:47<544:48:29, 3.77s/it]
9%|▉ | 49856/569592 [1:31:52<564:19:27, 3.91s/it]
9%|▉ | 49856/569592 [1:31:52<564:19:27, 3.91s/it]
9%|▉ | 49857/569592 [1:31:55<541:17:52, 3.75s/it]
9%|▉ | 49857/569592 [1:31:55<541:17:52, 3.75s/it]
9%|▉ | 49858/569592 [1:31:58<521:33:37, 3.61s/it]
9%|▉ | 49858/569592 [1:31:58<521:33:37, 3.61s/it]
9%|▉ | 49859/569592 [1:32:02<542:54:53, 3.76s/it]
9%|▉ | 49859/569592 [1:32:02<542:54:53, 3.76s/it]
9%|▉ | 49860/569592 [1:32:07<596:06:10, 4.13s/it]
9%|▉ | 49860/569592 [1:32:07<596:06:10, 4.13s/it]
9%|▉ | 49861/569592 [1:32:12<635:26:57, 4.40s/it]
9%|▉ | 49861/569592 [1:32:12<635:26:57, 4.40s/it]
9%|▉ | 49862/569592 [1:32:17<645:39:40, 4.47s/it]
9%|▉ | 49862/569592 [1:32:17<645:39:40, 4.47s/it]
9%|▉ | 49863/569592 [1:32:22<659:15:51, 4.57s/it]
9%|▉ | 49863/569592 [1:32:22<659:15:51, 4.57s/it]
9%|▉ | 49864/569592 [1:32:26<625:34:20, 4.33s/it]
9%|▉ | 49864/569592 [1:32:26<625:34:20, 4.33s/it]
9%|▉ | 49865/569592 [1:32:30<615:22:29, 4.26s/it]
9%|▉ | 49865/569592 [1:32:30<615:22:29, 4.26s/it]
9%|▉ | 49866/569592 [1:32:35<638:23:33, 4.42s/it]
9%|▉ | 49866/569592 [1:32:35<638:23:33, 4.42s/it]
9%|▉ | 49867/569592 [1:32:37<575:29:12, 3.99s/it]
9%|▉ | 49867/569592 [1:32:37<575:29:12, 3.99s/it]
9%|▉ | 49868/569592 [1:32:42<604:40:53, 4.19s/it]
9%|▉ | 49868/569592 [1:32:42<604:40:53, 4.19s/it]
9%|▉ | 49869/569592 [1:32:47<634:03:45, 4.39s/it]
9%|▉ | 49869/569592 [1:32:47<634:03:45, 4.39s/it]
9%|▉ | 49870/569592 [1:32:50<594:43:21, 4.12s/it]
9%|▉ | 49870/569592 [1:32:51<594:43:21, 4.12s/it]
9%|▉ | 49871/569592 [1:32:55<619:35:39, 4.29s/it]
9%|▉ | 49871/569592 [1:32:55<619:35:39, 4.29s/it]
9%|▉ | 49872/569592 [1:32:56<472:35:42, 3.27s/it]
9%|▉ | 49872/569592 [1:32:56<472:35:42, 3.27s/it]
9%|▉ | 49873/569592 [1:32:59<464:37:23, 3.22s/it]
9%|▉ | 49873/569592 [1:32:59<464:37:23, 3.22s/it]
9%|▉ | 49874/569592 [1:33:04<536:24:27, 3.72s/it]
9%|▉ | 49874/569592 [1:33:04<536:24:27, 3.72s/it]
9%|▉ | 49875/569592 [1:33:07<517:28:43, 3.58s/it]
9%|▉ | 49875/569592 [1:33:07<517:28:43, 3.58s/it]
9%|▉ | 49876/569592 [1:33:10<493:30:10, 3.42s/it]
9%|▉ | 49876/569592 [1:33:10<493:30:10, 3.42s/it]
9%|▉ | 49877/569592 [1:33:11<385:24:48, 2.67s/it]
9%|▉ | 49877/569592 [1:33:11<385:24:48, 2.67s/it]
9%|▉ | 49878/569592 [1:33:14<402:44:56, 2.79s/it]
9%|▉ | 49878/569592 [1:33:14<402:44:56, 2.79s/it]
9%|▉ | 49879/569592 [1:33:15<323:49:15, 2.24s/it]
9%|▉ | 49879/569592 [1:33:15<323:49:15, 2.24s/it]
9%|▉ | 49880/569592 [1:33:19<367:27:13, 2.55s/it]
9%|▉ | 49880/569592 [1:33:19<367:27:13, 2.55s/it]
9%|▉ | 49881/569592 [1:33:20<297:57:13, 2.06s/it]
9%|▉ | 49881/569592 [1:33:20<297:57:13, 2.06s/it]
9%|▉ | 49882/569592 [1:33:20<248:17:33, 1.72s/it]
9%|▉ | 49882/569592 [1:33:20<248:17:33, 1.72s/it]
9%|▉ | 49883/569592 [1:33:21<218:39:17, 1.51s/it]
9%|▉ | 49883/569592 [1:33:21<218:39:17, 1.51s/it]
9%|▉ | 49884/569592 [1:33:22<192:57:22, 1.34s/it]
9%|▉ | 49884/569592 [1:33:22<192:57:22, 1.34s/it]
9%|▉ | 49885/569592 [1:33:24<206:10:25, 1.43s/it]
9%|▉ | 49885/569592 [1:33:24<206:10:25, 1.43s/it]
9%|▉ | 49886/569592 [1:33:25<184:12:53, 1.28s/it]
9%|▉ | 49886/569592 [1:33:25<184:12:53, 1.28s/it]
9%|▉ | 49887/569592 [1:33:28<260:22:52, 1.80s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▉ | 49887/569592 [1:33:28<260:22:52, 1.80s/it]
9%|▉ | 49888/569592 [1:33:30<284:24:03, 1.97s/it]
9%|▉ | 49888/569592 [1:33:30<284:24:03, 1.97s/it]
9%|▉ | 49889/569592 [1:33:34<363:21:53, 2.52s/it]
9%|▉ | 49889/569592 [1:33:34<363:21:53, 2.52s/it]
9%|▉ | 49890/569592 [1:33:35<299:48:37, 2.08s/it]
9%|▉ | 49890/569592 [1:33:35<299:48:37, 2.08s/it]
9%|▉ | 49891/569592 [1:33:39<364:16:13, 2.52s/it]
9%|▉ | 49891/569592 [1:33:39<364:16:13, 2.52s/it]
9%|▉ | 49892/569592 [1:33:40<325:25:54, 2.25s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (100920000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▉ | 49892/569592 [1:33:40<325:25:54, 2.25s/it]
9%|▉ | 49893/569592 [1:33:44<372:56:22, 2.58s/it]
9%|▉ | 49893/569592 [1:33:44<372:56:22, 2.58s/it]
9%|▉ | 49894/569592 [1:33:45<315:57:51, 2.19s/it]
9%|▉ | 49894/569592 [1:33:45<315:57:51, 2.19s/it]
9%|▉ | 49895/569592 [1:33:49<389:27:25, 2.70s/it]
9%|▉ | 49895/569592 [1:33:49<389:27:25, 2.70s/it]
9%|▉ | 49896/569592 [1:33:51<350:14:10, 2.43s/it]
9%|▉ | 49896/569592 [1:33:51<350:14:10, 2.43s/it]
9%|▉ | 49897/569592 [1:33:54<405:37:08, 2.81s/it]
9%|▉ | 49897/569592 [1:33:54<405:37:08, 2.81s/it]
9%|▉ | 49898/569592 [1:33:56<335:33:10, 2.32s/it]
9%|▉ | 49898/569592 [1:33:56<335:33:10, 2.32s/it]
9%|▉ | 49899/569592 [1:33:59<370:20:42, 2.57s/it]
9%|▉ | 49899/569592 [1:33:59<370:20:42, 2.57s/it]
9%|▉ | 49900/569592 [1:34:01<351:00:00, 2.43s/it]
9%|▉ | 49900/569592 [1:34:01<351:00:00, 2.43s/it]
9%|▉ | 49901/569592 [1:34:05<424:27:30, 2.94s/it]
9%|▉ | 49901/569592 [1:34:05<424:27:30, 2.94s/it]
9%|▉ | 49902/569592 [1:34:06<338:30:11, 2.34s/it]
9%|▉ | 49902/569592 [1:34:06<338:30:11, 2.34s/it]
9%|▉ | 49903/569592 [1:34:09<368:48:57, 2.55s/it]
9%|▉ | 49903/569592 [1:34:09<368:48:57, 2.55s/it]
9%|▉ | 49904/569592 [1:34:11<353:51:40, 2.45s/it]
9%|▉ | 49904/569592 [1:34:11<353:51:40, 2.45s/it]
9%|▉ | 49905/569592 [1:34:16<448:32:35, 3.11s/it]
9%|▉ | 49905/569592 [1:34:16<448:32:35, 3.11s/it]
9%|▉ | 49906/569592 [1:34:18<389:39:03, 2.70s/it]
9%|▉ | 49906/569592 [1:34:18<389:39:03, 2.70s/it]
9%|▉ | 49907/569592 [1:34:20<361:17:47, 2.50s/it]
9%|▉ | 49907/569592 [1:34:20<361:17:47, 2.50s/it]
9%|▉ | 49908/569592 [1:34:21<327:06:37, 2.27s/it]
9%|▉ | 49908/569592 [1:34:21<327:06:37, 2.27s/it]
9%|▉ | 49909/569592 [1:34:26<432:02:17, 2.99s/it]
9%|▉ | 49909/569592 [1:34:26<432:02:17, 2.99s/it]
9%|▉ | 49910/569592 [1:34:27<345:09:23, 2.39s/it]
9%|▉ | 49910/569592 [1:34:27<345:09:23, 2.39s/it]
9%|▉ | 49911/569592 [1:34:29<332:22:19, 2.30s/it]
9%|▉ | 49911/569592 [1:34:29<332:22:19, 2.30s/it]
9%|▉ | 49912/569592 [1:34:31<333:48:50, 2.31s/it]
9%|▉ | 49912/569592 [1:34:31<333:48:50, 2.31s/it]
9%|▉ | 49913/569592 [1:34:36<416:04:34, 2.88s/it]
9%|▉ | 49913/569592 [1:34:36<416:04:34, 2.88s/it]
9%|▉ | 49914/569592 [1:34:37<340:10:46, 2.36s/it]
9%|▉ | 49914/569592 [1:34:37<340:10:46, 2.36s/it]
9%|▉ | 49915/569592 [1:34:39<330:56:25, 2.29s/it]
9%|▉ | 49915/569592 [1:34:39<330:56:25, 2.29s/it]
9%|▉ | 49916/569592 [1:34:43<389:07:19, 2.70s/it]
9%|▉ | 49916/569592 [1:34:43<389:07:19, 2.70s/it]
9%|▉ | 49917/569592 [1:34:46<414:23:26, 2.87s/it]
9%|▉ | 49917/569592 [1:34:46<414:23:26, 2.87s/it]
9%|▉ | 49918/569592 [1:34:48<381:06:58, 2.64s/it]
9%|▉ | 49918/569592 [1:34:48<381:06:58, 2.64s/it]
9%|▉ | 49919/569592 [1:34:50<342:36:13, 2.37s/it]
9%|▉ | 49919/569592 [1:34:50<342:36:13, 2.37s/it]
9%|▉ | 49920/569592 [1:34:52<348:13:37, 2.41s/it]
9%|▉ | 49920/569592 [1:34:52<348:13:37, 2.41s/it]
9%|▉ | 49921/569592 [1:34:56<416:32:24, 2.89s/it]
9%|▉ | 49921/569592 [1:34:56<416:32:24, 2.89s/it]
9%|▉ | 49922/569592 [1:34:58<375:57:04, 2.60s/it]
9%|▉ | 49922/569592 [1:34:58<375:57:04, 2.60s/it]
9%|▉ | 49923/569592 [1:35:00<347:05:04, 2.40s/it]
9%|▉ | 49923/569592 [1:35:00<347:05:04, 2.40s/it]
9%|▉ | 49924/569592 [1:35:02<342:58:42, 2.38s/it]
9%|▉ | 49924/569592 [1:35:02<342:58:42, 2.38s/it]
9%|▉ | 49925/569592 [1:35:07<428:29:42, 2.97s/it]
9%|▉ | 49925/569592 [1:35:07<428:29:42, 2.97s/it]
9%|▉ | 49926/569592 [1:35:08<371:22:41, 2.57s/it]
9%|▉ | 49926/569592 [1:35:08<371:22:41, 2.57s/it]
9%|▉ | 49927/569592 [1:35:10<347:48:23, 2.41s/it]
9%|▉ | 49927/569592 [1:35:10<347:48:23, 2.41s/it]
9%|▉ | 49928/569592 [1:35:13<372:31:19, 2.58s/it]
9%|▉ | 49928/569592 [1:35:13<372:31:19, 2.58s/it]
9%|▉ | 49929/569592 [1:35:17<407:19:26, 2.82s/it]
9%|▉ | 49929/569592 [1:35:17<407:19:26, 2.82s/it]
9%|▉ | 49930/569592 [1:35:18<347:37:42, 2.41s/it]
9%|▉ | 49930/569592 [1:35:18<347:37:42, 2.41s/it]
9%|▉ | 49931/569592 [1:35:21<351:29:55, 2.44s/it]
9%|▉ | 49931/569592 [1:35:21<351:29:55, 2.44s/it]
9%|▉ | 49932/569592 [1:35:23<367:28:27, 2.55s/it]
9%|▉ | 49932/569592 [1:35:23<367:28:27, 2.55s/it]
9%|▉ | 49933/569592 [1:35:27<417:24:56, 2.89s/it]
9%|▉ | 49933/569592 [1:35:27<417:24:56, 2.89s/it]
9%|▉ | 49934/569592 [1:35:28<334:35:29, 2.32s/it]
9%|▉ | 49934/569592 [1:35:28<334:35:29, 2.32s/it]
9%|▉ | 49935/569592 [1:35:33<448:18:31, 3.11s/it]
9%|▉ | 49935/569592 [1:35:33<448:18:31, 3.11s/it]
9%|▉ | 49936/569592 [1:35:34<353:47:03, 2.45s/it]
9%|▉ | 49936/569592 [1:35:34<353:47:03, 2.45s/it]
9%|▉ | 49937/569592 [1:35:37<375:24:04, 2.60s/it]
9%|▉ | 49937/569592 [1:35:37<375:24:04, 2.60s/it]
9%|▉ | 49938/569592 [1:35:40<412:06:46, 2.85s/it]
9%|▉ | 49938/569592 [1:35:40<412:06:46, 2.85s/it]
9%|▉ | 49939/569592 [1:35:44<449:43:18, 3.12s/it]
9%|▉ | 49939/569592 [1:35:44<449:43:18, 3.12s/it]
9%|▉ | 49940/569592 [1:35:48<463:14:22, 3.21s/it]
9%|▉ | 49940/569592 [1:35:48<463:14:22, 3.21s/it]
9%|▉ | 49941/569592 [1:35:51<454:19:03, 3.15s/it]
9%|▉ | 49941/569592 [1:35:51<454:19:03, 3.15s/it]
9%|▉ | 49942/569592 [1:35:52<358:47:10, 2.49s/it]
9%|▉ | 49942/569592 [1:35:52<358:47:10, 2.49s/it]
9%|▉ | 49943/569592 [1:35:52<292:06:57, 2.02s/it]
9%|▉ | 49943/569592 [1:35:52<292:06:57, 2.02s/it]
9%|▉ | 49944/569592 [1:35:58<435:14:50, 3.02s/it]
9%|▉ | 49944/569592 [1:35:58<435:14:50, 3.02s/it]
9%|▉ | 49945/569592 [1:35:59<347:23:21, 2.41s/it]
9%|▉ | 49945/569592 [1:35:59<347:23:21, 2.41s/it]
9%|▉ | 49946/569592 [1:36:03<426:50:39, 2.96s/it]
9%|▉ | 49946/569592 [1:36:03<426:50:39, 2.96s/it]
/home/zhaojiang/.local/lib/python3.10/site-packages/PIL/Image.py:3368: DecompressionBombWarning: Image size (98420112 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
warnings.warn(
9%|▉ | 49947/569592 [1:36:07<463:09:43, 3.21s/it]
9%|▉ | 49947/569592 [1:36:07<463:09:43, 3.21s/it]
9%|▉ | 49948/569592 [1:36:08<366:16:03, 2.54s/it]
9%|▉ | 49949/569592 [1:36:11<400:30:36, 2.77s/it]
9%|▉ | 49950/569592 [1:36:14<421:40:04, 2.92s/it]
9%|▉ | 49951/569592 [1:36:15<339:03:32, 2.35s/it]
9%|▉ | 49952/569592 [1:36:20<427:41:44, 2.96s/it]
9%|▉ | 49953/569592 [1:36:23<454:36:29, 3.15s/it]
9%|▉ | 49954/569592 [1:36:27<471:26:56, 3.27s/it]
9%|▉ | 49955/569592 [1:36:28<368:47:48, 2.55s/it]
9%|▉ | 49956/569592 [1:36:32<446:21:54, 3.09s/it]
9%|▉ | 49957/569592 [1:36:37<509:14:27, 3.53s/it]
9%|▉ | 49958/569592 [1:36:40<514:51:31, 3.57s/it]
9%|▉ | 49959/569592 [1:36:45<572:10:41, 3.96s/it]
9%|▉ | 49960/569592 [1:36:50<596:38:40, 4.13s/it]
9%|▉ | 49961/569592 [1:36:54<609:13:15, 4.22s/it]
9%|▉ | 49962/569592 [1:36:55<468:33:46, 3.25s/it]
9%|▉ | 49963/569592 [1:37:01<576:27:02, 3.99s/it]
9%|▉ | 49964/569592 [1:37:06<614:40:20, 4.26s/it]
9%|▉ | 49965/569592 [1:37:10<624:00:37, 4.32s/it]
9%|▉ | 49966/569592 [1:37:14<594:16:17, 4.12s/it]
9%|▉ | 49967/569592 [1:37:15<453:31:43, 3.14s/it]
9%|▉ | 49968/569592 [1:37:19<491:38:48, 3.41s/it]
9%|▉ | 49969/569592 [1:37:24<566:30:27, 3.92s/it]
9%|▉ | 49970/569592 [1:37:29<600:59:21, 4.16s/it]
9%|▉ | 49971/569592 [1:37:32<557:51:15, 3.86s/it]
9%|▉ | 49972/569592 [1:37:36<568:13:00, 3.94s/it]
9%|▉ | 49973/569592 [1:37:37<436:39:14, 3.03s/it]
9%|▉ | 49974/569592 [1:37:40<451:46:46, 3.13s/it]
9%|▉ | 49975/569592 [1:37:45<539:56:41, 3.74s/it]
9%|▉ | 49976/569592 [1:37:50<576:14:55, 3.99s/it]
9%|▉ | 49977/569592 [1:37:55<611:15:58, 4.23s/it]
9%|▉ | 49978/569592 [1:37:59<622:22:27, 4.31s/it]
9%|▉ | 49979/569592 [1:38:04<641:07:35, 4.44s/it]
9%|▉ | 49980/569592 [1:38:09<675:51:51, 4.68s/it]
9%|▉ | 49981/569592 [1:38:14<665:25:53, 4.61s/it]
9%|▉ | 49982/569592 [1:38:18<659:21:05, 4.57s/it]
9%|▉ | 49983/569592 [1:38:23<668:04:07, 4.63s/it]
9%|▉ | 49984/569592 [1:38:27<625:18:45, 4.33s/it]
9%|▉ | 49985/569592 [1:38:27<475:15:07, 3.29s/it]
9%|▉ | 49986/569592 [1:38:32<538:49:46, 3.73s/it]
9%|▉ | 49987/569592 [1:38:35<521:13:25, 3.61s/it]
9%|▉ | 49988/569592 [1:38:40<539:01:06, 3.73s/it]
9%|▉ | 49989/569592 [1:38:45<599:16:23, 4.15s/it]
9%|▉ | 49990/569592 [1:38:48<583:46:18, 4.04s/it]
9%|▉ | 49991/569592 [1:38:53<612:29:07, 4.24s/it]
9%|▉ | 49992/569592 [1:38:57<579:15:52, 4.01s/it]
9%|▉ | 49993/569592 [1:39:00<535:44:26, 3.71s/it]
9%|▉ | 49994/569592 [1:39:03<518:36:48, 3.59s/it]
9%|▉ | 49995/569592 [1:39:04<402:57:00, 2.79s/it]
9%|▉ | 49996/569592 [1:39:05<326:55:27, 2.27s/it]
9%|▉ | 49997/569592 [1:39:06<269:27:13, 1.87s/it]
9%|▉ | 49998/569592 [1:39:09<337:31:12, 2.34s/it]
9%|▉ | 49999/569592 [1:39:10<277:28:44, 1.92s/it]
9%|▉ | 50000/569592 [1:39:11<234:50:06, 1.63s/it]
Saving model checkpoint to /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000/config.json
Configuration saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000/generation_config.json
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split into 6 checkpoint shards. You can find where each parameter has been saved in the index located at /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000/model.safetensors.index.json.
tokenizer config file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000/tokenizer_config.json
Special tokens file saved in /fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-50000/special_tokens_map.json
Deleting older checkpoint [/fsx_0/user/zhaojiang/models/qwen-vl-gen/checkpoint-49000] due to args.save_total_limit
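Note on the save/rotate/upload sequence above: it corresponds to the Trainer's periodic checkpointing. A minimal sketch of TrainingArguments consistent with these messages, not the run's actual script; the interval and retention values are inferred from the checkpoint-49000 -> checkpoint-50000 rotation, and hub push is an assumption suggested by the LFS upload that follows:

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="/fsx_0/user/zhaojiang/models/qwen-vl-gen",
        save_strategy="steps",
        save_steps=1000,     # checkpoint-49000 -> checkpoint-50000 spacing seen in the log
        save_total_limit=1,  # assumed: why checkpoint-49000 is deleted once 50000 is saved
        push_to_hub=True,    # assumed: matches the "Upload 132 LFS files" transfer below
    )

The 5GB shard cap mentioned above is the default maximum shard size used when the model is serialized, which is why the weights land in 6 safetensors shards.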
model-00001-of-00006.safetensors: 0%| | 0.00/4.97G [00:00<?, ?B/s]
model-00004-of-00006.safetensors: 0%| | 0.00/5.00G [00:00<?, ?B/s]
Upload 132 LFS files: 0%| | 0/132 [00:00<?, ?it/s]
rng_state_0.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 162kB/s]
rng_state_1.pth: 100%|██████████| 16.0k/16.0k [00:00<00:00, 234kB/s]
[... rng_state_2.pth through rng_state_127.pth each complete at 100%|██████████| 16.0k/16.0k; interleaved tqdm redraw frames and stray ANSI cursor-up codes ([A) omitted ...]
scheduler.pt: 100%|██████████| 1.06k/1.06k [00:00<00:00, 3.29kB/s]
training_args.bin: 100%|██████████| 7.35k/7.35k [00:00<00:00, 25.4kB/s]
model-00001-of-00006.safetensors: 38%|███▊ | 1.87G/4.97G [08:11<01:00, 51.0MB/s]
model-00004-of-00006.safetensors: 35%|███▌ | 1.77G/5.00G [08:11<01:10, 45.8MB/s]
model-00001-of-00006.safetensors: 38%|███▊ | 1.88G/4.97G [08:11<00:51, 60.2MB/s][A
model-00004-of-00006.safetensors: 36%|███▌ | 1.78G/5.00G [08:12<01:10, 45.5MB/s][A[A[A
model-00001-of-00006.safetensors: 38%|███▊ | 1.89G/4.97G [08:12<00:56, 54.1MB/s][A
model-00004-of-00006.safetensors: 36%|███▌ | 1.79G/5.00G [08:12<01:02, 51.2MB/s][A[A[A
model-00001-of-00006.safetensors: 38%|███▊ | 1.90G/4.97G [08:12<01:10, 43.6MB/s][A
model-00004-of-00006.safetensors: 36%|███▌ | 1.81G/5.00G [08:12<00:57, 55.6MB/s][A[A[A
model-00004-of-00006.safetensors: 36%|███▋ | 1.82G/5.00G [08:12<00:53, 59.1MB/s][A[A[A
model-00001-of-00006.safetensors: 39%|███▊ | 1.92G/4.97G [08:12<01:04, 47.0MB/s][A
model-00004-of-00006.safetensors: 37%|███▋ | 1.84G/5.00G [08:12<00:51, 61.4MB/s][A[A[A
model-00004-of-00006.safetensors: 37%|███▋ | 1.86G/5.00G [08:13<00:52, 59.5MB/s][A[A[A
model-00004-of-00006.safetensors: 37%|███▋ | 1.87G/5.00G [08:13<00:50, 62.3MB/s][A[A[A
model-00001-of-00006.safetensors: 39%|███▉ | 1.94G/4.97G [08:13<01:39, 30.4MB/s][A
model-00004-of-00006.safetensors: 38%|███▊ | 1.89G/5.00G [08:13<00:50, 61.9MB/s][A[A[A
model-00001-of-00006.safetensors: 39%|███▉ | 1.95G/4.97G [08:13<01:20, 37.2MB/s][A
model-00004-of-00006.safetensors: 38%|███▊ | 1.90G/5.00G [08:14<00:51, 60.2MB/s][A[A[A
model-00001-of-00006.safetensors: 40%|███▉ | 1.97G/4.97G [08:14<01:08, 43.6MB/s][A
model-00001-of-00006.safetensors: 40%|███▉ | 1.98G/4.97G [08:14<01:00, 49.6MB/s][A
model-00004-of-00006.safetensors: 38%|███▊ | 1.92G/5.00G [08:14<01:02, 49.2MB/s][A[A[A
model-00001-of-00006.safetensors: 40%|████ | 2.00G/4.97G [08:14<00:53, 55.3MB/s][A
model-00004-of-00006.safetensors: 39%|███▊ | 1.94G/5.00G [08:14<00:56, 54.4MB/s][A[A[A
model-00001-of-00006.safetensors: 41%|████ | 2.02G/4.97G [08:14<00:51, 56.9MB/s][A
model-00004-of-00006.safetensors: 39%|███▉ | 1.95G/5.00G [08:14<00:51, 59.1MB/s][A[A[A
model-00001-of-00006.safetensors: 41%|████ | 2.03G/4.97G [08:15<00:47, 62.1MB/s][A
model-00004-of-00006.safetensors: 39%|███▉ | 1.97G/5.00G [08:15<00:48, 62.1MB/s][A[A[A
model-00001-of-00006.safetensors: 41%|████ | 2.05G/4.97G [08:15<00:54, 53.1MB/s][A
model-00004-of-00006.safetensors: 40%|███▉ | 1.98G/5.00G [08:15<00:53, 56.7MB/s][A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.06G/4.97G [08:15<00:48, 59.4MB/s][A
model-00004-of-00006.safetensors: 40%|████ | 2.00G/5.00G [08:15<00:49, 60.9MB/s][A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.08G/4.97G [08:15<00:46, 62.2MB/s][A
model-00004-of-00006.safetensors: 40%|████ | 2.02G/5.00G [08:15<00:49, 60.6MB/s][A[A[A
model-00001-of-00006.safetensors: 42%|████▏ | 2.10G/4.97G [08:16<00:44, 64.5MB/s][A
model-00001-of-00006.safetensors: 43%|████▎ | 2.11G/4.97G [08:16<00:41, 68.0MB/s][A
model-00004-of-00006.safetensors: 41%|████ | 2.03G/5.00G [08:16<00:50, 58.3MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.13G/4.97G [08:16<00:40, 69.8MB/s][A
model-00004-of-00006.safetensors: 41%|████ | 2.05G/5.00G [08:16<00:48, 61.3MB/s][A[A[A
model-00004-of-00006.safetensors: 41%|████▏ | 2.06G/5.00G [08:16<00:45, 64.4MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.14G/4.97G [08:16<00:45, 62.4MB/s][A
model-00004-of-00006.safetensors: 42%|████▏ | 2.08G/5.00G [08:16<00:45, 64.1MB/s][A[A[A
model-00001-of-00006.safetensors: 43%|████▎ | 2.16G/4.97G [08:17<00:42, 66.7MB/s][A
model-00004-of-00006.safetensors: 42%|████▏ | 2.10G/5.00G [08:17<00:44, 65.3MB/s][A[A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.18G/4.97G [08:17<00:44, 63.1MB/s][A
model-00004-of-00006.safetensors: 42%|████▏ | 2.11G/5.00G [08:17<00:44, 65.2MB/s][A[A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.19G/4.97G [08:17<00:50, 54.5MB/s][A
model-00004-of-00006.safetensors: 43%|████▎ | 2.13G/5.00G [08:17<00:46, 62.3MB/s][A[A[A
model-00001-of-00006.safetensors: 44%|████▍ | 2.21G/4.97G [08:17<00:48, 57.1MB/s][A
model-00004-of-00006.safetensors: 43%|████▎ | 2.14G/5.00G [08:17<00:45, 62.8MB/s][A[A[A
model-00001-of-00006.safetensors: 45%|████▍ | 2.22G/4.97G [08:18<00:49, 55.8MB/s][A
model-00004-of-00006.safetensors: 43%|████▎ | 2.16G/5.00G [08:18<00:47, 60.0MB/s][A[A[A
model-00001-of-00006.safetensors: 45%|████▌ | 2.24G/4.97G [08:18<00:46, 59.2MB/s][A
model-00004-of-00006.safetensors: 44%|████▎ | 2.18G/5.00G [08:18<00:44, 64.0MB/s][A[A[A
model-00001-of-00006.safetensors: 45%|████▌ | 2.26G/4.97G [08:18<00:42, 63.8MB/s][A
model-00001-of-00006.safetensors: 46%|████▌ | 2.27G/4.97G [08:19<00:46, 57.7MB/s][A
model-00001-of-00006.safetensors: 46%|████▌ | 2.29G/4.97G [08:19<00:45, 58.8MB/s][A
model-00001-of-00006.safetensors: 46%|████▋ | 2.30G/4.97G [08:19<00:48, 54.4MB/s][A
model-00001-of-00006.safetensors: 47%|████▋ | 2.32G/4.97G [08:19<00:44, 59.1MB/s][A
model-00001-of-00006.safetensors: 47%|████▋ | 2.34G/4.97G [08:20<00:48, 54.4MB/s][A
model-00001-of-00006.safetensors: 47%|████▋ | 2.35G/4.97G [08:20<00:45, 57.8MB/s][A
model-00001-of-00006.safetensors: 48%|████▊ | 2.37G/4.97G [08:20<00:40, 64.6MB/s][A
model-00001-of-00006.safetensors: 48%|████▊ | 2.38G/4.97G [08:20<00:40, 64.2MB/s][A
model-00001-of-00006.safetensors: 48%|████▊ | 2.40G/4.97G [08:21<00:39, 64.5MB/s][A
model-00001-of-00006.safetensors: 49%|████▊ | 2.42G/4.97G [08:21<00:39, 65.0MB/s][A
model-00004-of-00006.safetensors: 44%|████▍ | 2.19G/5.00G [08:21<03:09, 14.8MB/s][A[A[A
model-00001-of-00006.safetensors: 49%|████▉ | 2.43G/4.97G [08:21<00:42, 60.2MB/s][A
model-00004-of-00006.safetensors: 44%|████▍ | 2.21G/5.00G [08:21<02:26, 19.0MB/s][A[A[A
model-00001-of-00006.safetensors: 49%|████▉ | 2.45G/4.97G [08:21<00:41, 61.2MB/s][A
model-00004-of-00006.safetensors: 44%|████▍ | 2.22G/5.00G [08:22<01:55, 24.1MB/s][A[A[A
model-00001-of-00006.safetensors: 50%|████▉ | 2.46G/4.97G [08:22<00:40, 61.2MB/s][A
model-00004-of-00006.safetensors: 45%|████▍ | 2.24G/5.00G [08:22<01:30, 30.5MB/s][A[A[A
model-00001-of-00006.safetensors: 50%|████▉ | 2.48G/4.97G [08:22<00:39, 63.4MB/s][A
model-00004-of-00006.safetensors: 45%|████▌ | 2.26G/5.00G [08:22<01:16, 35.7MB/s][A[A[A
model-00001-of-00006.safetensors: 50%|█████ | 2.50G/4.97G [08:22<00:41, 58.9MB/s][A
model-00004-of-00006.safetensors: 45%|████▌ | 2.27G/5.00G [08:22<01:07, 40.7MB/s][A[A[A
model-00001-of-00006.safetensors: 51%|█████ | 2.51G/4.97G [08:22<00:41, 58.6MB/s][A
model-00004-of-00006.safetensors: 46%|████▌ | 2.29G/5.00G [08:23<01:00, 45.1MB/s][A[A[A
model-00001-of-00006.safetensors: 51%|█████ | 2.53G/4.97G [08:23<00:43, 55.7MB/s][A
model-00004-of-00006.safetensors: 46%|████▌ | 2.30G/5.00G [08:23<00:55, 48.4MB/s][A[A[A
model-00004-of-00006.safetensors: 46%|████▋ | 2.32G/5.00G [08:24<01:13, 36.6MB/s][A[A[A
model-00001-of-00006.safetensors: 51%|█████ | 2.54G/4.97G [08:24<01:07, 35.8MB/s][A
model-00001-of-00006.safetensors: 52%|█████▏ | 2.56G/4.97G [08:24<00:59, 40.7MB/s][A
model-00004-of-00006.safetensors: 47%|████▋ | 2.34G/5.00G [08:24<01:18, 33.7MB/s][A[A[A
model-00004-of-00006.safetensors: 47%|████▋ | 2.35G/5.00G [08:24<01:04, 41.1MB/s][A[A[A
model-00001-of-00006.safetensors: 52%|█████▏ | 2.58G/4.97G [08:24<01:06, 36.1MB/s][A
model-00004-of-00006.safetensors: 47%|████▋ | 2.37G/5.00G [08:25<01:04, 41.1MB/s][A[A[A
model-00001-of-00006.safetensors: 52%|█████▏ | 2.59G/4.97G [08:25<00:57, 41.5MB/s][A
model-00004-of-00006.safetensors: 48%|████▊ | 2.38G/5.00G [08:25<00:56, 46.0MB/s][A[A[A
model-00001-of-00006.safetensors: 53%|█████▎ | 2.61G/4.97G [08:25<00:52, 44.7MB/s][A
model-00004-of-00006.safetensors: 48%|████▊ | 2.40G/5.00G [08:25<00:49, 52.6MB/s][A[A[A
model-00001-of-00006.safetensors: 53%|█████▎ | 2.62G/4.97G [08:25<00:46, 50.4MB/s][A
model-00001-of-00006.safetensors: 53%|█████▎ | 2.64G/4.97G [08:26<00:45, 50.6MB/s][A
model-00004-of-00006.safetensors: 48%|████▊ | 2.42G/5.00G [08:26<00:56, 45.4MB/s][A[A[A
model-00001-of-00006.safetensors: 53%|█████▎ | 2.66G/4.97G [08:26<00:43, 52.7MB/s][A
model-00004-of-00006.safetensors: 49%|████▊ | 2.43G/5.00G [08:26<00:52, 48.7MB/s][A[A[A
model-00001-of-00006.safetensors: 54%|█████▍ | 2.67G/4.97G [08:26<00:40, 56.6MB/s][A
model-00004-of-00006.safetensors: 49%|████▉ | 2.45G/5.00G [08:26<00:48, 52.9MB/s][A[A[A
model-00001-of-00006.safetensors: 54%|█████▍ | 2.69G/4.97G [08:26<00:38, 58.9MB/s][A
model-00004-of-00006.safetensors: 49%|████▉ | 2.46G/5.00G [08:26<00:43, 58.9MB/s][A[A[A
model-00004-of-00006.safetensors: 50%|████▉ | 2.48G/5.00G [08:27<00:40, 61.7MB/s][A[A[A
model-00001-of-00006.safetensors: 54%|█████▍ | 2.70G/4.97G [08:27<00:38, 58.4MB/s][A
model-00004-of-00006.safetensors: 50%|████▉ | 2.50G/5.00G [08:27<00:40, 62.0MB/s][A[A[A
model-00001-of-00006.safetensors: 55%|█████▍ | 2.72G/4.97G [08:27<00:35, 62.4MB/s][A
model-00001-of-00006.safetensors: 55%|█████▌ | 2.74G/4.97G [08:27<00:35, 63.1MB/s][A
model-00004-of-00006.safetensors: 50%|█████ | 2.51G/5.00G [08:27<00:39, 62.2MB/s][A[A[A
model-00004-of-00006.safetensors: 51%|█████ | 2.53G/5.00G [08:27<00:38, 63.7MB/s][A[A[A
model-00001-of-00006.safetensors: 55%|█████▌ | 2.75G/4.97G [08:27<00:36, 60.2MB/s][A
model-00001-of-00006.safetensors: 56%|█████▌ | 2.77G/4.97G [08:28<00:33, 65.9MB/s][A
model-00004-of-00006.safetensors: 51%|█████ | 2.54G/5.00G [08:27<00:38, 64.2MB/s][A[A[A
model-00001-of-00006.safetensors: 56%|█████▌ | 2.78G/4.97G [08:28<00:32, 67.3MB/s][A
model-00004-of-00006.safetensors: 51%|█████ | 2.56G/5.00G [08:28<00:37, 64.2MB/s][A[A[A
model-00001-of-00006.safetensors: 56%|█████▋ | 2.80G/4.97G [08:28<00:30, 71.5MB/s][A
model-00004-of-00006.safetensors: 52%|█████▏ | 2.58G/5.00G [08:28<00:35, 68.1MB/s][A[A[A
model-00001-of-00006.safetensors: 57%|█████▋ | 2.82G/4.97G [08:28<00:31, 68.3MB/s][A
model-00004-of-00006.safetensors: 52%|█████▏ | 2.59G/5.00G [08:28<00:36, 65.7MB/s][A[A[A
model-00001-of-00006.safetensors: 57%|█████▋ | 2.83G/4.97G [08:28<00:31, 68.1MB/s][A
model-00004-of-00006.safetensors: 52%|█████▏ | 2.61G/5.00G [08:28<00:35, 66.6MB/s][A[A[A
model-00001-of-00006.safetensors: 57%|█████▋ | 2.85G/4.97G [08:29<00:32, 64.5MB/s][A
model-00004-of-00006.safetensors: 52%|█████▏ | 2.62G/5.00G [08:29<00:36, 64.8MB/s][A[A[A
model-00004-of-00006.safetensors: 53%|█████▎ | 2.64G/5.00G [08:29<00:34, 69.0MB/s][A[A[A
model-00004-of-00006.safetensors: 53%|█████▎ | 2.66G/5.00G [08:29<00:35, 67.0MB/s][A[A[A
model-00001-of-00006.safetensors: 58%|█████▊ | 2.86G/4.97G [08:29<00:42, 49.8MB/s][A
model-00004-of-00006.safetensors: 53%|█████▎ | 2.67G/5.00G [08:29<00:34, 68.4MB/s][A[A[A
model-00001-of-00006.safetensors: 58%|█████▊ | 2.88G/4.97G [08:29<00:41, 50.9MB/s][A
model-00004-of-00006.safetensors: 54%|█████▍ | 2.69G/5.00G [08:30<00:34, 66.4MB/s][A[A[A
model-00001-of-00006.safetensors: 58%|█████▊ | 2.90G/4.97G [08:30<00:38, 53.6MB/s][A
model-00004-of-00006.safetensors: 54%|█████▍ | 2.70G/5.00G [08:30<00:35, 64.7MB/s][A[A[A
model-00001-of-00006.safetensors: 59%|█████▊ | 2.91G/4.97G [08:30<00:35, 58.1MB/s][A
model-00001-of-00006.safetensors: 59%|█████▉ | 2.93G/4.97G [08:30<00:42, 47.8MB/s][A
model-00001-of-00006.safetensors: 59%|█████▉ | 2.94G/4.97G [08:31<00:39, 51.1MB/s][A
model-00004-of-00006.safetensors: 54%|█████▍ | 2.72G/5.00G [08:31<01:15, 30.2MB/s][A[A[A
model-00001-of-00006.safetensors: 60%|█████▉ | 2.96G/4.97G [08:31<00:50, 39.5MB/s][A
model-00004-of-00006.safetensors: 55%|█████▍ | 2.74G/5.00G [08:31<01:04, 35.0MB/s][A[A[A
model-00001-of-00006.safetensors: 60%|█████▉ | 2.98G/4.97G [08:32<00:44, 44.3MB/s][A
model-00004-of-00006.safetensors: 55%|█████▌ | 2.75G/5.00G [08:32<00:55, 40.6MB/s][A[A[A
model-00001-of-00006.safetensors: 60%|██████ | 2.99G/4.97G [08:32<00:39, 49.6MB/s][A
model-00004-of-00006.safetensors: 55%|█████▌ | 2.77G/5.00G [08:32<00:47, 47.4MB/s][A[A[A
model-00001-of-00006.safetensors: 61%|██████ | 3.01G/4.97G [08:32<00:35, 54.4MB/s][A
model-00004-of-00006.safetensors: 56%|█████▌ | 2.78G/5.00G [08:32<00:42, 52.5MB/s][A[A[A
model-00001-of-00006.safetensors: 61%|██████ | 3.02G/4.97G [08:32<00:34, 56.0MB/s][A
model-00004-of-00006.safetensors: 56%|█████▌ | 2.80G/5.00G [08:32<00:42, 52.0MB/s][A[A[A
model-00001-of-00006.safetensors: 61%|██████ | 3.04G/4.97G [08:33<00:32, 59.4MB/s][A
model-00004-of-00006.safetensors: 56%|█████▋ | 2.82G/5.00G [08:33<00:38, 57.1MB/s][A[A[A
model-00001-of-00006.safetensors: 62%|██████▏ | 3.06G/4.97G [08:33<00:29, 65.0MB/s][A
model-00004-of-00006.safetensors: 57%|█████▋ | 2.83G/5.00G [08:33<00:39, 55.6MB/s][A[A[A
model-00001-of-00006.safetensors: 62%|██████▏ | 3.07G/4.97G [08:33<00:30, 61.1MB/s][A
model-00004-of-00006.safetensors: 57%|█████▋ | 2.85G/5.00G [08:33<00:37, 56.8MB/s][A[A[A
model-00001-of-00006.safetensors: 62%|██████▏ | 3.09G/4.97G [08:33<00:30, 61.8MB/s][A
model-00004-of-00006.safetensors: 57%|█████▋ | 2.86G/5.00G [08:33<00:38, 56.0MB/s][A[A[A
model-00001-of-00006.safetensors: 63%|██████▎ | 3.10G/4.97G [08:33<00:28, 65.8MB/s][A
model-00004-of-00006.safetensors: 58%|█████▊ | 2.88G/5.00G [08:34<00:35, 60.1MB/s][A[A[A
model-00001-of-00006.safetensors: 63%|██████▎ | 3.12G/4.97G [08:34<00:28, 64.1MB/s][A
model-00004-of-00006.safetensors: 58%|█████▊ | 2.90G/5.00G [08:34<00:34, 61.8MB/s][A[A[A
model-00001-of-00006.safetensors: 63%|██████▎ | 3.14G/4.97G [08:34<00:29, 63.0MB/s][A
model-00001-of-00006.safetensors: 63%|██████▎ | 3.15G/4.97G [08:34<00:27, 65.4MB/s][A
model-00004-of-00006.safetensors: 58%|█████▊ | 2.91G/5.00G [08:34<00:42, 49.6MB/s][A[A[A
model-00001-of-00006.safetensors: 64%|██████▍ | 3.17G/4.97G [08:34<00:26, 66.7MB/s][A
model-00004-of-00006.safetensors: 59%|█████▊ | 2.93G/5.00G [08:35<00:38, 53.6MB/s][A[A[A
model-00001-of-00006.safetensors: 64%|██████▍ | 3.18G/4.97G [08:35<00:25, 69.3MB/s][A
model-00004-of-00006.safetensors: 59%|█████▉ | 2.94G/5.00G [08:35<00:34, 59.0MB/s][A[A[A
model-00001-of-00006.safetensors: 64%|██████▍ | 3.20G/4.97G [08:35<00:25, 68.4MB/s][A
model-00004-of-00006.safetensors: 59%|█████▉ | 2.96G/5.00G [08:35<00:34, 59.9MB/s][A[A[A
model-00001-of-00006.safetensors: 65%|██████▍ | 3.22G/4.97G [08:35<00:25, 68.9MB/s][A
model-00001-of-00006.safetensors: 65%|██████▌ | 3.23G/4.97G [08:35<00:25, 67.2MB/s][A
model-00004-of-00006.safetensors: 60%|█████▉ | 2.98G/5.00G [08:35<00:37, 53.7MB/s][A[A[A
model-00001-of-00006.safetensors: 65%|██████▌ | 3.25G/4.97G [08:36<00:25, 68.7MB/s][A
model-00001-of-00006.safetensors: 66%|██████▌ | 3.26G/4.97G [08:36<00:24, 68.4MB/s][A
model-00004-of-00006.safetensors: 60%|█████▉ | 2.99G/5.00G [08:36<00:42, 47.1MB/s][A[A[A
model-00001-of-00006.safetensors: 66%|██████▌ | 3.28G/4.97G [08:36<00:24, 69.3MB/s][A
model-00001-of-00006.safetensors: 66%|██████▋ | 3.30G/4.97G [08:36<00:24, 69.2MB/s][A
model-00004-of-00006.safetensors: 60%|██████ | 3.01G/5.00G [08:36<00:44, 44.3MB/s][A[A[A
model-00001-of-00006.safetensors: 67%|██████▋ | 3.31G/4.97G [08:37<00:23, 69.4MB/s][A
model-00004-of-00006.safetensors: 60%|██████ | 3.02G/5.00G [08:37<00:40, 48.4MB/s][A[A[A
model-00001-of-00006.safetensors: 67%|██████▋ | 3.33G/4.97G [08:37<00:23, 69.9MB/s][A
model-00004-of-00006.safetensors: 61%|██████ | 3.04G/5.00G [08:37<00:36, 53.7MB/s][A[A[A
model-00004-of-00006.safetensors: 61%|██████ | 3.06G/5.00G [08:37<00:34, 57.1MB/s][A[A[A
model-00001-of-00006.safetensors: 67%|██████▋ | 3.34G/4.97G [08:37<00:36, 44.3MB/s][A
model-00001-of-00006.safetensors: 68%|██████▊ | 3.36G/4.97G [08:38<00:31, 50.3MB/s][A
model-00004-of-00006.safetensors: 61%|██████▏ | 3.07G/5.00G [08:38<00:48, 40.1MB/s][A[A[A
model-00001-of-00006.safetensors: 68%|██████▊ | 3.38G/4.97G [08:38<00:29, 54.2MB/s][A
model-00004-of-00006.safetensors: 62%|██████▏ | 3.09G/5.00G [08:38<00:41, 45.6MB/s][A[A[A
model-00001-of-00006.safetensors: 68%|██████▊ | 3.39G/4.97G [08:38<00:27, 57.3MB/s][A
model-00004-of-00006.safetensors: 62%|██████▏ | 3.10G/5.00G [08:38<00:38, 49.1MB/s][A[A[A
model-00001-of-00006.safetensors: 69%|██████▊ | 3.41G/4.97G [08:38<00:24, 64.4MB/s][A
model-00004-of-00006.safetensors: 62%|██████▏ | 3.12G/5.00G [08:38<00:35, 53.4MB/s][A[A[A
model-00001-of-00006.safetensors: 69%|██████▉ | 3.42G/4.97G [08:39<00:23, 66.4MB/s][A
model-00004-of-00006.safetensors: 63%|██████▎ | 3.14G/5.00G [08:39<00:33, 56.2MB/s][A[A[A
model-00001-of-00006.safetensors: 69%|██████▉ | 3.44G/4.97G [08:39<00:22, 66.8MB/s][A
model-00004-of-00006.safetensors: 63%|██████▎ | 3.15G/5.00G [08:39<00:33, 55.8MB/s][A[A[A
model-00001-of-00006.safetensors: 70%|██████▉ | 3.46G/4.97G [08:39<00:24, 62.1MB/s][A
model-00004-of-00006.safetensors: 63%|██████▎ | 3.17G/5.00G [08:39<00:31, 58.3MB/s][A[A[A
model-00001-of-00006.safetensors: 70%|██████▉ | 3.47G/4.97G [08:39<00:23, 62.8MB/s][A
model-00004-of-00006.safetensors: 64%|██████▎ | 3.18G/5.00G [08:39<00:29, 61.0MB/s][A[A[A
model-00001-of-00006.safetensors: 70%|███████ | 3.49G/4.97G [08:40<00:21, 68.8MB/s][A
model-00004-of-00006.safetensors: 64%|██████▍ | 3.20G/5.00G [08:40<00:28, 63.8MB/s][A[A[A
model-00004-of-00006.safetensors: 64%|██████▍ | 3.22G/5.00G [08:40<00:29, 61.5MB/s][A[A[A
model-00004-of-00006.safetensors: 65%|██████▍ | 3.23G/5.00G [08:40<00:28, 62.8MB/s][A[A[A
model-00001-of-00006.safetensors: 71%|███████ | 3.50G/4.97G [08:40<00:37, 38.8MB/s][A
model-00004-of-00006.safetensors: 65%|██████▍ | 3.25G/5.00G [08:40<00:26, 66.9MB/s][A[A[A
model-00001-of-00006.safetensors: 71%|███████ | 3.52G/4.97G [08:41<00:32, 44.8MB/s][A
model-00004-of-00006.safetensors: 65%|██████▌ | 3.26G/5.00G [08:41<00:27, 63.1MB/s][A[A[A
model-00001-of-00006.safetensors: 71%|███████ | 3.54G/4.97G [08:41<00:29, 48.7MB/s][A
model-00004-of-00006.safetensors: 66%|██████▌ | 3.28G/5.00G [08:41<00:24, 69.3MB/s][A[A[A
model-00004-of-00006.safetensors: 66%|██████▌ | 3.30G/5.00G [08:41<00:24, 70.9MB/s][A[A[A
model-00001-of-00006.safetensors: 72%|███████▏ | 3.55G/4.97G [08:41<00:28, 49.6MB/s][A
model-00004-of-00006.safetensors: 66%|██████▌ | 3.31G/5.00G [08:41<00:24, 67.7MB/s][A[A[A
model-00001-of-00006.safetensors: 72%|███████▏ | 3.57G/4.97G [08:41<00:26, 53.0MB/s][A
model-00004-of-00006.safetensors: 67%|██████▋ | 3.33G/5.00G [08:42<00:24, 68.7MB/s][A[A[A
model-00001-of-00006.safetensors: 72%|███████▏ | 3.58G/4.97G [08:42<00:23, 57.8MB/s][A
model-00004-of-00006.safetensors: 67%|██████▋ | 3.34G/5.00G [08:42<00:23, 70.7MB/s][A[A[A
model-00001-of-00006.safetensors: 72%|███████▏ | 3.60G/4.97G [08:42<00:22, 60.4MB/s][A
model-00004-of-00006.safetensors: 67%|██████▋ | 3.36G/5.00G [08:42<00:22, 71.8MB/s][A[A[A
model-00001-of-00006.safetensors: 73%|███████▎ | 3.62G/4.97G [08:42<00:21, 62.3MB/s][A
model-00004-of-00006.safetensors: 68%|██████▊ | 3.38G/5.00G [08:42<00:23, 69.7MB/s][A[A[A
model-00001-of-00006.safetensors: 73%|███████▎ | 3.63G/4.97G [08:42<00:21, 62.2MB/s][A
model-00004-of-00006.safetensors: 68%|██████▊ | 3.39G/5.00G [08:43<00:24, 65.9MB/s][A[A[A
model-00001-of-00006.safetensors: 73%|███████▎ | 3.65G/4.97G [08:43<00:21, 62.6MB/s][A
model-00001-of-00006.safetensors: 74%|███████▍ | 3.66G/4.97G [08:43<00:19, 67.6MB/s][A
model-00004-of-00006.safetensors: 68%|██████▊ | 3.41G/5.00G [08:43<00:25, 62.6MB/s][A[A[A
model-00001-of-00006.safetensors: 74%|███████▍ | 3.68G/4.97G [08:43<00:19, 65.7MB/s][A
model-00004-of-00006.safetensors: 68%|██████▊ | 3.42G/5.00G [08:43<00:27, 58.3MB/s][A[A[A
model-00001-of-00006.safetensors: 74%|███████▍ | 3.70G/4.97G [08:43<00:19, 65.7MB/s][A
model-00004-of-00006.safetensors: 69%|██████▉ | 3.44G/5.00G [08:43<00:27, 56.0MB/s][A[A[A
model-00001-of-00006.safetensors: 75%|███████▍ | 3.71G/4.97G [08:44<00:22, 56.1MB/s][A
model-00004-of-00006.safetensors: 69%|██████▉ | 3.46G/5.00G [08:44<00:27, 56.0MB/s][A[A[A
model-00001-of-00006.safetensors: 75%|███████▌ | 3.73G/4.97G [08:44<00:21, 57.2MB/s][A
model-00004-of-00006.safetensors: 69%|██████▉ | 3.47G/5.00G [08:44<00:27, 55.1MB/s][A[A[A
model-00001-of-00006.safetensors: 75%|███████▌ | 3.74G/4.97G [08:44<00:20, 59.9MB/s][A
model-00004-of-00006.safetensors: 70%|██████▉ | 3.49G/5.00G [08:44<00:25, 59.4MB/s][A[A[A
model-00004-of-00006.safetensors: 70%|███████ | 3.50G/5.00G [08:44<00:23, 62.6MB/s][A[A[A
model-00004-of-00006.safetensors: 70%|███████ | 3.52G/5.00G [08:45<00:24, 61.0MB/s][A[A[A
model-00001-of-00006.safetensors: 76%|███████▌ | 3.76G/4.97G [08:45<00:30, 39.7MB/s][A
model-00001-of-00006.safetensors: 76%|███████▌ | 3.78G/4.97G [08:45<00:31, 37.9MB/s][A
model-00004-of-00006.safetensors: 71%|███████ | 3.54G/5.00G [08:46<00:40, 36.2MB/s][A[A[A
model-00001-of-00006.safetensors: 76%|███████▋ | 3.79G/4.97G [08:46<00:30, 39.0MB/s][A
model-00001-of-00006.safetensors: 77%|███████▋ | 3.81G/4.97G [08:46<00:28, 40.2MB/s][A
model-00004-of-00006.safetensors: 71%|███████ | 3.55G/5.00G [08:46<00:49, 29.1MB/s][A[A[A
model-00004-of-00006.safetensors: 71%|███████▏ | 3.57G/5.00G [08:47<00:40, 35.1MB/s][A[A[A
model-00001-of-00006.safetensors: 77%|███████▋ | 3.82G/4.97G [08:47<00:32, 34.8MB/s][A
model-00004-of-00006.safetensors: 72%|███████▏ | 3.58G/5.00G [08:47<00:35, 40.3MB/s][A[A[A
model-00001-of-00006.safetensors: 77%|███████▋ | 3.84G/4.97G [08:47<00:29, 38.4MB/s][A
model-00001-of-00006.safetensors: 78%|███████▊ | 3.86G/4.97G [08:47<00:27, 41.1MB/s][A
model-00004-of-00006.safetensors: 72%|███████▏ | 3.60G/5.00G [08:47<00:37, 37.7MB/s][A[A[A
model-00004-of-00006.safetensors: 72%|███████▏ | 3.62G/5.00G [08:48<00:32, 42.0MB/s][A[A[A
model-00001-of-00006.safetensors: 78%|███████▊ | 3.87G/4.97G [08:48<00:28, 38.1MB/s][A
model-00004-of-00006.safetensors: 73%|███████▎ | 3.63G/5.00G [08:48<00:30, 45.5MB/s][A[A[A
model-00001-of-00006.safetensors: 78%|███████▊ | 3.89G/4.97G [08:48<00:25, 42.7MB/s][A
model-00004-of-00006.safetensors: 73%|███████▎ | 3.65G/5.00G [08:48<00:26, 51.2MB/s][A[A[A
model-00001-of-00006.safetensors: 79%|███████▊ | 3.90G/4.97G [08:48<00:22, 46.8MB/s][A
model-00004-of-00006.safetensors: 73%|███████▎ | 3.66G/5.00G [08:49<00:27, 48.1MB/s][A[A[A
model-00001-of-00006.safetensors: 79%|███████▉ | 3.92G/4.97G [08:49<00:19, 52.8MB/s][A
model-00004-of-00006.safetensors: 74%|███████▎ | 3.68G/5.00G [08:49<00:24, 53.5MB/s][A[A[A
model-00001-of-00006.safetensors: 79%|███████▉ | 3.94G/4.97G [08:49<00:18, 56.6MB/s][A
model-00004-of-00006.safetensors: 74%|███████▍ | 3.70G/5.00G [08:49<00:23, 55.5MB/s][A[A[A
model-00001-of-00006.safetensors: 80%|███████▉ | 3.95G/4.97G [08:49<00:18, 54.3MB/s][A
model-00004-of-00006.safetensors: 74%|███████▍ | 3.71G/5.00G [08:49<00:20, 62.4MB/s][A[A[A
model-00004-of-00006.safetensors: 75%|███████▍ | 3.73G/5.00G [08:49<00:19, 66.7MB/s][A[A[A
model-00001-of-00006.safetensors: 80%|███████▉ | 3.97G/4.97G [08:50<00:20, 48.4MB/s][A
model-00001-of-00006.safetensors: 80%|████████ | 3.98G/4.97G [08:50<00:19, 51.1MB/s][A
model-00004-of-00006.safetensors: 75%|███████▍ | 3.74G/5.00G [08:50<00:22, 55.2MB/s][A[A[A
model-00001-of-00006.safetensors: 81%|████████ | 4.00G/4.97G [08:50<00:16, 56.9MB/s][A
model-00004-of-00006.safetensors: 75%|███████▌ | 3.76G/5.00G [08:50<00:21, 58.5MB/s][A[A[A
model-00001-of-00006.safetensors: 81%|████████ | 4.02G/4.97G [08:50<00:16, 57.1MB/s][A
model-00001-of-00006.safetensors: 81%|████████ | 4.03G/4.97G [08:51<00:15, 60.9MB/s][A
model-00004-of-00006.safetensors: 76%|███████▌ | 3.78G/5.00G [08:51<00:26, 47.0MB/s][A[A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.05G/4.97G [08:51<00:14, 64.4MB/s][A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.06G/4.97G [08:51<00:14, 63.9MB/s][A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.08G/4.97G [08:51<00:12, 68.9MB/s][A
model-00004-of-00006.safetensors: 76%|███████▌ | 3.79G/5.00G [08:51<00:34, 34.6MB/s][A[A[A
model-00001-of-00006.safetensors: 82%|████████▏ | 4.10G/4.97G [08:51<00:12, 70.6MB/s][A
model-00004-of-00006.safetensors: 76%|███████▌ | 3.81G/5.00G [08:52<00:29, 40.4MB/s][A[A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.11G/4.97G [08:52<00:12, 66.5MB/s][A
model-00004-of-00006.safetensors: 76%|███████▋ | 3.82G/5.00G [08:52<00:28, 41.2MB/s][A[A[A
model-00004-of-00006.safetensors: 77%|███████▋ | 3.84G/5.00G [08:52<00:23, 49.2MB/s][A[A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.13G/4.97G [08:52<00:16, 52.2MB/s][A
model-00004-of-00006.safetensors: 77%|███████▋ | 3.86G/5.00G [08:52<00:22, 50.5MB/s][A[A[A
model-00001-of-00006.safetensors: 83%|████████▎ | 4.14G/4.97G [08:52<00:15, 54.2MB/s][A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.16G/4.97G [08:53<00:13, 59.6MB/s][A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.18G/4.97G [08:53<00:12, 61.2MB/s][A
model-00004-of-00006.safetensors: 77%|███████▋ | 3.87G/5.00G [08:53<00:28, 40.2MB/s][A[A[A
model-00001-of-00006.safetensors: 84%|████████▍ | 4.19G/4.97G [08:53<00:12, 62.8MB/s][A
model-00004-of-00006.safetensors: 78%|███████▊ | 3.89G/5.00G [08:53<00:23, 47.7MB/s][A[A[A
model-00001-of-00006.safetensors: 85%|████████▍ | 4.21G/4.97G [08:53<00:11, 66.1MB/s][A
model-00004-of-00006.safetensors: 78%|███████▊ | 3.90G/5.00G [08:53<00:21, 51.5MB/s][A[A[A
model-00004-of-00006.safetensors: 78%|███████▊ | 3.92G/5.00G [08:54<00:19, 55.4MB/s][A[A[A
model-00001-of-00006.safetensors: 85%|████████▌ | 4.22G/4.97G [08:54<00:14, 50.7MB/s][A
model-00004-of-00006.safetensors: 79%|███████▊ | 3.94G/5.00G [08:54<00:17, 60.9MB/s][A[A[A
model-00001-of-00006.safetensors: 85%|████████▌ | 4.24G/4.97G [08:54<00:12, 56.7MB/s][A
model-00004-of-00006.safetensors: 79%|███████▉ | 3.95G/5.00G [08:54<00:16, 63.6MB/s][A[A[A
model-00004-of-00006.safetensors: 79%|███████▉ | 3.97G/5.00G [08:54<00:15, 65.5MB/s][A[A[A
model-00001-of-00006.safetensors: 86%|████████▌ | 4.26G/4.97G [08:54<00:14, 50.4MB/s][A
model-00004-of-00006.safetensors: 80%|███████▉ | 3.98G/5.00G [08:55<00:16, 62.4MB/s][A[A[A
model-00001-of-00006.safetensors: 86%|████████▌ | 4.27G/4.97G [08:55<00:12, 53.9MB/s][A
model-00004-of-00006.safetensors: 80%|████████ | 4.00G/5.00G [08:55<00:15, 63.7MB/s][A[A[A
model-00001-of-00006.safetensors: 86%|████████▋ | 4.29G/4.97G [08:55<00:11, 58.1MB/s][A
model-00004-of-00006.safetensors: 80%|████████ | 4.02G/5.00G [08:55<00:15, 64.2MB/s][A[A[A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.30G/4.97G [08:55<00:12, 54.9MB/s][A
model-00004-of-00006.safetensors: 81%|████████ | 4.03G/5.00G [08:55<00:14, 65.8MB/s][A[A[A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.32G/4.97G [08:55<00:11, 57.2MB/s][A
model-00004-of-00006.safetensors: 81%|████████ | 4.05G/5.00G [08:56<00:15, 60.6MB/s][A[A[A
model-00001-of-00006.safetensors: 87%|████████▋ | 4.34G/4.97G [08:56<00:10, 59.5MB/s][A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.35G/4.97G [08:56<00:10, 56.7MB/s][A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.37G/4.97G [08:56<00:09, 61.0MB/s][A
model-00004-of-00006.safetensors: 81%|████████▏ | 4.06G/5.00G [08:56<00:21, 43.6MB/s][A[A[A
model-00001-of-00006.safetensors: 88%|████████▊ | 4.38G/4.97G [08:57<00:09, 61.0MB/s][A
model-00001-of-00006.safetensors: 89%|████████▊ | 4.40G/4.97G [08:57<00:09, 62.4MB/s][A
model-00004-of-00006.safetensors: 82%|████████▏ | 4.08G/5.00G [08:57<00:23, 39.0MB/s][A[A[A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.42G/4.97G [08:57<00:08, 64.5MB/s][A
model-00004-of-00006.safetensors: 82%|████████▏ | 4.10G/5.00G [08:57<00:19, 45.6MB/s][A[A[A
model-00001-of-00006.safetensors: 89%|████████▉ | 4.43G/4.97G [08:57<00:08, 66.3MB/s][A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.45G/4.97G [08:58<00:08, 61.7MB/s][A
model-00001-of-00006.safetensors: 90%|████████▉ | 4.46G/4.97G [08:58<00:07, 64.0MB/s][A
model-00004-of-00006.safetensors: 82%|████████▏ | 4.11G/5.00G [08:58<00:27, 32.2MB/s][A[A[A
model-00004-of-00006.safetensors: 83%|████████▎ | 4.13G/5.00G [08:58<00:22, 38.8MB/s][A[A[A
model-00001-of-00006.safetensors: 90%|█████████ | 4.48G/4.97G [08:58<00:08, 58.9MB/s][A
model-00004-of-00006.safetensors: 83%|████████▎ | 4.14G/5.00G [08:58<00:19, 43.7MB/s][A[A[A
model-00001-of-00006.safetensors: 90%|█████████ | 4.49G/4.97G [08:58<00:09, 51.1MB/s][A
model-00001-of-00006.safetensors: 91%|█████████ | 4.50G/4.97G [08:58<00:08, 53.6MB/s][A
model-00004-of-00006.safetensors: 83%|████████▎ | 4.16G/5.00G [08:59<00:18, 44.5MB/s][A[A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.51G/4.97G [08:59<00:07, 57.8MB/s][A
model-00004-of-00006.safetensors: 84%|████████▎ | 4.18G/5.00G [08:59<00:16, 48.7MB/s][A[A[A
model-00001-of-00006.safetensors: 91%|█████████ | 4.53G/4.97G [08:59<00:07, 56.7MB/s][A
model-00004-of-00006.safetensors: 84%|████████▍ | 4.19G/5.00G [08:59<00:15, 52.4MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.54G/4.97G [08:59<00:07, 58.8MB/s][A
model-00004-of-00006.safetensors: 84%|████████▍ | 4.21G/5.00G [08:59<00:13, 58.4MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.56G/4.97G [08:59<00:06, 61.2MB/s][A
model-00004-of-00006.safetensors: 84%|████████▍ | 4.22G/5.00G [09:00<00:12, 62.7MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.58G/4.97G [09:00<00:06, 64.0MB/s][A
model-00004-of-00006.safetensors: 85%|████████▍ | 4.24G/5.00G [09:00<00:13, 57.4MB/s][A[A[A
model-00001-of-00006.safetensors: 92%|█████████▏| 4.59G/4.97G [09:00<00:05, 64.8MB/s][A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.61G/4.97G [09:00<00:05, 65.0MB/s][A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.62G/4.97G [09:00<00:05, 66.2MB/s][A
model-00004-of-00006.safetensors: 85%|████████▌ | 4.26G/5.00G [09:00<00:17, 42.9MB/s][A[A[A
model-00001-of-00006.safetensors: 93%|█████████▎| 4.64G/4.97G [09:01<00:05, 64.7MB/s][A
model-00004-of-00006.safetensors: 85%|████████▌ | 4.27G/5.00G [09:01<00:15, 48.0MB/s][A[A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.66G/4.97G [09:01<00:04, 64.1MB/s][A
model-00004-of-00006.safetensors: 86%|████████▌ | 4.29G/5.00G [09:01<00:13, 52.4MB/s][A[A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.67G/4.97G [09:01<00:04, 65.2MB/s][A
model-00004-of-00006.safetensors: 86%|████████▌ | 4.30G/5.00G [09:01<00:12, 57.6MB/s][A[A[A
model-00001-of-00006.safetensors: 94%|█████████▍| 4.69G/4.97G [09:01<00:04, 66.6MB/s][A
model-00004-of-00006.safetensors: 86%|████████▋ | 4.32G/5.00G [09:02<00:12, 55.1MB/s][A[A[A
model-00001-of-00006.safetensors: 95%|█████████▍| 4.70G/4.97G [09:02<00:04, 56.0MB/s][A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.33G/5.00G [09:02<00:14, 45.4MB/s][A[A[A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.34G/5.00G [09:02<00:14, 45.5MB/s][A[A[A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.72G/4.97G [09:02<00:04, 57.5MB/s][A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.35G/5.00G [09:02<00:12, 51.6MB/s][A[A[A
model-00001-of-00006.safetensors: 95%|█████████▌| 4.74G/4.97G [09:02<00:03, 58.2MB/s][A
model-00004-of-00006.safetensors: 87%|████████▋ | 4.37G/5.00G [09:02<00:11, 56.6MB/s][A[A[A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.75G/4.97G [09:03<00:03, 56.6MB/s][A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.38G/5.00G [09:03<00:10, 60.7MB/s][A[A[A
model-00001-of-00006.safetensors: 96%|█████████▌| 4.77G/4.97G [09:03<00:03, 61.8MB/s][A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.40G/5.00G [09:03<00:09, 61.7MB/s][A[A[A
model-00001-of-00006.safetensors: 96%|█████████▋| 4.78G/4.97G [09:03<00:03, 57.4MB/s][A
model-00004-of-00006.safetensors: 88%|████████▊ | 4.42G/5.00G [09:03<00:09, 61.6MB/s][A[A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.80G/4.97G [09:03<00:02, 59.4MB/s][A
model-00004-of-00006.safetensors: 89%|████████▊ | 4.43G/5.00G [09:03<00:08, 64.3MB/s][A[A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.82G/4.97G [09:04<00:02, 61.6MB/s][A
model-00004-of-00006.safetensors: 89%|████████▉ | 4.45G/5.00G [09:04<00:08, 66.0MB/s][A[A[A
model-00004-of-00006.safetensors: 89%|████████▉ | 4.46G/5.00G [09:04<00:07, 70.0MB/s][A[A[A
model-00001-of-00006.safetensors: 97%|█████████▋| 4.83G/4.97G [09:04<00:02, 62.4MB/s][A
model-00004-of-00006.safetensors: 90%|████████▉ | 4.48G/5.00G [09:04<00:07, 70.9MB/s][A[A[A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.85G/4.97G [09:04<00:01, 65.5MB/s][A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.86G/4.97G [09:04<00:01, 66.1MB/s][A
model-00004-of-00006.safetensors: 90%|████████▉ | 4.50G/5.00G [09:04<00:07, 63.5MB/s][A[A[A
model-00001-of-00006.safetensors: 98%|█████████▊| 4.88G/4.97G [09:05<00:01, 67.6MB/s][A
model-00004-of-00006.safetensors: 90%|█████████ | 4.51G/5.00G [09:05<00:07, 62.9MB/s][A[A[A
model-00001-of-00006.safetensors: 99%|█████████▊| 4.90G/4.97G [09:05<00:01, 64.8MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.53G/5.00G [09:05<00:07, 65.3MB/s][A[A[A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.91G/4.97G [09:05<00:00, 71.2MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.54G/5.00G [09:05<00:07, 64.5MB/s][A[A[A
model-00001-of-00006.safetensors: 99%|█████████▉| 4.93G/4.97G [09:05<00:00, 74.8MB/s][A
model-00004-of-00006.safetensors: 91%|█████████ | 4.56G/5.00G [09:05<00:07, 62.6MB/s][A[A[A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.94G/4.97G [09:06<00:00, 63.0MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.58G/5.00G [09:06<00:06, 66.2MB/s][A[A[A
model-00001-of-00006.safetensors: 100%|█████████▉| 4.96G/4.97G [09:06<00:00, 64.4MB/s][A
model-00004-of-00006.safetensors: 92%|█████████▏| 4.59G/5.00G [09:06<00:05, 69.2MB/s][A[A[A
model-00001-of-00006.safetensors: 100%|██████████| 4.97G/4.97G [09:06<00:00, 9.09MB/s]
Upload 132 LFS files:   1%|          | 1/132 [09:06<19:53:25, 546.60s/it]
[... tqdm progress-bar redraws elided: model-00004-of-00006.safetensors 92%→100% of 5.00G ...]
model-00004-of-00006.safetensors: 100%|██████████| 5.00G/5.00G [09:13<00:00, 9.03MB/s]
Upload 132 LFS files:   2%|▏         | 2/132 [09:13<8:16:53, 229.34s/it]
Upload 132 LFS files: 100%|██████████| 132/132 [09:13<00:00, 4.20s/it]
[rank72]:[E219 22:13:07.453120316 ProcessGroupNCCL.cpp:616] [Rank 72] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
[rank72]:[E219 22:13:07.484236333 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 72] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank90]:[E219 22:13:08.693468178 ProcessGroupNCCL.cpp:616] [Rank 90] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600031 milliseconds before timing out.
[rank100]:[E219 22:13:07.489081001 ProcessGroupNCCL.cpp:616] [Rank 100] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600032 milliseconds before timing out.
[rank95]:[E219 22:13:08.704044233 ProcessGroupNCCL.cpp:616] [Rank 95] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank102]:[E219 22:13:08.501247116 ProcessGroupNCCL.cpp:616] [Rank 102] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600009 milliseconds before timing out.
[rank113]:[E219 22:13:07.666891442 ProcessGroupNCCL.cpp:616] [Rank 113] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[rank103]:[E219 22:13:07.498838580 ProcessGroupNCCL.cpp:616] [Rank 103] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600009 milliseconds before timing out.
[rank97]:[E219 22:13:07.465755678 ProcessGroupNCCL.cpp:616] [Rank 97] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600050 milliseconds before timing out.
[rank102]:[E219 22:13:08.535471316 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 102] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank33]:[E219 22:13:07.516972831 ProcessGroupNCCL.cpp:616] [Rank 33] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600014 milliseconds before timing out.
[rank9]:[E219 22:13:07.408800414 ProcessGroupNCCL.cpp:616] [Rank 9] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600018 milliseconds before timing out.
[rank26]:[E219 22:13:08.608415336 ProcessGroupNCCL.cpp:616] [Rank 26] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank54]:[E219 22:13:07.688857534 ProcessGroupNCCL.cpp:616] [Rank 54] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[rank89]:[E219 22:13:08.693462988 ProcessGroupNCCL.cpp:616] [Rank 89] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank93]:[E219 22:13:08.730233627 ProcessGroupNCCL.cpp:616] [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank101]:[E219 22:13:07.486113816 ProcessGroupNCCL.cpp:616] [Rank 101] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
[rank99]:[E219 22:13:07.482167457 ProcessGroupNCCL.cpp:616] [Rank 99] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank94]:[E219 22:13:08.695299865 ProcessGroupNCCL.cpp:616] [Rank 94] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank98]:[E219 22:13:07.447880646 ProcessGroupNCCL.cpp:616] [Rank 98] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
[rank79]:[E219 22:13:07.503623624 ProcessGroupNCCL.cpp:616] [Rank 79] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600022 milliseconds before timing out.
[rank35]:[E219 22:13:07.512093026 ProcessGroupNCCL.cpp:616] [Rank 35] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600006 milliseconds before timing out.
[rank116]:[E219 22:13:07.711382787 ProcessGroupNCCL.cpp:616] [Rank 116] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600007 milliseconds before timing out.
[rank92]:[E219 22:13:08.695542377 ProcessGroupNCCL.cpp:616] [Rank 92] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank52]:[E219 22:13:07.684379700 ProcessGroupNCCL.cpp:616] [Rank 52] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600002 milliseconds before timing out.
[rank90]:[E219 22:13:08.737912920 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 90] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank95]:[E219 22:13:08.738863970 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 95] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank12]:[E219 22:13:08.491310541 ProcessGroupNCCL.cpp:616] [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16647, OpType=ALLREDUCE, NumelIn=374936232, NumelOut=374936232, Timeout(ms)=600000) ran for 600000 milliseconds before timing out.
[rank28]:[E219 22:13:08.604565813 ProcessGroupNCCL.cpp:616] [Rank 28] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank76]:[E219 22:13:07.511376617 ProcessGroupNCCL.cpp:616] [Rank 76] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600025 milliseconds before timing out.
[rank75]:[E219 22:13:07.485914089 ProcessGroupNCCL.cpp:616] [Rank 75] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600023 milliseconds before timing out.
[rank103]:[E219 22:13:08.562761450 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 103] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank28]:[E219 22:13:08.668543362 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 28] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank77]:[E219 22:13:07.490970031 ProcessGroupNCCL.cpp:616] [Rank 77] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600037 milliseconds before timing out.
[rank100]:[E219 22:13:08.577519574 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 100] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank78]:[E219 22:13:08.533572167 ProcessGroupNCCL.cpp:616] [Rank 78] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600002 milliseconds before timing out.
[rank97]:[E219 22:13:08.569334180 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 97] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank9]:[E219 22:13:08.539378503 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 9] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank25]:[E219 22:13:08.619403381 ProcessGroupNCCL.cpp:616] [Rank 25] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank93]:[E219 22:13:08.765991643 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 93] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank30]:[E219 22:13:08.616212666 ProcessGroupNCCL.cpp:616] [Rank 30] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[rank98]:[E219 22:13:08.572896834 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 98] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank33]:[E219 22:13:08.635150984 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 33] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank35]:[E219 22:13:08.635160835 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 35] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank91]:[E219 22:13:08.736748819 ProcessGroupNCCL.cpp:616] [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank99]:[E219 22:13:08.578168464 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 99] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank79]:[E219 22:13:08.599771643 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 79] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank113]:[E219 22:13:08.803532034 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 113] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank94]:[E219 22:13:08.737904389 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 94] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank116]:[E219 22:13:08.805542311 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 116] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank31]:[E219 22:13:08.606636203 ProcessGroupNCCL.cpp:616] [Rank 31] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
[rank101]:[E219 22:13:08.588133035 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 101] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank52]:[E219 22:13:08.817452485 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 52] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank54]:[E219 22:13:08.817464826 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 54] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank77]:[E219 22:13:08.609977561 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 77] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank26]:[E219 22:13:08.701363008 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 26] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank27]:[E219 22:13:08.611026987 ProcessGroupNCCL.cpp:616] [Rank 27] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
[rank32]:[E219 22:13:07.510674869 ProcessGroupNCCL.cpp:616] [Rank 32] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600015 milliseconds before timing out.
[rank75]:[E219 22:13:08.606305978 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 75] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank88]:[E219 22:13:08.697164082 ProcessGroupNCCL.cpp:616] [Rank 88] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
[rank73]:[E219 22:13:08.533554676 ProcessGroupNCCL.cpp:616] [Rank 73] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600001 milliseconds before timing out.
[rank89]:[E219 22:13:08.760884755 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 89] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank29]:[E219 22:13:08.618988737 ProcessGroupNCCL.cpp:616] [Rank 29] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600006 milliseconds before timing out.
[rank76]:[E219 22:13:08.621385486 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 76] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank12]:[E219 22:13:08.583199515 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 12] Exception (either an error or timeout) detected by watchdog at work: 16647, last enqueued NCCL work: 16648, last completed NCCL work: 16646.
[rank48]:[E219 22:13:08.736780103 ProcessGroupNCCL.cpp:616] [Rank 48] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600038 milliseconds before timing out.
[rank78]:[E219 22:13:08.627963284 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 78] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank91]:[E219 22:13:08.769649295 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 91] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank92]:[E219 22:13:08.777118866 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 92] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank31]:[E219 22:13:08.719295845 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 31] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank25]:[E219 22:13:08.721398997 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 25] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank30]:[E219 22:13:08.722740914 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 30] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank27]:[E219 22:13:08.724851886 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 27] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank73]:[E219 22:13:08.655370805 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 73] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank29]:[E219 22:13:08.732114226 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 29] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank7]:[E219 22:13:08.596501525 ProcessGroupNCCL.cpp:616] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600052 milliseconds before timing out.
[rank96]:[E219 22:13:07.466564150 ProcessGroupNCCL.cpp:616] [Rank 96] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
[rank32]:[E219 22:13:08.723434007 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 32] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank88]:[E219 22:13:08.789156448 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 88] Exception (either an error or timeout) detected by watchdog at work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank48]:[E219 22:13:08.906157318 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 48] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank6]:[E219 22:13:08.567604974 ProcessGroupNCCL.cpp:616] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600026 milliseconds before timing out.
[rank1]:[E219 22:13:08.600065134 ProcessGroupNCCL.cpp:616] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600053 milliseconds before timing out.
[rank2]:[E219 22:13:08.602781040 ProcessGroupNCCL.cpp:616] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600045 milliseconds before timing out.
[rank7]:[E219 22:13:08.757727889 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 7] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank3]:[E219 22:13:08.594944020 ProcessGroupNCCL.cpp:616] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600056 milliseconds before timing out.
[rank24]:[E219 22:13:08.619992645 ProcessGroupNCCL.cpp:616] [Rank 24] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
[rank96]:[E219 22:13:08.746906085 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 96] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank6]:[E219 22:13:08.781483516 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 6] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank4]:[E219 22:13:08.676892544 ProcessGroupNCCL.cpp:616] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600043 milliseconds before timing out.
[rank1]:[E219 22:13:08.834597622 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 1] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank3]:[E219 22:13:08.846035063 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 3] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank2]:[E219 22:13:08.845868153 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 2] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank24]:[E219 22:13:08.923910883 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 24] Exception (either an error or timeout) detected by watchdog at work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank4]:[E219 22:13:08.861984991 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 4] Exception (either an error or timeout) detected by watchdog at work: 16645, last enqueued NCCL work: 16649, last completed NCCL work: 16644.
[rank88]:[E219 22:13:09.946238477 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 88] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank88]:[E219 22:13:09.946267108 ProcessGroupNCCL.cpp:630] [Rank 88] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank88]:[E219 22:13:09.946274029 ProcessGroupNCCL.cpp:636] [Rank 88] To avoid data inconsistency, we are taking the entire process down.
[rank88]:[E219 22:13:09.060297120 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 88] Process group watchdog thread terminated with exception: [Rank 88] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7ecb48cc1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7ecafe02a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7ecafe031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7ecafe03361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7ecb48e1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7ecb4d694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7ecb4d726850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 88] Process group watchdog thread terminated with exception: [Rank 88] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7ecb48cc1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7ecafe02a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7ecafe031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7ecafe03361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7ecb48e1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7ecb4d694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7ecb4d726850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7ecb48cc1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7ecafdca071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7ecb48e1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7ecb4d694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7ecb4d726850 in /lib/x86_64-linux-gnu/libc.so.6)
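
Every failure in this log is the same WorkNCCL watchdog timeout at Timeout(ms)=600000, the 10-minute default limit on each collective. If a job legitimately needs longer (for example, one rank stalls on checkpointing while the others wait in an allreduce), the limit is configurable when the process group is created. A minimal sketch, assuming torch.distributed with the NCCL backend as used by this job:

import datetime
import torch.distributed as dist

# Raise the per-collective watchdog timeout from the 10-minute default.
# Note: this only helps if the collective is slow but alive; a dead or
# desynchronized rank will still hang, just 30 minutes later.
dist.init_process_group(backend="nccl", timeout=datetime.timedelta(minutes=30))
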
[rank24]:[E219 22:13:09.990733932 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 24] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank24]:[E219 22:13:09.990756843 ProcessGroupNCCL.cpp:630] [Rank 24] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank24]:[E219 22:13:09.990761704 ProcessGroupNCCL.cpp:636] [Rank 24] To avoid data inconsistency, we are taking the entire process down.
[rank24]:[E219 22:13:09.054948084 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 24] Process group watchdog thread terminated with exception: [Rank 24] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d87b636c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7d876ba2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7d876ba31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7d876ba3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7d87b67635c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7d87bae94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7d87baf26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 24] Process group watchdog thread terminated with exception: [Rank 24] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d87b636c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7d876ba2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7d876ba31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7d876ba3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7d87b67635c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7d87bae94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7d87baf26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7d87b636c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7d876b6a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7d87b67635c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7d87bae94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7d87baf26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank96]:[E219 22:13:09.197374899 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 96] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank96]:[E219 22:13:09.197399809 ProcessGroupNCCL.cpp:630] [Rank 96] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank96]:[E219 22:13:09.197405329 ProcessGroupNCCL.cpp:636] [Rank 96] To avoid data inconsistency, we are taking the entire process down.
[rank96]:[E219 22:13:09.259308779 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 96] Process group watchdog thread terminated with exception: [Rank 96] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b8cff0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b8cb442a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b8cb4431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b8cb443361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b8cffc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b8d03a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b8d03b26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 96] Process group watchdog thread terminated with exception: [Rank 96] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b8cff0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b8cb442a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b8cb4431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b8cb443361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b8cffc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b8d03a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b8d03b26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b8cff0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7b8cb40a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7b8cffc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7b8d03a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7b8d03b26850 in /lib/x86_64-linux-gnu/libc.so.6)
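
Note that the ranks are not all stuck on the same collective: ranks 1-7, 48, and 96 time out on SeqNum=16645, ranks 24-31 and 73 on 16646, ranks 88-95 on the one-element allreduce 16648, and some ranks report last enqueued work 16649 while others stop at 16648. That pattern points to the ranks having desynchronized (or one rank having died mid-step) rather than a uniformly slow fabric. A gloo-backed monitored barrier is one way to identify which ranks have stopped responding; a sketch under that assumption (the helper name is ours, not from this job):

import datetime
import torch.distributed as dist

def find_stragglers(timeout_s: float = 60.0) -> None:
    # monitored_barrier is only implemented for the gloo backend, so run it
    # on a CPU-side process group created alongside the NCCL one.
    gloo_pg = dist.new_group(backend="gloo")
    try:
        dist.monitored_barrier(group=gloo_pg,
                               timeout=datetime.timedelta(seconds=timeout_s))
    except RuntimeError as err:
        # Rank 0 raises an error naming the ranks that never checked in.
        print(f"[rank {dist.get_rank()}] barrier failed: {err}")
        raise
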
[rank25]:[E219 22:13:12.120140151 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 25] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank25]:[E219 22:13:12.120169262 ProcessGroupNCCL.cpp:630] [Rank 25] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank25]:[E219 22:13:12.120174703 ProcessGroupNCCL.cpp:636] [Rank 25] To avoid data inconsistency, we are taking the entire process down.
[rank25]:[E219 22:13:12.122051581 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 25] Process group watchdog thread terminated with exception: [Rank 25] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74ec250e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74ebda42a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74ebda431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74ebda43361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74ec258735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74ec29a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74ec29b26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 25] Process group watchdog thread terminated with exception: [Rank 25] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74ec250e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74ebda42a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74ebda431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74ebda43361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74ec258735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74ec29a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74ec29b26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74ec250e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x74ebda0a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x74ec258735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x74ec29a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x74ec29b26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank91]:[E219 22:13:12.255976531 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 91] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank91]:[E219 22:13:12.256024224 ProcessGroupNCCL.cpp:630] [Rank 91] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank91]:[E219 22:13:12.256031204 ProcessGroupNCCL.cpp:636] [Rank 91] To avoid data inconsistency, we are taking the entire process down.
[rank89]:[E219 22:13:12.286681232 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 89] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank89]:[E219 22:13:12.286707263 ProcessGroupNCCL.cpp:630] [Rank 89] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank89]:[E219 22:13:12.286713873 ProcessGroupNCCL.cpp:636] [Rank 89] To avoid data inconsistency, we are taking the entire process down.
[rank91]:[E219 22:13:12.293325540 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 91] Process group watchdog thread terminated with exception: [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74befc56c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74beb1c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74beb1c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74beb1c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74befd25c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74bf01094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74bf01126850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 91] Process group watchdog thread terminated with exception: [Rank 91] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74befc56c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74beb1c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74beb1c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74beb1c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74befd25c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74bf01094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74bf01126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74befc56c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x74beb18a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x74befd25c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x74bf01094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x74bf01126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank93]:[E219 22:13:12.306053768 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 93] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank93]:[E219 22:13:12.306078839 ProcessGroupNCCL.cpp:630] [Rank 93] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank93]:[E219 22:13:12.306084990 ProcessGroupNCCL.cpp:636] [Rank 93] To avoid data inconsistency, we are taking the entire process down.
[rank31]:[E219 22:13:12.225807299 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 31] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank31]:[E219 22:13:12.225833861 ProcessGroupNCCL.cpp:630] [Rank 31] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank31]:[E219 22:13:12.225839191 ProcessGroupNCCL.cpp:636] [Rank 31] To avoid data inconsistency, we are taking the entire process down.
[rank31]:[E219 22:13:12.227680517 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 31] Process group watchdog thread terminated with exception: [Rank 31] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7895dfac1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x789594e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x789594e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x789594e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7895dfc1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7895e4494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7895e4526850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 31] Process group watchdog thread terminated with exception: [Rank 31] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7895dfac1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x789594e2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x789594e31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x789594e3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7895dfc1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7895e4494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7895e4526850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7895dfac1446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x789594aa071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7895dfc1c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7895e4494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7895e4526850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank29]:[E219 22:13:12.240295346 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 29] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank29]:[E219 22:13:12.240316528 ProcessGroupNCCL.cpp:630] [Rank 29] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank29]:[E219 22:13:12.240321048 ProcessGroupNCCL.cpp:636] [Rank 29] To avoid data inconsistency, we are taking the entire process down.
[rank29]:[E219 22:13:12.242188206 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 29] Process group watchdog thread terminated with exception: [Rank 29] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600006 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x75f9d9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x75f98e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x75f98e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x75f98e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x75f9d96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x75f9dde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x75f9ddf26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 29] Process group watchdog thread terminated with exception: [Rank 29] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600006 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x75f9d9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x75f98e82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x75f98e831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x75f98e83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x75f9d96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x75f9dde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x75f9ddf26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x75f9d9593446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x75f98e4a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x75f9d96ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x75f9dde94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x75f9ddf26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank89]:[E219 22:13:12.326046337 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 89] Process group watchdog thread terminated with exception: [Rank 89] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77c7bd0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77c77242a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77c772431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77c77243361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x77c7bdc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x77c7c1a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x77c7c1b26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 89] Process group watchdog thread terminated with exception: [Rank 89] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77c7bd0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77c77242a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77c772431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77c77243361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x77c7bdc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x77c7c1a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x77c7c1b26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x77c7bd0e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x77c7720a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x77c7bdc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x77c7c1a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x77c7c1b26850 in /lib/x86_64-linux-gnu/libc.so.6)
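
The backtraces above show where the watchdog thread was when it fired, not which collective each rank was actually blocked in. Recent PyTorch builds can record the last N enqueued collectives per rank and dump that record when the watchdog trips ("flight recorder"); a sketch of the relevant knobs, which must be set before init_process_group creates the NCCL communicators (the TORCH_NCCL_* names assume torch >= 2.2 and may differ by version, so check the docs for the build in use):

import os

os.environ["NCCL_DEBUG"] = "INFO"                    # verbose NCCL-side logging
os.environ["TORCH_NCCL_TRACE_BUFFER_SIZE"] = "2000"  # ring buffer of recent collectives
os.environ["TORCH_NCCL_DUMP_ON_TIMEOUT"] = "1"       # dump the buffer when the watchdog fires
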
[rank95]:[E219 22:13:12.333954052 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 95] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank95]:[E219 22:13:12.333972783 ProcessGroupNCCL.cpp:630] [Rank 95] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank95]:[E219 22:13:12.333980913 ProcessGroupNCCL.cpp:636] [Rank 95] To avoid data inconsistency, we are taking the entire process down.
[rank27]:[E219 22:13:12.275522083 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 27] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank27]:[E219 22:13:12.275550555 ProcessGroupNCCL.cpp:630] [Rank 27] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank27]:[E219 22:13:12.275557195 ProcessGroupNCCL.cpp:636] [Rank 27] To avoid data inconsistency, we are taking the entire process down.
[rank27]:[E219 22:13:12.277437994 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 27] Process group watchdog thread terminated with exception: [Rank 27] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfa8a16c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7cfa3f82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7cfa3f831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7cfa3f83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7cfa8b25d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7cfa8ee94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7cfa8ef26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 27] Process group watchdog thread terminated with exception: [Rank 27] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600010 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfa8a16c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7cfa3f82a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7cfa3f831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7cfa3f83361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7cfa8b25d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7cfa8ee94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7cfa8ef26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7cfa8a16c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7cfa3f4a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7cfa8b25d5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7cfa8ee94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7cfa8ef26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank93]:[E219 22:13:12.393354778 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 93] Process group watchdog thread terminated with exception: [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b91b052a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b916582a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b9165831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b916583361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b91b0c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b91b4e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b91b4f26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 93] Process group watchdog thread terminated with exception: [Rank 93] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b91b052a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7b916582a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7b9165831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7b916583361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7b91b0c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7b91b4e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7b91b4f26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7b91b052a446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7b91654a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7b91b0c735c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7b91b4e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7b91b4f26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank26]:[E219 22:13:12.347706986 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 26] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank26]:[E219 22:13:12.347725577 ProcessGroupNCCL.cpp:630] [Rank 26] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank26]:[E219 22:13:12.347730737 ProcessGroupNCCL.cpp:636] [Rank 26] To avoid data inconsistency, we are taking the entire process down.
[rank28]:[E219 22:13:12.353057965 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 28] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank28]:[E219 22:13:12.353084017 ProcessGroupNCCL.cpp:630] [Rank 28] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank28]:[E219 22:13:12.353091157 ProcessGroupNCCL.cpp:636] [Rank 28] To avoid data inconsistency, we are taking the entire process down.
[rank28]:[E219 22:13:12.354993657 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 28] Process group watchdog thread terminated with exception: [Rank 28] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c64634e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c641882a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c6418831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c641883361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c64640585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c6467e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c6467f26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 28] Process group watchdog thread terminated with exception: [Rank 28] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c64634e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7c641882a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7c6418831bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7c641883361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7c64640585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7c6467e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7c6467f26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7c64634e7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7c64184a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7c64640585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7c6467e94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7c6467f26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank95]:[E219 22:13:12.461808889 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 95] Process group watchdog thread terminated with exception: [Rank 95] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74b4b9704446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74b46ea2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74b46ea31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74b46ea3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74b4b985f5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74b4be094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74b4be126850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 95] Process group watchdog thread terminated with exception: [Rank 95] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74b4b9704446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x74b46ea2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x74b46ea31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x74b46ea3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x74b4b985f5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x74b4be094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x74b4be126850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x74b4b9704446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x74b46e6a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x74b4b985f5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x74b4be094ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x74b4be126850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank26]:[E219 22:13:12.382843287 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 26] Process group watchdog thread terminated with exception: [Rank 26] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e587f8b7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e5834c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e5834c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e5834c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e587fa125c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e5884294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e5884326850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 26] Process group watchdog thread terminated with exception: [Rank 26] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600004 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e587f8b7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e5834c2a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e5834c31bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e5834c3361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e587fa125c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e5884294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e5884326850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e587f8b7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7e58348a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7e587fa125c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7e5884294ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7e5884326850 in /lib/x86_64-linux-gnu/libc.so.6)
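
Note: the "Timeout(ms)=600000" in the watchdog messages above is the default 10-minute per-collective timeout of the NCCL process group. If these ALLREDUCEs are slow but healthy rather than truly hung, the timeout can be raised where the process group is created. A minimal sketch, assuming the training script calls init_process_group itself (torchrun only supplies the rendezvous environment); the 30-minute value is illustrative, not taken from this job:

from datetime import timedelta

import torch.distributed as dist

# `timeout` is the knob behind "Timeout(ms)=600000" in the watchdog lines;
# the default is timedelta(minutes=10).
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))
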
[rank94]:[E219 22:13:12.496112818 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 94] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank94]:[E219 22:13:12.496138950 ProcessGroupNCCL.cpp:630] [Rank 94] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank94]:[E219 22:13:12.496143860 ProcessGroupNCCL.cpp:636] [Rank 94] To avoid data inconsistency, we are taking the entire process down.
[rank92]:[E219 22:13:12.496228024 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 92] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank92]:[E219 22:13:12.496252545 ProcessGroupNCCL.cpp:630] [Rank 92] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank92]:[E219 22:13:12.496257626 ProcessGroupNCCL.cpp:636] [Rank 92] To avoid data inconsistency, we are taking the entire process down.
[rank90]:[E219 22:13:12.497749584 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 90] Timeout at NCCL work: 16648, last enqueued NCCL work: 16648, last completed NCCL work: 16647.
[rank90]:[E219 22:13:12.497766895 ProcessGroupNCCL.cpp:630] [Rank 90] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank90]:[E219 22:13:12.497772505 ProcessGroupNCCL.cpp:636] [Rank 90] To avoid data inconsistency, we are taking the entire process down.
[rank30]:[E219 22:13:12.421555295 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 30] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank30]:[E219 22:13:12.421578496 ProcessGroupNCCL.cpp:630] [Rank 30] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank30]:[E219 22:13:12.421584416 ProcessGroupNCCL.cpp:636] [Rank 30] To avoid data inconsistency, we are taking the entire process down.
[rank30]:[E219 22:13:12.423459795 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 30] Process group watchdog thread terminated with exception: [Rank 30] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x761b2f0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x761ae442a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x761ae4431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x761ae443361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x761b2fc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x761b33a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x761b33b26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 30] Process group watchdog thread terminated with exception: [Rank 30] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600005 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x761b2f0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x761ae442a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x761ae4431bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x761ae443361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x761b2fc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x761b33a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x761b33b26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x761b2f0db446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x761ae40a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x761b2fc585c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x761b33a94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x761b33b26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank92]:[E219 22:13:12.554566105 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 92] Process group watchdog thread terminated with exception: [Rank 92] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7adde196c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7add9702a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7add97031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7add9703361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7adde265c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7adde6694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7adde6726850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 92] Process group watchdog thread terminated with exception: [Rank 92] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7adde196c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7add9702a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7add97031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7add9703361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7adde265c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7adde6694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7adde6726850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7adde196c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7add96ca071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7adde265c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7adde6694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7adde6726850 in /lib/x86_64-linux-gnu/libc.so.6)
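
Note: two distinct collectives are timing out here. Ranks 26 and 30 time out on SeqNum=16646 with NumelIn=495074300 (a gradient-sized ALLREDUCE), while ranks 92, 94 and 90 time out on SeqNum=16648 with NumelIn=1; a one-element ALLREDUCE is typically a scalar synchronization that is merely queued behind the stuck large collective, not slow itself. A hypothetical sketch of the kind of call that produces NumelIn=1 (the actual call site cannot be identified from this log):

import torch
import torch.distributed as dist

# Hypothetical: reducing a single scalar (e.g. a loss value or a stop flag)
# across ranks yields an ALLREDUCE with NumelIn=1 / NumelOut=1.
flag = torch.ones(1, device="cuda")
dist.all_reduce(flag, op=dist.ReduceOp.SUM)
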
[rank94]:[E219 22:13:12.563420049 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 94] Process group watchdog thread terminated with exception: [Rank 94] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x797864393446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x79781962a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x797819631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x79781963361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7978644ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x797868c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x797868d26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 94] Process group watchdog thread terminated with exception: [Rank 94] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600033 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x797864393446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x79781962a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x797819631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x79781963361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7978644ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x797868c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x797868d26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x797864393446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7978192a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7978644ee5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x797868c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x797868d26850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank90]:[E219 22:13:12.624572417 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 90] Process group watchdog thread terminated with exception: [Rank 90] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600031 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e49ce376446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e498362a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e4983631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e498363361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e49cee555c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e49d2c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e49d2d26850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 90] Process group watchdog thread terminated with exception: [Rank 90] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16648, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=600000) ran for 600031 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e49ce376446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e498362a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e4983631bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e498363361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7e49cee555c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7e49d2c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7e49d2d26850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e49ce376446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7e49832a071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7e49cee555c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7e49d2c94ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7e49d2d26850 in /lib/x86_64-linux-gnu/libc.so.6)
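
Note: the "Timeout at NCCL work: X, last enqueued NCCL work: Y, last completed NCCL work: Z" lines show ranks stuck at different sequence numbers (16645, 16646 and 16648), i.e. the job desynchronized rather than all ranks stalling on one uniformly slow collective. A sketch of enabling PyTorch's desync reporting for a rerun, under the assumption that these TORCH_NCCL_* variable names (per recent PyTorch releases; older releases used the NCCL_* prefix) apply to this build and are set before the process group is created:

import os

# Must be set before init_process_group; setting them in the launcher
# environment works as well.
os.environ["TORCH_NCCL_DESYNC_DEBUG"] = "1"   # report which ranks are behind when a timeout fires
os.environ["TORCH_NCCL_BLOCKING_WAIT"] = "1"  # surface the error in the calling thread instead of the watchdog
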
[rank100]:[E219 22:13:13.994296216 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 100] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank100]:[E219 22:13:13.994323897 ProcessGroupNCCL.cpp:630] [Rank 100] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank100]:[E219 22:13:13.994329207 ProcessGroupNCCL.cpp:636] [Rank 100] To avoid data inconsistency, we are taking the entire process down.
[rank100]:[E219 22:13:13.034318674 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 100] Process group watchdog thread terminated with exception: [Rank 100] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600032 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x751e1cce7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x751dd202a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x751dd2031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x751dd203361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x751e1d85e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x751e21694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x751e21726850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 100] Process group watchdog thread terminated with exception: [Rank 100] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600032 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x751e1cce7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x751dd202a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x751dd2031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x751dd203361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x751e1d85e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x751e21694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x751e21726850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x751e1cce7446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x751dd1ca071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x751e1d85e5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x751e21694ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x751e21726850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank98]:[E219 22:13:13.094323644 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 98] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank98]:[E219 22:13:13.094351485 ProcessGroupNCCL.cpp:630] [Rank 98] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank98]:[E219 22:13:13.094358405 ProcessGroupNCCL.cpp:636] [Rank 98] To avoid data inconsistency, we are taking the entire process down.
[rank98]:[E219 22:13:13.096253944 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 98] Process group watchdog thread terminated with exception: [Rank 98] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x727a8feb5446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x727a4522a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x727a45231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x727a4523361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x727a9001c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x727a94894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x727a94926850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 98] Process group watchdog thread terminated with exception: [Rank 98] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600003 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x727a8feb5446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x727a4522a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x727a45231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x727a4523361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x727a9001c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x727a94894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x727a94926850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x727a8feb5446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x727a44ea071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x727a9001c5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x727a94894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x727a94926850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank102]:[E219 22:13:13.108968957 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 102] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank102]:[E219 22:13:13.108996237 ProcessGroupNCCL.cpp:630] [Rank 102] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank102]:[E219 22:13:13.109023968 ProcessGroupNCCL.cpp:636] [Rank 102] To avoid data inconsistency, we are taking the entire process down.
[rank102]:[E219 22:13:13.110868916 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 102] Process group watchdog thread terminated with exception: [Rank 102] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600009 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7743e7e9f446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77439d22a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77439d231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77439d23361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7743e7ffa5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7743ec894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7743ec926850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 102] Process group watchdog thread terminated with exception: [Rank 102] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16646, OpType=ALLREDUCE, NumelIn=495074300, NumelOut=495074300, Timeout(ms)=600000) ran for 600009 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7743e7e9f446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x77439d22a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x77439d231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x77439d23361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7743e7ffa5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7743ec894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7743ec926850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7743e7e9f446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x77439cea071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7743e7ffa5c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x94ac3 (0x7743ec894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: <unknown function> + 0x126850 (0x7743ec926850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank101]:[E219 22:13:13.346103496 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 101] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank101]:[E219 22:13:13.346130386 ProcessGroupNCCL.cpp:630] [Rank 101] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank101]:[E219 22:13:13.346137376 ProcessGroupNCCL.cpp:636] [Rank 101] To avoid data inconsistency, we are taking the entire process down.
[rank97]:[E219 22:13:13.346425901 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 97] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank97]:[E219 22:13:13.346450951 ProcessGroupNCCL.cpp:630] [Rank 97] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank97]:[E219 22:13:13.346457331 ProcessGroupNCCL.cpp:636] [Rank 97] To avoid data inconsistency, we are taking the entire process down.
[rank99]:[E219 22:13:13.346684084 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 99] Timeout at NCCL work: 16645, last enqueued NCCL work: 16648, last completed NCCL work: 16644.
[rank99]:[E219 22:13:13.346702845 ProcessGroupNCCL.cpp:630] [Rank 99] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank99]:[E219 22:13:13.346707815 ProcessGroupNCCL.cpp:636] [Rank 99] To avoid data inconsistency, we are taking the entire process down.
[rank103]:[E219 22:13:13.348077956 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 103] Timeout at NCCL work: 16646, last enqueued NCCL work: 16648, last completed NCCL work: 16645.
[rank103]:[E219 22:13:13.348105736 ProcessGroupNCCL.cpp:630] [Rank 103] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank103]:[E219 22:13:13.348112316 ProcessGroupNCCL.cpp:636] [Rank 103] To avoid data inconsistency, we are taking the entire process down.
[rank97]:[E219 22:13:13.348368120 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 97] Process group watchdog thread terminated with exception: [Rank 97] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600050 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f313096c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f30e602a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f30e6031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f30e603361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f3131a525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f3135494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f3135526850 in /lib/x86_64-linux-gnu/libc.so.6)
[rank101]:[E219 22:13:13.348381630 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 101] Process group watchdog thread terminated with exception: [Rank 101] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600035 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x76123cb6c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7611f222a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7611f2231bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7611f223361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x76123dc635c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x761241894ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x761241926850 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 1 PG GUID 1 Rank 97] Process group watchdog thread terminated with exception: [Rank 97] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=16645, OpType=ALLREDUCE, NumelIn=495229180, NumelOut=495229180, Timeout(ms)=600000) ran for 600050 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f313096c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7f30e602a772 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7f30e6031bb3 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7f30e603361d in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7f3131a525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x94ac3 (0x7f3135494ac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: <unknown function> + 0x126850 (0x7f3135526850 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7f313096c446 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7f30e5ca071b in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7f3131a525c0 in /home/zhaojiang/.local/lib/python3.10/site-packages/torch/lib/libtorch.so)
frame #3: