+ deepspeed
[rank4]:[W528 19:41:25.891062542 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank1]:[W528 19:41:25.898976539 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank3]:[W528 19:41:25.925995158 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank0]:[W528 19:41:25.947532303 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank6]:[W528 19:41:25.953822925 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank7]:[W528 19:41:25.964563187 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank2]:[W528 19:41:25.969465241 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
[rank5]:[W528 19:41:25.973669167 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
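The warning above is advisory: each rank happens to pick GPU N by position, and NCCL asks for an explicit rank-to-device binding so the barrier cannot hang on a wrong mapping. A minimal sketch of the fix it suggests, assuming one process per GPU and that the launcher (deepspeed or torchrun) sets LOCAL_RANK:

    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])  # set by the deepspeed/torchrun launcher
    device = torch.device("cuda", local_rank)
    torch.cuda.set_device(device)

    # Passing device_id pins the rank-to-GPU mapping so NCCL no longer guesses.
    dist.init_process_group(backend="nccl", device_id=device)

    # Alternatively, name the device at each barrier call:
    dist.barrier(device_ids=[local_rank])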
loading configuration file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k/config.json
Model config Qwen2Config {
  "_attn_implementation_autoset": true,
  "_name_or_path": "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 128245,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 2816,
  "max_position_embeddings": 32768,
  "max_window_layers": 21,
  "model_type": "qwen2",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "num_key_value_heads": 16,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151646
}
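For reference, the config dump above can be reproduced outside the training run; a minimal sketch using the same checkpoint path:

    from transformers import AutoConfig

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k"
    config = AutoConfig.from_pretrained(path)
    print(config.model_type, config.hidden_size, config.num_hidden_layers)  # qwen2 1024 24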
loading weights file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k/pytorch_model.bin
Will use torch_dtype=torch.bfloat16 as defined in model's config object
Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16.
Detected DeepSpeed ZeRO-3: activating zero.init() for this model
Generate config GenerationConfig {
  "bos_token_id": 128245,
  "eos_token_id": 151643,
  "pad_token_id": 151643
}
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered.
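zero.init() is activated because from_pretrained sees a live DeepSpeed ZeRO-3 config, so parameters are sharded across ranks at construction time; the sliding-window warning fires because the model was built on the `eager` attention path, even though "use_sliding_window" is false in this config. A sketch of the wiring that produces these messages; the ds_config values here are illustrative assumptions, not this run's actual settings:

    import torch
    from transformers import AutoModelForCausalLM
    from transformers.integrations import HfDeepSpeedConfig

    ds_config = {
        "zero_optimization": {"stage": 3},
        "train_micro_batch_size_per_gpu": 1,
        "bf16": {"enabled": True},
    }
    # Keeping this object alive before from_pretrained is what triggers
    # "Detected DeepSpeed ZeRO-3: activating zero.init() for this model".
    dschf = HfDeepSpeedConfig(ds_config)

    model = AutoModelForCausalLM.from_pretrained(
        "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k",
        torch_dtype=torch.bfloat16,
        attn_implementation="sdpa",  # sidesteps the eager sliding-window warning
    )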
All model checkpoint weights were used when initializing Qwen2ForCausalLM.
All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
Generation config file not found, using a generation config created from the model config.
loading file vocab.json
loading file merges.txt
loading file tokenizer.json
loading file added_tokens.json
loading file special_tokens_map.json
loading file tokenizer_config.json
loading file chat_template.jinja
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
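This warning means the tokenizer carries special tokens beyond the checkpoint's base vocabulary, so the corresponding embedding rows must actually get trained. A minimal sketch of the usual follow-up, assuming the same checkpoint path:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)

    # If tokens were added on top of the checkpoint's vocab, grow the embedding
    # matrix to match; the new rows are randomly initialized and need fine-tuning.
    if len(tokenizer) > model.get_input_embeddings().num_embeddings:
        model.resize_token_embeddings(len(tokenizer))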
Using /home/hansirui_1st/.cache/torch_extensions/py311_cu124 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/hansirui_1st/.cache/torch_extensions/py311_cu124/fused_adam/build.ninja...
/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
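The UserWarning above is avoidable: pinning TORCH_CUDA_ARCH_LIST before the fused_adam JIT build compiles only for the intended architecture and shortens build time. A sketch; the "9.0" value is an assumption and should match the actual GPUs:

    import os

    # e.g. "8.0" for A100-class, "9.0" for H100-class cards
    os.environ["TORCH_CUDA_ARCH_LIST"] = "9.0"
    # Shell equivalent, set before launching deepspeed:
    #   export TORCH_CUDA_ARCH_LIST="9.0"
    #   export MAX_JOBS=8   # optionally cap the ninja workers mentioned above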
Loading extension module fused_adam...
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
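This message is benign: Transformers disables the KV cache itself whenever gradient checkpointing is on, but setting the flag explicitly keeps the logs quiet. A minimal sketch:

    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k"
    )
    model.gradient_checkpointing_enable()  # recompute activations to save memory
    model.config.use_cache = False         # the KV cache is unused during training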
wandb: Currently logged in as: xtom to https://api.wandb.ai. Use `wandb login --relogin` to force relogin.
wandb: Tracking run with wandb version 0.19.8
wandb: Run data is saved locally in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-500/wandb/run-20250528_194136-q3qa5vvj
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run qwen-0.5b-s3-Q1-2k-Q2-500
wandb: ⭐️ View project at https://wandb.ai/xtom/Inverse_Alignment
wandb: 🚀 View run at https://wandb.ai/xtom/Inverse_Alignment/runs/q3qa5vvj
Training 1/1 epoch: 0%| | 0/16 [00:00<?, ?it/s]
Training 1/1 epoch (loss 2.0794): 6%|█ | 1/16 [00:06<01:32, 6.15s/it]
Training 1/1 epoch (loss 2.1384): 12%|██ | 2/16 [00:08<00:58, 4.17s/it]
Training 1/1 epoch (loss 2.1982): 19%|██ | 3/16 [00:09<00:31, 2.46s/it]
Training 1/1 epoch (loss 2.2579): 25%|███ | 4/16 [00:09<00:19, 1.61s/it]
Training 1/1 epoch (loss 2.2292): 31%|████ | 5/16 [00:09<00:12, 1.15s/it]
Training 1/1 epoch (loss 2.1540): 38%|████ | 6/16 [00:10<00:08, 1.14it/s]
Training 1/1 epoch (loss 2.2187): 44%|█████ | 7/16 [00:10<00:06, 1.42it/s]
Training 1/1 epoch (loss 2.0539): 50%|█████ | 8/16 [00:11<00:05, 1.60it/s]
Training 1/1 epoch (loss 2.1018): 56%|██████ | 9/16 [00:11<00:03, 1.87it/s]
Training 1/1 epoch (loss 2.0796): 62%|███████ | 10/16 [00:11<00:02, 2.16it/s]
Training 1/1 epoch (loss 2.2038): 69%|███████ | 11/16 [00:12<00:02, 2.39it/s]
Training 1/1 epoch (loss 1.9474): 75%|████████ | 12/16 [00:12<00:01, 2.52it/s]
Training 1/1 epoch (loss 2.2482): 81%|█████████ | 13/16 [00:12<00:01, 2.60it/s]
Training 1/1 epoch (loss 2.1256): 88%|█████████ | 14/16 [00:13<00:00, 2.68it/s]
Training 1/1 epoch (loss 2.0429): 94%|██████████| 15/16 [00:13<00:00, 2.72it/s]
Training 1/1 epoch (loss 2.1137): 100%|██████████| 16/16 [00:13<00:00, 1.15it/s]
tokenizer config file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-500/tokenizer_config.json
Special tokens file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-500/special_tokens_map.json
wandb:
wandb: Run history: train/epoch, train/loss, train/lr, train/step
wandb:
wandb: Run summary:
wandb: train/epoch 1
wandb: train/loss 2.11368
wandb: train/lr 1e-05
wandb: train/step 16
wandb:
wandb: 🚀 View run qwen-0.5b-s3-Q1-2k-Q2-500 at: https://wandb.ai/xtom/Inverse_Alignment/runs/q3qa5vvj
wandb: ⭐️ View project at: https://wandb.ai/xtom/Inverse_Alignment
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-0.5B/Qwen1.5-0.5B-s3-Q1-2k-Q2-500/wandb/run-20250528_194136-q3qa5vvj/logs