|
+ deepspeed |
|
[rank7]:[W528 22:09:21.036407041 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank3]:[W528 22:09:22.318229028 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank2]:[W528 22:09:22.392686726 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank4]:[W528 22:09:22.419481521 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank6]:[W528 22:09:22.503251480 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank1]:[W528 22:09:22.567638531 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank5]:[W528 22:09:22.570437274 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
|
[rank0]:[W528 22:09:22.572483520 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. |
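The warnings above are advisory: at barrier time each process has not yet told NCCL which GPU it owns, so rank i guesses GPU i. The guess happens to be correct here, but the message names the real fix. A minimal sketch of both remedies, assuming a torchrun/deepspeed-style launcher that exports LOCAL_RANK (the variable name and backend choice are assumptions, not taken from this log):

    # Sketch: bind each rank to its GPU before the first collective so
    # NCCL never has to guess the rank -> device mapping.
    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])  # exported by the launcher
    torch.cuda.set_device(local_rank)

    # Remedy 1 (recent PyTorch): declare the device when creating the group.
    dist.init_process_group(backend="nccl", device_id=torch.device("cuda", local_rank))

    # Remedy 2: pass the device explicitly at each barrier instead.
    dist.barrier(device_ids=[local_rank])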
|
loading configuration file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k/config.json |
|
|
Model config Qwen2Config { |
|
"_attn_implementation_autoset": true, |
|
"_name_or_path": "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k", |
|
"architectures": [ |
|
"Qwen2ForCausalLM" |
|
], |
|
"attention_dropout": 0.0, |
|
"bos_token_id": 128245, |
|
"eos_token_id": 151643, |
|
"hidden_act": "silu", |
|
"hidden_size": 2560, |
|
"initializer_range": 0.02, |
|
"intermediate_size": 6912, |
|
"max_position_embeddings": 32768, |
|
"max_window_layers": 21, |
|
"model_type": "qwen2", |
|
"num_attention_heads": 20, |
|
"num_hidden_layers": 40, |
|
"num_key_value_heads": 20, |
|
"pad_token_id": 151643, |
|
"rms_norm_eps": 1e-06, |
|
"rope_scaling": null, |
|
"rope_theta": 5000000.0, |
|
"sliding_window": 32768, |
|
"tie_word_embeddings": false, |
|
"torch_dtype": "bfloat16", |
|
"transformers_version": "4.49.0", |
|
"use_cache": true, |
|
"use_sliding_window": false, |
|
"vocab_size": 151646 |
|
} |
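The block above is the resolved Qwen2Config for the checkpoint being fine-tuned. For reference, a minimal sketch of reproducing this load step outside the trainer, with illustrative variable names:

    # Sketch: load the checkpoint exactly as the log reports, in the dtype
    # its config declares ("torch_dtype": "bfloat16").
    import torch
    from transformers import AutoConfig, AutoModelForCausalLM

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"
    config = AutoConfig.from_pretrained(path)  # -> the Qwen2Config shown above
    model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)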
|
|
|
loading weights file /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k/pytorch_model.bin |
|
|
Will use torch_dtype=torch.bfloat16 as defined in model's config object |
|
Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16. |
|
Detected DeepSpeed ZeRO-3: activating zero.init() for this model |
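"Activating zero.init()" means the model is constructed directly into ZeRO-3 parameter shards instead of being materialized in full on every rank. transformers only does this when a live DeepSpeed ZeRO-3 config is registered before from_pretrained; a minimal sketch of that ordering (the ds_config values are illustrative assumptions):

    # Sketch: HfDeepSpeedConfig must exist (and stay referenced) BEFORE
    # from_pretrained so transformers enables zero.init() sharding.
    from transformers import AutoModelForCausalLM
    from transformers.integrations import HfDeepSpeedConfig

    ds_config = {
        "zero_optimization": {"stage": 3},
        "train_micro_batch_size_per_gpu": 1,
        "bf16": {"enabled": True},
    }
    dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive
    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"
    model = AutoModelForCausalLM.from_pretrained(path)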
|
Generate config GenerationConfig { |
|
"bos_token_id": 128245, |
|
"eos_token_id": 151643, |
|
"pad_token_id": 151643 |
|
} |
|
|
|
Sliding Window Attention is enabled but not implemented for `eager`; unexpected results may be encountered. |
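This warning fires because the config carries a sliding_window value while the model is instantiated with the default eager attention kernel, which ignores windowing; note the config above also sets "use_sliding_window": false, so it is cosmetic for this run. A sketch of the two usual remedies (the flash-attention route assumes the flash-attn package is installed):

    # Sketch: either pick a kernel that implements sliding windows, or
    # drop the window so eager/sdpa attention is exact.
    from transformers import AutoConfig, AutoModelForCausalLM

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"

    # Remedy 1: a windowing-aware implementation.
    model = AutoModelForCausalLM.from_pretrained(path, attn_implementation="flash_attention_2")

    # Remedy 2: disable the window entirely.
    config = AutoConfig.from_pretrained(path)
    config.sliding_window = None
    model = AutoModelForCausalLM.from_pretrained(path, config=config)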
|
|
|
|
All model checkpoint weights were used when initializing Qwen2ForCausalLM. |
|
|
|
All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k. |
|
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training. |
|
|
Generation config file not found, using a generation config created from the model config. |
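This message is benign: the checkpoint directory has no generation_config.json, so transformers derives one from the model config, which is exactly the bos/eos/pad block printed above. A sketch of persisting it so later loads stop re-deriving (ids copied from this log):

    # Sketch: write a generation_config.json next to the weights.
    from transformers import GenerationConfig

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"
    gen = GenerationConfig(bos_token_id=128245, eos_token_id=151643, pad_token_id=151643)
    gen.save_pretrained(path)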
|
loading file vocab.json

loading file merges.txt

loading file tokenizer.json

loading file added_tokens.json

loading file special_tokens_map.json

loading file tokenizer_config.json

loading file chat_template.jinja
|
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. |
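The warning means the tokenizer defines tokens whose embedding rows are only meaningful if they were, or will be, trained. The pattern it alludes to, as a minimal sketch (the added token is purely hypothetical):

    # Sketch: after adding special tokens, grow the embedding matrix; the
    # new rows start untrained, hence the warning to fine-tune them.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)

    tokenizer.add_special_tokens({"additional_special_tokens": ["<|mark|>"]})  # hypothetical token
    model.resize_token_embeddings(len(tokenizer))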
|
|
Using /home/hansirui_1st/.cache/torch_extensions/py311_cu124 as PyTorch extensions root...
|
Detected CUDA files, patching ldflags |
|
Emitting ninja build file /home/hansirui_1st/.cache/torch_extensions/py311_cu124/fused_adam/build.ninja... |
|
/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/torch/utils/cpp_extension.py:2059: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. |
|
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST']. |
|
warnings.warn( |
|
Building extension module fused_adam... |
|
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) |
|
Loading extension module fused_adam... |
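The fused_adam lines show DeepSpeed JIT-compiling its fused Adam CUDA kernel through torch.utils.cpp_extension. Per the UserWarning above, the build targets every visible architecture unless told otherwise; a sketch of pinning it (the "8.0" value is an assumption, substitute your GPU's compute capability):

    # Sketch: set before the first extension build is triggered.
    import os

    os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"  # hypothetical: 8.0 = A100-class
    os.environ["MAX_JOBS"] = "8"                # cap ninja's parallel compile jobs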
|
|
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information. |
|
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`. |
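The KV cache exists for autoregressive generation and cannot coexist with gradient checkpointing, so transformers forces it off and warns once per rank. A sketch of making the choice explicit so the warning never fires:

    # Sketch: disable the KV cache up front when training with
    # gradient checkpointing.
    from transformers import AutoModelForCausalLM

    path = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k"
    model = AutoModelForCausalLM.from_pretrained(path)
    model.config.use_cache = False
    model.gradient_checkpointing_enable()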
|
|
wandb: Currently logged in as: xtom to https://api.wandb.ai. Use `wandb login --relogin` to force relogin.
|
wandb: Tracking run with wandb version 0.19.8 |
|
wandb: Run data is saved locally in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k-Q2-2k/wandb/run-20250528_220942-06gqeb33 |
|
wandb: Run `wandb offline` to turn off syncing. |
|
wandb: Syncing run qwen-4b-s3-Q1-10k-Q2-2k |
|
wandb: ⭐️ View project at https://wandb.ai/xtom/Inverse_Alignment

wandb: 🚀 View run at https://wandb.ai/xtom/Inverse_Alignment/runs/06gqeb33
|
Training 1/1 epoch: 0%| | 0/63 [00:00<?, ?it/s]
|
Training 1/1 epoch (loss 1.7146): 2%|β | 1/63 [00:07<07:23, 7.16s/it]
Training 1/1 epoch (loss 1.6844): 3%|β | 2/63 [00:09<04:29, 4.41s/it]
Training 1/1 epoch (loss 1.7634): 5%|β | 3/63 [00:10<02:41, 2.70s/it]
Training 1/1 epoch (loss 1.6159): 6%|β | 4/63 [00:10<01:52, 1.90s/it]
Training 1/1 epoch (loss 1.6885): 8%|β | 5/63 [00:11<01:24, 1.46s/it]
Training 1/1 epoch (loss 1.7611): 10%|β | 6/63 [00:12<01:08, 1.19s/it]
Training 1/1 epoch (loss 1.7074): 11%|β | 7/63 [00:13<00:57, 1.02s/it]
Training 1/1 epoch (loss 1.7700): 13%|ββ | 8/63 [00:13<00:54, 1.01it/s]
Training 1/1 epoch (loss 1.6734): 14%|ββ | 9/63 [00:14<00:48, 1.12it/s]
Training 1/1 epoch (loss 1.6348): 16%|ββ | 10/63 [00:15<00:43, 1.22it/s]
Training 1/1 epoch (loss 1.7091): 17%|ββ | 11/63 [00:15<00:40, 1.30it/s]
Training 1/1 epoch (loss 1.7345): 19%|ββ | 12/63 [00:16<00:37, 1.37it/s]
Training 1/1 epoch (loss 1.7241): 21%|ββ | 13/63 [00:17<00:35, 1.41it/s]
Training 1/1 epoch (loss 1.6559): 22%|βββ | 14/63 [00:17<00:34, 1.41it/s]
Training 1/1 epoch (loss 1.6420): 24%|βββ | 15/63 [00:18<00:33, 1.45it/s]
Training 1/1 epoch (loss 1.6910): 25%|βββ | 16/63 [00:19<00:32, 1.44it/s]
Training 1/1 epoch (loss 1.8611): 27%|βββ | 17/63 [00:19<00:31, 1.46it/s]
Training 1/1 epoch (loss 1.7476): 29%|βββ | 18/63 [00:20<00:30, 1.49it/s]
Training 1/1 epoch (loss 1.8110): 30%|βββ | 19/63 [00:21<00:29, 1.49it/s]
Training 1/1 epoch (loss 1.8342): 32%|ββββ | 20/63 [00:21<00:28, 1.52it/s]
Training 1/1 epoch (loss 1.7625): 33%|ββββ | 21/63 [00:22<00:27, 1.51it/s]
Training 1/1 epoch (loss 1.7169): 35%|ββββ | 22/63 [00:23<00:27, 1.47it/s]
Training 1/1 epoch (loss 1.7240): 37%|ββββ | 23/63 [00:23<00:26, 1.50it/s]
Training 1/1 epoch (loss 1.7048): 38%|ββββ | 24/63 [00:24<00:26, 1.46it/s]
Training 1/1 epoch (loss 1.6184): 40%|ββββ | 25/63 [00:25<00:25, 1.47it/s]
Training 1/1 epoch (loss 1.6789): 41%|βββββ | 26/63 [00:25<00:24, 1.50it/s]
Training 1/1 epoch (loss 1.7730): 43%|βββββ | 27/63 [00:26<00:23, 1.50it/s]
Training 1/1 epoch (loss 1.7459): 44%|βββββ | 28/63 [00:27<00:23, 1.51it/s]
Training 1/1 epoch (loss 1.7494): 46%|βββββ | 29/63 [00:27<00:22, 1.49it/s]
Training 1/1 epoch (loss 1.6847): 48%|βββββ | 30/63 [00:28<00:23, 1.43it/s]
Training 1/1 epoch (loss 1.8040): 49%|βββββ | 31/63 [00:29<00:22, 1.42it/s]
Training 1/1 epoch (loss 1.7556): 51%|βββββ | 32/63 [00:30<00:22, 1.35it/s]
Training 1/1 epoch (loss 1.6802): 52%|ββββββ | 33/63 [00:31<00:22, 1.34it/s]
Training 1/1 epoch (loss 1.7203): 54%|ββββββ | 34/63 [00:31<00:21, 1.33it/s]
Training 1/1 epoch (loss 1.6790): 56%|ββββββ | 35/63 [00:32<00:20, 1.35it/s]
Training 1/1 epoch (loss 1.7255): 57%|ββββββ | 36/63 [00:33<00:20, 1.34it/s]
Training 1/1 epoch (loss 1.6554): 59%|ββββββ | 37/63 [00:34<00:19, 1.34it/s]
Training 1/1 epoch (loss 1.6295): 60%|ββββββ | 38/63 [00:34<00:18, 1.35it/s]
Training 1/1 epoch (loss 1.7125): 62%|βββββββ | 39/63 [00:35<00:18, 1.33it/s]
Training 1/1 epoch (loss 1.7542): 63%|βββββββ | 40/63 [00:36<00:17, 1.30it/s]
Training 1/1 epoch (loss 1.7085): 65%|βββββββ | 41/63 [00:37<00:16, 1.30it/s]
Training 1/1 epoch (loss 1.5987): 67%|βββββββ | 42/63 [00:37<00:15, 1.32it/s]
Training 1/1 epoch (loss 1.7950): 68%|βββββββ | 43/63 [00:38<00:15, 1.33it/s]
Training 1/1 epoch (loss 1.6070): 70%|βββββββ | 44/63 [00:39<00:14, 1.33it/s]
Training 1/1 epoch (loss 1.7500): 71%|ββββββββ | 45/63 [00:40<00:13, 1.33it/s]
Training 1/1 epoch (loss 1.6836): 73%|ββββββββ | 46/63 [00:40<00:12, 1.37it/s]
Training 1/1 epoch (loss 1.6940): 75%|ββββββββ | 47/63 [00:41<00:11, 1.38it/s]
Training 1/1 epoch (loss 1.6777): 76%|ββββββββ | 48/63 [00:42<00:10, 1.39it/s]
Training 1/1 epoch (loss 1.7644): 78%|ββββββββ | 49/63 [00:42<00:10, 1.36it/s]
Training 1/1 epoch (loss 1.6275): 79%|ββββββββ | 50/63 [00:43<00:09, 1.38it/s]
Training 1/1 epoch (loss 1.6958): 81%|ββββββββ | 51/63 [00:44<00:08, 1.37it/s]
Training 1/1 epoch (loss 1.6001): 83%|βββββββββ | 52/63 [00:45<00:07, 1.40it/s]
Training 1/1 epoch (loss 1.6260): 84%|βββββββββ | 53/63 [00:45<00:07, 1.42it/s]
Training 1/1 epoch (loss 1.7592): 86%|βββββββββ | 54/63 [00:46<00:06, 1.45it/s]
Training 1/1 epoch (loss 1.7547): 87%|βββββββββ | 55/63 [00:47<00:05, 1.49it/s]
Training 1/1 epoch (loss 1.6434): 89%|βββββββββ | 56/63 [00:47<00:04, 1.46it/s]
Training 1/1 epoch (loss 1.5495): 90%|βββββββββ | 57/63 [00:48<00:04, 1.48it/s]
Training 1/1 epoch (loss 1.7184): 92%|ββββββββββ| 58/63 [00:49<00:03, 1.49it/s]
Training 1/1 epoch (loss 1.7613): 94%|ββββββββββ| 59/63 [00:49<00:02, 1.46it/s]
Training 1/1 epoch (loss 1.7492): 95%|ββββββββββ| 60/63 [00:50<00:02, 1.46it/s]
Training 1/1 epoch (loss 1.6354): 97%|ββββββββββ| 61/63 [00:51<00:01, 1.41it/s]
Training 1/1 epoch (loss 1.6877): 98%|ββββββββββ| 62/63 [00:51<00:00, 1.44it/s]
Training 1/1 epoch (loss 1.6930): 100%|ββββββββββ| 63/63 [00:52<00:00, 1.20it/s]
|
tokenizer config file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k-Q2-2k/tokenizer_config.json |
|
Special tokens file saved in /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/Qwen1.5-4B/Qwen1.5-4B-s3-Q1-10k-Q2-2k/special_tokens_map.json |
|
wandb: ERROR Problem finishing run |
|
Exception ignored in atexit callback: <bound method rank_zero_only.<locals>.wrapper of <safe_rlhf.logger.Logger object at 0x1551041f9150>> |
|
Traceback (most recent call last): |
|
File "/home/hansirui_1st/jiayi/resist/setting3/safe_rlhf/utils.py", line 212, in wrapper |
|
return func(*args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^ |
|
File "/home/hansirui_1st/jiayi/resist/setting3/safe_rlhf/logger.py", line 183, in close |
|
self.wandb.finish() |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 449, in wrapper |
|
return func(self, *args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 391, in wrapper |
|
return func(self, *args, **kwargs) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 2106, in finish |
|
return self._finish(exit_code) |
|
^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 2127, in _finish |
|
self._atexit_cleanup(exit_code=exit_code) |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 2352, in _atexit_cleanup |
|
self._on_finish() |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/wandb_run.py", line 2609, in _on_finish |
|
wait_with_progress( |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/mailbox/wait_with_progress.py", line 24, in wait_with_progress |
|
return wait_all_with_progress( |
|
^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/mailbox/wait_with_progress.py", line 87, in wait_all_with_progress |
|
return asyncio_compat.run(progress_loop_with_timeout) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/site-packages/wandb/sdk/lib/asyncio_compat.py", line 27, in run |
|
future = executor.submit(runner.run, fn) |
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
File "/aifs4su/hansirui_1st/miniconda3/envs/by-align/lib/python3.11/concurrent/futures/thread.py", line 169, in submit |
|
raise RuntimeError('cannot schedule new futures after ' |
|
RuntimeError: cannot schedule new futures after interpreter shutdown |
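This final traceback is a shutdown-ordering problem, not a training failure: the tokenizer and checkpoint files were already saved above. An atexit hook called wandb.finish(), but by then Python's concurrent.futures machinery was itself shutting down, so the progress-wait loop could not schedule a new future. The usual workaround is to finish the run explicitly while the interpreter is still fully alive; a minimal sketch (train() is a hypothetical stand-in for this script's training loop):

    # Sketch: close wandb deterministically instead of from an atexit hook.
    import wandb

    def train() -> None:
        """Hypothetical stand-in for the actual training entry point."""

    run = wandb.init(project="Inverse_Alignment", name="qwen-4b-s3-Q1-10k-Q2-2k")
    try:
        train()
    finally:
        run.finish()  # flush while threads/executors still exist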
|
|