Question on Training Parameters (Axolotl Error)
First off, I want to say thank you so much for publishing your training parameters. I have had a huge amount of trouble finding good parameters, and this was extremely helpful. Much appreciated!
Second, I wondered if you had any insight into an error I'm getting when I pretty closely replicate your setup. I am running my config on RunPod, using their axolotl-latest image with 4 TB of disk, a 4 TB network volume, and 8 H100s. The config is near the bottom of this message. Training proceeds well after the preprocessing step until the very last step, where I hit the error below. I didn't do anything in particular to install the plugins mentioned in your config, so I'm wondering whether that is the cause. I also don't know how to install Axolotl plugins; my best guess is sketched just below. In any case, any insight is appreciated.
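For what it's worth, the only plugin setup I could think of was pip-installing the packages behind the two plugins before training. The package names below are my guess based on the plugin names (and the axolotl-latest image may already ship them), so please correct me if there is a proper install step I'm missing:

# Guessed dependencies for the Liger and Cut Cross Entropy plugins listed in the config
pip install liger-kernel
pip install cut-cross-entropy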
Thanks again!
Error below:
100%|██████████| 58/58 [1:35:42<00:00, 75.44s/it]
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/workspace/axolotl/src/axolotl/cli/train.py", line 113, in
fire.Fire(do_cli)
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 135, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 468, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/axolotl/src/axolotl/cli/train.py", line 87, in do_cli
return do_train(parsed_cfg, parsed_cli_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/axolotl/src/axolotl/cli/train.py", line 46, in do_train
model, tokenizer, trainer = train(cfg=cfg, dataset_meta=dataset_meta)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/axolotl/src/axolotl/train.py", line 502, in train
execute_training(cfg, trainer, resume_from_checkpoint)
File "/workspace/axolotl/src/axolotl/train.py", line 189, in execute_training
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2701, in _inner_training_loop
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer_callback.py", line 510, in on_train_end
return self.call_event("on_train_end", args, state, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event
result = getattr(callback, event)(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/integrations/integration_utils.py", line 924, in on_train_end
fake_trainer = Trainer(args=args, model=model, processing_class=processing_class, eval_dataset=["fake"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 461, in init
self.create_accelerator_and_postprocess()
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 5099, in create_accelerator_and_postprocess
self.accelerator = Accelerator(**args)
^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/accelerator.py", line 331, in init
raise NotImplementedError(
NotImplementedError: You cannot pass in a deepspeed_plugin when creating a second Accelerator. Please make sure the first Accelerator is initialized with all the plugins you want to use.
[rank0]: Traceback (most recent call last):
[rank0]: File "", line 198, in _run_module_as_main
[rank0]: File "", line 88, in _run_code
[rank0]: File "/workspace/axolotl/src/axolotl/cli/train.py", line 113, in
[rank0]: fire.Fire(do_cli)
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 135, in Fire
[rank0]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 468, in _Fire
[rank0]: component, remaining_args = _CallAndUpdateTrace(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
[rank0]: component = fn(*varargs, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/axolotl/src/axolotl/cli/train.py", line 87, in do_cli
[rank0]: return do_train(parsed_cfg, parsed_cli_args)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/axolotl/src/axolotl/cli/train.py", line 46, in do_train
[rank0]: model, tokenizer, trainer = train(cfg=cfg, dataset_meta=dataset_meta)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/axolotl/src/axolotl/train.py", line 502, in train
[rank0]: execute_training(cfg, trainer, resume_from_checkpoint)
[rank0]: File "/workspace/axolotl/src/axolotl/train.py", line 189, in execute_training
[rank0]: trainer.train(resume_from_checkpoint=resume_from_checkpoint)
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2701, in _inner_training_loop
[rank0]: self.control = self.callback_handler.on_train_end(args, self.state, self.control)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer_callback.py", line 510, in on_train_end
[rank0]: return self.call_event("on_train_end", args, state, control)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event
[rank0]: result = getattr(callback, event)(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/integrations/integration_utils.py", line 924, in on_train_end
[rank0]: fake_trainer = Trainer(args=args, model=model, processing_class=processing_class, eval_dataset=["fake"])
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 461, in init
[rank0]: self.create_accelerator_and_postprocess()
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 5099, in create_accelerator_and_postprocess
[rank0]: self.accelerator = Accelerator(**args)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/accelerator.py", line 331, in init
[rank0]: raise NotImplementedError(
[rank0]: NotImplementedError: You cannot pass in a deepspeed_plugin when creating a second Accelerator. Please make sure the first Accelerator is initialized with all the plugins you want to use.
[rank3]:[E322 09:24:54.411476724 ProcessGroupNCCL.cpp:616] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800014 milliseconds before timing out.
[rank3]:[E322 09:24:54.411862768 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 3] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank6]:[E322 09:24:54.420110609 ProcessGroupNCCL.cpp:616] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800022 milliseconds before timing out.
[rank6]:[E322 09:24:54.420465264 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 6] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank7]:[E322 09:24:54.428111725 ProcessGroupNCCL.cpp:616] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800030 milliseconds before timing out.
[rank7]:[E322 09:24:54.428442508 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 7] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank5]:[E322 09:24:54.451689040 ProcessGroupNCCL.cpp:616] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800053 milliseconds before timing out.
[rank5]:[E322 09:24:54.452063103 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 5] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank4]:[E322 09:24:54.457386126 ProcessGroupNCCL.cpp:616] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800059 milliseconds before timing out.
[rank4]:[E322 09:24:54.457728495 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 4] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank2]:[E322 09:24:54.461126969 ProcessGroupNCCL.cpp:616] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800063 milliseconds before timing out.
[rank2]:[E322 09:24:54.461500830 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 2] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank1]:[E322 09:24:54.472466779 ProcessGroupNCCL.cpp:616] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800074 milliseconds before timing out.
[rank1]:[E322 09:24:54.472791483 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 1] Exception (either an error or timeout) detected by watchdog at work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank7]:[E322 09:24:54.717816150 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 7] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank7]:[E322 09:24:54.717851440 ProcessGroupNCCL.cpp:630] [Rank 7] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank7]:[E322 09:24:54.717866398 ProcessGroupNCCL.cpp:636] [Rank 7] To avoid data inconsistency, we are taking the entire process down.
[rank7]:[E322 09:24:54.721903952 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 7] Process group watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800030 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e7f9e835446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7e7f54019772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e7f54020bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7e7f5402261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x7e7f9e99c5c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x7e7f9fc50ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7e7f9fce1a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank6]:[E322 09:24:55.311119689 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 6] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank6]:[E322 09:24:55.311160077 ProcessGroupNCCL.cpp:630] [Rank 6] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank6]:[E322 09:24:55.311170412 ProcessGroupNCCL.cpp:636] [Rank 6] To avoid data inconsistency, we are taking the entire process down.
[rank6]:[E322 09:24:55.316181405 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 6] Process group watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800022 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x76898622d446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x76893ba19772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x76893ba20bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x76893ba2261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x7689863945c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x768987648ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7689876d9a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank5]:[E322 09:24:55.478823430 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 5] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank5]:[E322 09:24:55.478857933 ProcessGroupNCCL.cpp:630] [Rank 5] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank5]:[E322 09:24:55.478871076 ProcessGroupNCCL.cpp:636] [Rank 5] To avoid data inconsistency, we are taking the entire process down.
[rank5]:[E322 09:24:55.484512028 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 5] Process group watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800053 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x79a21022a446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x79a1c5a19772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x79a1c5a20bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x79a1c5a2261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x79a2103915c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x79a211645ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x79a2116d6a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank3]:[E322 09:24:55.722933591 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 3] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank3]:[E322 09:24:55.722972758 ProcessGroupNCCL.cpp:630] [Rank 3] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank3]:[E322 09:24:55.722986538 ProcessGroupNCCL.cpp:636] [Rank 3] To avoid data inconsistency, we are taking the entire process down.
[rank3]:[E322 09:24:55.728233106 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 3] Process group watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800014 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7ae0d5da8446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7ae08b619772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7ae08b620bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7ae08b62261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x7ae0d5f0f5c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x7ae0d71c3ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7ae0d7254a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank4]:[E322 09:24:55.747429565 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 4] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank4]:[E322 09:24:55.747466704 ProcessGroupNCCL.cpp:630] [Rank 4] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank4]:[E322 09:24:55.747478211 ProcessGroupNCCL.cpp:636] [Rank 4] To avoid data inconsistency, we are taking the entire process down.
[rank2]:[E322 09:24:55.749023098 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 2] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank2]:[E322 09:24:55.749063720 ProcessGroupNCCL.cpp:630] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank2]:[E322 09:24:55.749088017 ProcessGroupNCCL.cpp:636] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[rank2]:[E322 09:24:55.752861440 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800063 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x72a0fa0cb446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x72a0af819772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x72a0af820bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x72a0af82261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x72a0fa2325c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x72a0fb4e6ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x72a0fb577a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank4]:[E322 09:24:55.752955380 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 4] Process group watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800059 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x71db1ca94446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x71dad2219772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x71dad2220bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x71dad222261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x71db1cbfb5c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x71db1deafac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x71db1df40a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
[rank1]:[E322 09:24:55.773772999 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 1] Timeout at NCCL work: 190078, last enqueued NCCL work: 190078, last completed NCCL work: 190077.
[rank1]:[E322 09:24:55.773798265 ProcessGroupNCCL.cpp:630] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank1]:[E322 09:24:55.773808126 ProcessGroupNCCL.cpp:636] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[rank1]:[E322 09:24:55.778186852 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=190078, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800074 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70f8ed5ec446 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x70f8a2e19772 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x70f8a2e20bb3 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x70f8a2e2261d in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: + 0x145c0 (0x70f8ed7535c0 in /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: + 0x94ac3 (0x70f8eea07ac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x70f8eea98a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
W0322 09:25:07.408000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14184 closing signal SIGTERM
W0322 09:25:07.411000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14185 closing signal SIGTERM
W0322 09:25:07.412000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14186 closing signal SIGTERM
W0322 09:25:07.412000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14187 closing signal SIGTERM
W0322 09:25:07.413000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14188 closing signal SIGTERM
W0322 09:25:07.413000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14189 closing signal SIGTERM
W0322 09:25:07.414000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 14191 closing signal SIGTERM
W0322 09:25:37.414000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:916] Unable to shutdown process 14184 via 15, forcefully exiting via 9
E0322 09:25:52.728000 14053 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: -6) local_rank: 6 (pid: 14190) of binary: /root/miniconda3/envs/py3.11/bin/python3
Traceback (most recent call last):
File "/root/miniconda3/envs/py3.11/bin/accelerate", line 8, in
sys.exit(main())
^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1185, in launch_command
multi_gpu_launcher(args)
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/commands/launch.py", line 810, in multi_gpu_launcher
distrib_run.run(args)
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in call
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
axolotl.cli.train FAILED
Failures:
Root Cause (first observed failure):
[0]:
time : 2025-03-22_09:25:07
host : b9cdad076347
rank : 6 (local_rank: 6)
exitcode : -6 (pid: 14190)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 14190
My own config looks like:
base_model: Qwen/QwQ-32B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
#hf_use_auth_token: true
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
dataset_prepared_path: last_run_prepared
datasets:
  - data_files: /workspace/combinedThinking20250317-163923.jsonl
    field_messages: messages
    message_property_mappings:
      content: content
      role: role
    path: json
    roles:
      assistant:
        - assistant
      user:
        - user
    roles_to_train:
      - assistant
    train_on_eos: turn
    type: chat_template
val_set_size: 0.05
output_dir: ./qwq-reddit
sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
wandb_entity: my_entity
wandb_log_model: "checkpoint"
wandb_name: null
wandb_project: qwen32b-finetune
wandb_watch: gradients
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-6
max_grad_norm: 0.2
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
saves_per_epoch: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.02
fsdp:
fsdp_config:
special_tokens:
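For reference, each line of the JSONL above looks roughly like this (a made-up, abridged example; the field names match the field_messages and message_property_mappings settings in the config):

{"messages": [{"role": "user", "content": "example question"}, {"role": "assistant", "content": "example answer with reasoning"}]}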
I haven't seen this error before. Maybe it's because the RunPod container is main:latest, which is out of date? I'd ask in the Axolotl Discord:
https://discord.gg/q3QNp7J9
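If you want to rule out a stale image first, one option is to update the axolotl source checkout the container already uses and reinstall it. This is a rough sketch, assuming /workspace/axolotl (the path in your traceback) is a git checkout and that you don't need any extra optional dependencies:

# Pull the latest axolotl and reinstall it into the container's environment
cd /workspace/axolotl
git pull
pip install -e .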