2025-06-16 15:20:51,193 - running_modal_trl_modal - INFO - 📊 Configuration:
2025-06-16 15:20:51,194 - running_modal_trl_modal - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-16 15:20:51,194 - running_modal_trl_modal - INFO - Dataset: DuongTrongChi/luatvn-split-v_0.2.0
2025-06-16 15:20:51,195 - running_modal_trl_modal - INFO - Training mode: Full parameter training
2025-06-16 15:20:51,195 - running_modal_trl_modal - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-16 15:20:51,196 - running_modal_trl_modal - INFO - Accelerator state:
    Distributed environment: DistributedType.NO
    Num processes: 1
    Process index: 0
    Local process index: 0
    Device: cuda
    Mixed precision type: no
2025-06-16 15:20:51,197 - running_modal_trl_modal - INFO - Number of processes: 1
2025-06-16 15:20:51,197 - running_modal_trl_modal - INFO - Device: cuda
2025-06-16 15:20:51,198 - running_modal_trl_modal - INFO - Mixed precision: no
2025-06-16 15:20:51,204 - running_modal_trl_modal - INFO - 📚 Loading tokenizer...
2025-06-16 15:20:53,725 - running_modal_trl_modal - INFO - 🔧 Loading model...
2025-06-16 15:21:13,765 - running_modal_trl_modal - INFO - 🔥 Full Parameter Training Enabled
2025-06-16 15:21:13,765 - running_modal_trl_modal - INFO - Total parameters: 1,720,574,976
2025-06-16 15:21:13,766 - running_modal_trl_modal - INFO - Trainable parameters: 1,720,574,976
2025-06-16 15:21:13,766 - running_modal_trl_modal - INFO - Trainable %: 100.00%
2025-06-16 15:21:13,768 - running_modal_trl_modal - INFO - 📊 Preparing dataset...
2025-06-16 15:22:52,395 - running_modal_trl_modal - INFO - Dataset size: 151294 training examples
2025-06-16 15:25:55,129 - modal-client - WARNING - Received a cancellation signal while processing input ('in-01JXWN7VJFCDJA27HPQJDC4TMX:1750087233108-0',)

2025-06-16 15:28:23,881 - running_modal_trl_modal - INFO - 📊 Configuration:
2025-06-16 15:28:23,883 - running_modal_trl_modal - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-16 15:28:23,883 - running_modal_trl_modal - INFO - Dataset: DuongTrongChi/luatvn-split-v_0.2.0
2025-06-16 15:28:23,885 - running_modal_trl_modal - INFO - Training mode: Full parameter training
2025-06-16 15:28:23,885 - running_modal_trl_modal - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-16 15:28:23,886 - running_modal_trl_modal - INFO - Accelerator state:
    Distributed environment: DistributedType.NO
    Num processes: 1
    Process index: 0
    Local process index: 0
    Device: cuda
    Mixed precision type: no
2025-06-16 15:28:23,887 - running_modal_trl_modal - INFO - Number of processes: 1
2025-06-16 15:28:23,887 - running_modal_trl_modal - INFO - Device: cuda
2025-06-16 15:28:23,887 - running_modal_trl_modal - INFO - Mixed precision: no
2025-06-16 15:28:23,892 - running_modal_trl_modal - INFO - 📚 Loading tokenizer...
2025-06-16 15:28:26,992 - running_modal_trl_modal - INFO - 🔧 Loading model...
2025-06-16 15:28:46,840 - running_modal_trl_modal - INFO - 🔥 Full Parameter Training Enabled
2025-06-16 15:28:46,841 - running_modal_trl_modal - INFO - Total parameters: 1,720,574,976
2025-06-16 15:28:46,842 - running_modal_trl_modal - INFO - Trainable parameters: 1,720,574,976
2025-06-16 15:28:46,842 - running_modal_trl_modal - INFO - Trainable %: 100.00%
2025-06-16 15:28:46,847 - running_modal_trl_modal - INFO - 📊 Preparing dataset...
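Editor's note: the configuration banner above (repeated for each retry below) is consistent with a script built on Hugging Face accelerate. A minimal sketch of how such a banner can be emitted, assuming accelerate is in use; the logger name matches the log, everything else is illustrative rather than the actual training script:

    import logging
    from accelerate import Accelerator

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("running_modal_trl_modal")

    accelerator = Accelerator()  # picks up the (non-)distributed environment from the launcher
    logger.info("📊 Configuration:")
    logger.info("Model: %s", "Qwen/Qwen3-1.7B-Base")
    logger.info("Dataset: %s", "DuongTrongChi/luatvn-split-v_0.2.0")
    logger.info("Accelerator state: %s", accelerator.state)  # multi-line repr, as seen above
    logger.info("Number of processes: %d", accelerator.num_processes)
    logger.info("Device: %s", accelerator.device)
    logger.info("Mixed precision: %s", accelerator.mixed_precision)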
2025-06-16 15:29:53,783 - modal-client - WARNING - Received a cancellation signal while processing input ('in-01JXWNNRD96M7PBQD9GSPA271Z:1750087688619-0',)
2025-06-16 15:29:55,361 - modal-client - WARNING - Successfully canceled input ('in-01JXWNNRD96M7PBQD9GSPA271Z:1750087688619-0',)

2025-06-16 15:48:04,930 - running_modal_trl_modal - INFO - 📊 Configuration:
2025-06-16 15:48:05,158 - running_modal_trl_modal - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-16 15:48:05,159 - running_modal_trl_modal - INFO - Dataset: DuongTrongChi/luatvn-split-v_0.2.0
2025-06-16 15:48:05,160 - running_modal_trl_modal - INFO - Training mode: Full parameter training
2025-06-16 15:48:05,161 - running_modal_trl_modal - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-16 15:48:05,162 - running_modal_trl_modal - INFO - Accelerator state:
    Distributed environment: DistributedType.NO
    Num processes: 1
    Process index: 0
    Local process index: 0
    Device: cuda
    Mixed precision type: bf16
2025-06-16 15:48:05,163 - running_modal_trl_modal - INFO - Number of processes: 1
2025-06-16 15:48:05,164 - running_modal_trl_modal - INFO - Device: cuda
2025-06-16 15:48:05,164 - running_modal_trl_modal - INFO - Mixed precision: bf16
2025-06-16 15:48:05,171 - running_modal_trl_modal - INFO - 📚 Loading tokenizer...
2025-06-16 15:48:07,057 - running_modal_trl_modal - INFO - 🔧 Loading model...
2025-06-16 15:48:26,288 - running_modal_trl_modal - INFO - 🔥 Full Parameter Training Enabled
2025-06-16 15:48:26,290 - running_modal_trl_modal - INFO - Total parameters: 1,720,574,976
2025-06-16 15:48:26,291 - running_modal_trl_modal - INFO - Trainable parameters: 1,720,574,976
2025-06-16 15:48:26,292 - running_modal_trl_modal - INFO - Trainable %: 100.00%
2025-06-16 15:48:26,299 - running_modal_trl_modal - INFO - 📊 Preparing dataset...

2025-06-16 16:36:50,234 - __main__ - INFO - 📊 Configuration:
2025-06-16 16:36:50,416 - __main__ - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-16 16:36:50,417 - __main__ - INFO - Dataset: DuongTrongChi/luatvn-split-v_0.2.0
2025-06-16 16:36:50,417 - __main__ - INFO - Training mode: Full parameter training
2025-06-16 16:36:50,417 - __main__ - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-16 16:36:50,418 - __main__ - INFO - Accelerator state:
    Distributed environment: DistributedType.MULTI_GPU
    Backend: nccl
    Num processes: 8
    Process index: 0
    Local process index: 0
    Device: cuda:0
    Mixed precision type: bf16
2025-06-16 16:36:50,418 - __main__ - INFO - Number of processes: 8
2025-06-16 16:36:50,418 - __main__ - INFO - Device: cuda:0
2025-06-16 16:36:50,419 - __main__ - INFO - Mixed precision: bf16
2025-06-16 16:36:50,419 - __main__ - INFO - 🚀 DDP Optimizations:
2025-06-16 16:36:50,419 - __main__ - INFO - DDP bucket size: 25MB
2025-06-16 16:36:50,420 - __main__ - INFO - DDP broadcast buffers: True
2025-06-16 16:36:50,420 - __main__ - INFO - DDP find unused parameters: False
2025-06-16 16:36:50,420 - __main__ - INFO - Strategy: Data Parallelism - each GPU has full model copy
2025-06-16 16:36:50,426 - __main__ - INFO - 📚 Loading tokenizer... (×8, one entry per rank; first timestamp shown)
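Editor's note: between the 15:48 retry and the 16:36 run, the job moves from a single process to bf16 mixed precision and then to an 8-process NCCL DDP launch with the knobs logged above (25MB buckets, broadcast buffers on, no unused-parameter search). A sketch of wiring exactly those values through accelerate; the actual script may set them elsewhere (e.g., via trainer arguments), so treat this as an assumption:

    from accelerate import Accelerator
    from accelerate.utils import DistributedDataParallelKwargs

    ddp_kwargs = DistributedDataParallelKwargs(
        bucket_cap_mb=25,              # "DDP bucket size: 25MB"
        broadcast_buffers=True,        # "DDP broadcast buffers: True"
        find_unused_parameters=False,  # "DDP find unused parameters: False"
    )
    accelerator = Accelerator(mixed_precision="bf16", kwargs_handlers=[ddp_kwargs])

find_unused_parameters=False skips DDP's per-step graph walk for unused parameters, which is the cheap and correct setting when every parameter receives gradients, as in full-parameter training.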
2025-06-16 16:36:52,216 - __main__ - INFO - 🔧 Loading model... (×8, one entry per rank; first timestamp shown)
2025-06-16 16:37:10,792 - __main__ - INFO - 📊 Preparing dataset... (×8, one entry per rank; first timestamp shown)
2025-06-16 16:37:10,837 - __main__ - INFO - 🔥 Full Parameter Training Enabled
2025-06-16 16:37:10,838 - __main__ - INFO - Total parameters: 1,720,574,976
2025-06-16 16:37:10,838 - __main__ - INFO - Trainable parameters: 1,720,574,976
2025-06-16 16:37:10,839 - __main__ - INFO - Trainable %: 100.00%
2025-06-16 16:38:12,513 - __main__ - INFO - Dataset size: 151294 training examples (×8, one entry per rank; first timestamp shown)

2025-06-18 16:09:59,770 - __main__ - INFO - 📊 Configuration:
2025-06-18 16:09:59,954 - __main__ - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-18 16:09:59,954 - __main__ - INFO - Dataset: thangvip/tokenized-ds-qwen3-legal-mixed
2025-06-18 16:09:59,955 - __main__ - INFO - Training mode: Full parameter training
2025-06-18 16:09:59,955 - __main__ - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-18 16:09:59,956 - __main__ - INFO - Accelerator state:
    Distributed environment: DistributedType.MULTI_GPU
    Backend: nccl
    Num processes: 8
    Process index: 0
    Local process index: 0
    Device: cuda:0
    Mixed precision type: bf16
2025-06-18 16:09:59,956 - __main__ - INFO - Number of processes: 8
2025-06-18 16:09:59,957 - __main__ - INFO - Device: cuda:0
2025-06-18 16:09:59,957 - __main__ - INFO - Mixed precision: bf16
2025-06-18 16:09:59,957 - __main__ - INFO - 🚀 DDP Optimizations:
2025-06-18 16:09:59,958 - __main__ - INFO - DDP bucket size: 25MB
2025-06-18 16:09:59,958 - __main__ - INFO - DDP broadcast buffers: True
2025-06-18 16:09:59,959 - __main__ - INFO - DDP find unused parameters: False
2025-06-18 16:09:59,959 - __main__ - INFO - Strategy: Data Parallelism - each GPU has full model copy
2025-06-18 16:09:59,959 - __main__ - INFO - 📚 Loading tokenizer... (×8, one entry per rank; first timestamp shown)
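Editor's note: the Total/Trainable parameter lines above (1,720,574,976 and 100.00% under full-parameter training) follow the standard PyTorch counting idiom; a self-contained sketch, with function and argument names of my choosing:

    def log_param_counts(model, logger):
        # Count all parameters vs. those with requires_grad=True.
        total = sum(p.numel() for p in model.parameters())
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        logger.info("Total parameters: %s", f"{total:,}")
        logger.info("Trainable parameters: %s", f"{trainable:,}")
        logger.info("Trainable %%: %.2f%%", 100.0 * trainable / total)

For full-parameter training the two counts coincide, hence the 100.00% line.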
2025-06-18 16:10:01,904 - __main__ - INFO - 🔧 Loading model... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:10:20,442 - __main__ - INFO - 📊 Preparing dataset... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:10:20,455 - __main__ - INFO - 🔥 Full Parameter Training Enabled
2025-06-18 16:10:20,455 - __main__ - INFO - Total parameters: 1,720,574,976
2025-06-18 16:10:20,456 - __main__ - INFO - Trainable parameters: 1,720,574,976
2025-06-18 16:10:20,456 - __main__ - INFO - Trainable %: 100.00%

2025-06-18 16:15:37,409 - __main__ - INFO - 📊 Configuration:
2025-06-18 16:15:37,699 - __main__ - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-18 16:15:37,700 - __main__ - INFO - Dataset: thangvip/tokenized-ds-qwen3-legal-mixed
2025-06-18 16:15:37,700 - __main__ - INFO - Training mode: Full parameter training
2025-06-18 16:15:37,700 - __main__ - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-18 16:15:37,701 - __main__ - INFO - Accelerator state:
    Distributed environment: DistributedType.MULTI_GPU
    Backend: nccl
    Num processes: 8
    Process index: 0
    Local process index: 0
    Device: cuda:0
    Mixed precision type: bf16
2025-06-18 16:15:37,701 - __main__ - INFO - Number of processes: 8
2025-06-18 16:15:37,702 - __main__ - INFO - Device: cuda:0
2025-06-18 16:15:37,702 - __main__ - INFO - Mixed precision: bf16
2025-06-18 16:15:37,702 - __main__ - INFO - 🚀 DDP Optimizations:
2025-06-18 16:15:37,703 - __main__ - INFO - DDP bucket size: 25MB
2025-06-18 16:15:37,703 - __main__ - INFO - DDP broadcast buffers: True
2025-06-18 16:15:37,704 - __main__ - INFO - DDP find unused parameters: False
2025-06-18 16:15:37,704 - __main__ - INFO - Strategy: Data Parallelism - each GPU has full model copy
2025-06-18 16:15:37,708 - __main__ - INFO - 📚 Loading tokenizer... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:15:39,368 - __main__ - INFO - 🔧 Loading model... (×8, one entry per rank; first timestamp shown)
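Editor's note: the "Loading tokenizer"/"Loading model" steps map onto the usual transformers calls. A sketch assuming the standard API; loading in bfloat16 is an inference from the mixed-precision setting, not confirmed by the log:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-1.7B-Base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption, matching "Mixed precision: bf16"
    )

Under DDP each of the 8 ranks loads its own full copy of the model, which is why the loading messages appear once per rank.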
2025-06-18 16:15:57,581 - __main__ - INFO - 📊 Preparing dataset... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:15:57,659 - __main__ - INFO - 🔥 Full Parameter Training Enabled
2025-06-18 16:15:57,661 - __main__ - INFO - Total parameters: 1,720,574,976
2025-06-18 16:15:57,661 - __main__ - INFO - Trainable parameters: 1,720,574,976
2025-06-18 16:15:57,662 - __main__ - INFO - Trainable %: 100.00%
2025-06-18 16:16:41,073 - __main__ - INFO - Dataset size: 392686 training examples (×8, one entry per rank; first timestamp shown)
2025-06-18 16:16:41,176 - __main__ - INFO - 🎯 Creating SFT Trainer... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:16:47,466 - accelerate.utils.other - WARNING - Detected kernel version 4.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
2025-06-18 16:16:49,491 - __main__ - INFO - 🚂 Starting TRL training... (×8, one entry per rank; first timestamp shown)
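Editor's note: "Creating SFT Trainer" and "Starting TRL training" correspond to TRL's SFTTrainer. A sketch of the likely shape; exact arguments depend on the TRL version and are assumptions (the dataset name suggests it arrives pre-tokenized). The kernel-version warning is accelerate's host-environment check and is advisory unless hangs actually occur:

    from trl import SFTConfig, SFTTrainer

    trainer = SFTTrainer(
        model=model,                  # from the loading step above
        train_dataset=train_dataset,  # 392,686 examples per the log
        args=SFTConfig(
            output_dir="qwen3-legal-sft",  # illustrative name
            bf16=True,
        ),
    )
    trainer.train()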
2025-06-18 16:28:40,015 - __main__ - INFO - 📊 Configuration:
2025-06-18 16:28:40,196 - __main__ - INFO - Model: Qwen/Qwen3-1.7B-Base
2025-06-18 16:28:40,197 - __main__ - INFO - Dataset: thangvip/tokenized-ds-qwen3-legal-mixed
2025-06-18 16:28:40,197 - __main__ - INFO - Training mode: Full parameter training
2025-06-18 16:28:40,198 - __main__ - INFO - Distributed strategy: DDP (DistributedDataParallel)
2025-06-18 16:28:40,198 - __main__ - INFO - Accelerator state:
    Distributed environment: DistributedType.MULTI_GPU
    Backend: nccl
    Num processes: 8
    Process index: 0
    Local process index: 0
    Device: cuda:0
    Mixed precision type: bf16
2025-06-18 16:28:40,198 - __main__ - INFO - Number of processes: 8
2025-06-18 16:28:40,199 - __main__ - INFO - Device: cuda:0
2025-06-18 16:28:40,199 - __main__ - INFO - Mixed precision: bf16
2025-06-18 16:28:40,199 - __main__ - INFO - 🚀 DDP Optimizations:
2025-06-18 16:28:40,200 - __main__ - INFO - DDP bucket size: 25MB
2025-06-18 16:28:40,200 - __main__ - INFO - DDP broadcast buffers: True
2025-06-18 16:28:40,200 - __main__ - INFO - DDP find unused parameters: False
2025-06-18 16:28:40,201 - __main__ - INFO - Strategy: Data Parallelism - each GPU has full model copy
2025-06-18 16:28:40,201 - __main__ - INFO - 📚 Loading tokenizer... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:28:42,149 - __main__ - INFO - 🔧 Loading model... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:29:00,350 - __main__ - INFO - 📊 Preparing dataset... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:29:00,397 - __main__ - INFO - 🔥 Full Parameter Training Enabled
2025-06-18 16:29:00,398 - __main__ - INFO - Total parameters: 1,720,574,976
2025-06-18 16:29:00,398 - __main__ - INFO - Trainable parameters: 1,720,574,976
2025-06-18 16:29:00,398 - __main__ - INFO - Trainable %: 100.00%
2025-06-18 16:29:48,530 - __main__ - INFO - Dataset size: 392686 training examples (×8, one entry per rank; first timestamp shown)
2025-06-18 16:29:48,634 - __main__ - INFO - 🎯 Creating SFT Trainer... (×8, one entry per rank; first timestamp shown)
2025-06-18 16:29:55,008 - accelerate.utils.other - WARNING - Detected kernel version 4.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
2025-06-18 16:29:56,436 - __main__ - INFO - 🚂 Starting TRL training... (×8, one entry per rank; first timestamp shown)
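Editor's note: all eight ranks reaching "Starting TRL training" is consistent with a single-node, 8-process launch. One plausible invocation; the script name is hypothetical, and since the job runs on Modal the launcher may in fact be wired up differently:

    accelerate launch --multi_gpu --num_processes 8 --mixed_precision bf16 train.py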