[2025-08-26 15:23:01,662][89187] Saving configuration to /home/ubuntu/train_dir/default_experiment/config.json...
[2025-08-26 15:23:01,663][89187] Rollout worker 0 uses device cpu
[2025-08-26 15:23:01,663][89187] Rollout worker 1 uses device cpu
[2025-08-26 15:23:01,664][89187] Rollout worker 2 uses device cpu
[2025-08-26 15:23:01,664][89187] Rollout worker 3 uses device cpu
[2025-08-26 15:23:01,665][89187] Rollout worker 4 uses device cpu
[2025-08-26 15:23:01,665][89187] Rollout worker 5 uses device cpu
[2025-08-26 15:23:01,666][89187] Rollout worker 6 uses device cpu
[2025-08-26 15:23:01,666][89187] Rollout worker 7 uses device cpu
[2025-08-26 15:23:01,707][89187] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-26 15:23:01,708][89187] InferenceWorker_p0-w0: min num requests: 2
[2025-08-26 15:23:01,738][89187] Starting all processes...
[2025-08-26 15:23:01,739][89187] Starting process learner_proc0
[2025-08-26 15:23:01,788][89187] Starting all processes...
[2025-08-26 15:23:01,793][89187] Starting process inference_proc0-0
[2025-08-26 15:23:01,794][89187] Starting process rollout_proc0
[2025-08-26 15:23:01,794][89187] Starting process rollout_proc1
[2025-08-26 15:23:01,794][89187] Starting process rollout_proc2
[2025-08-26 15:23:01,795][89187] Starting process rollout_proc3
[2025-08-26 15:23:01,795][89187] Starting process rollout_proc4
[2025-08-26 15:23:01,795][89187] Starting process rollout_proc5
[2025-08-26 15:23:01,795][89187] Starting process rollout_proc6
[2025-08-26 15:23:01,822][89187] Starting process rollout_proc7
[2025-08-26 15:23:03,996][89752] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-26 15:23:03,996][89752] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-08-26 15:23:04,010][89752] Num visible devices: 1
[2025-08-26 15:23:04,027][89752] Starting seed is not provided
[2025-08-26 15:23:04,027][89752] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-26 15:23:04,027][89752] Initializing actor-critic model on device cuda:0
[2025-08-26 15:23:04,027][89752] RunningMeanStd input shape: (3, 72, 128)
[2025-08-26 15:23:04,028][89752] RunningMeanStd input shape: (1,)
[2025-08-26 15:23:04,030][89770] Worker 4 uses CPU cores [4]
[2025-08-26 15:23:04,040][89752] ConvEncoder: input_channels=3
[2025-08-26 15:23:04,070][89768] Worker 2 uses CPU cores [2]
[2025-08-26 15:23:04,114][89767] Worker 0 uses CPU cores [0]
[2025-08-26 15:23:04,158][89766] Worker 1 uses CPU cores [1]
[2025-08-26 15:23:04,182][89769] Worker 3 uses CPU cores [3]
[2025-08-26 15:23:04,200][89772] Worker 7 uses CPU cores [7]
[2025-08-26 15:23:04,210][89752] Conv encoder output size: 512
[2025-08-26 15:23:04,210][89752] Policy head output size: 512
[2025-08-26 15:23:04,214][89774] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-26 15:23:04,214][89774] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-08-26 15:23:04,214][89771] Worker 5 uses CPU cores [5]
[2025-08-26 15:23:04,218][89752] Created Actor Critic model with architecture:
[2025-08-26 15:23:04,218][89752] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-08-26 15:23:04,228][89774] Num visible devices: 1
[2025-08-26 15:23:04,360][89752] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-08-26 15:23:04,368][89773] Worker 6 uses CPU cores [6]
[2025-08-26 15:23:04,897][89752] No checkpoints found
[2025-08-26 15:23:04,897][89752] Did not load from checkpoint, starting from scratch!
[2025-08-26 15:23:04,897][89752] Initialized policy 0 weights for model version 0
[2025-08-26 15:23:04,899][89752] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-08-26 15:23:04,901][89752] LearnerWorker_p0 finished initialization!
[2025-08-26 15:23:04,951][89774] RunningMeanStd input shape: (3, 72, 128)
[2025-08-26 15:23:04,952][89774] RunningMeanStd input shape: (1,)
[2025-08-26 15:23:04,959][89774] ConvEncoder: input_channels=3
[2025-08-26 15:23:05,010][89774] Conv encoder output size: 512
[2025-08-26 15:23:05,011][89774] Policy head output size: 512
[2025-08-26 15:23:05,034][89187] Inference worker 0-0 is ready!
[2025-08-26 15:23:05,035][89187] All inference workers are ready! Signal rollout workers to start!
[2025-08-26 15:23:05,062][89771] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89768] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89766] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89770] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89769] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89767] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,063][89773] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,064][89772] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-08-26 15:23:05,225][89771] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,229][89766] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,280][89769] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,281][89768] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,282][89772] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,284][89773] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,363][89766] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,414][89770] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,450][89771] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,472][89772] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,474][89773] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,552][89766] Decorrelating experience for 64 frames...
[2025-08-26 15:23:05,562][89769] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,668][89773] Decorrelating experience for 64 frames...
[2025-08-26 15:23:05,688][89767] Decorrelating experience for 0 frames...
[2025-08-26 15:23:05,769][89766] Decorrelating experience for 96 frames...
[2025-08-26 15:23:05,773][89768] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,800][89769] Decorrelating experience for 64 frames...
[2025-08-26 15:23:05,835][89773] Decorrelating experience for 96 frames...
[2025-08-26 15:23:05,851][89770] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,881][89767] Decorrelating experience for 32 frames...
[2025-08-26 15:23:05,949][89772] Decorrelating experience for 64 frames...
[2025-08-26 15:23:05,965][89768] Decorrelating experience for 64 frames...
[2025-08-26 15:23:06,026][89770] Decorrelating experience for 64 frames...
[2025-08-26 15:23:06,102][89772] Decorrelating experience for 96 frames...
[2025-08-26 15:23:06,170][89769] Decorrelating experience for 96 frames...
[2025-08-26 15:23:06,179][89768] Decorrelating experience for 96 frames...
[2025-08-26 15:23:06,281][89770] Decorrelating experience for 96 frames...
[2025-08-26 15:23:06,314][89767] Decorrelating experience for 64 frames...
[2025-08-26 15:23:06,468][89767] Decorrelating experience for 96 frames...
[2025-08-26 15:23:06,482][89771] Decorrelating experience for 64 frames...
[2025-08-26 15:23:06,635][89771] Decorrelating experience for 96 frames...
[2025-08-26 15:23:07,143][89752] Signal inference workers to stop experience collection...
[2025-08-26 15:23:07,145][89774] InferenceWorker_p0-w0: stopping experience collection
[2025-08-26 15:23:08,362][89752] Signal inference workers to resume experience collection...
[2025-08-26 15:23:08,363][89774] InferenceWorker_p0-w0: resuming experience collection
[2025-08-26 15:23:09,362][89187] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 36864. Throughput: 0: nan. Samples: 2744. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-08-26 15:23:09,363][89187] Avg episode reward: [(0, '3.823')]
[2025-08-26 15:23:09,425][89774] Updated weights for policy 0, policy_version 10 (0.0051)
[2025-08-26 15:23:10,715][89774] Updated weights for policy 0, policy_version 20 (0.0007)
[2025-08-26 15:23:11,999][89774] Updated weights for policy 0, policy_version 30 (0.0008)
[2025-08-26 15:23:13,265][89774] Updated weights for policy 0, policy_version 40 (0.0006)
[2025-08-26 15:23:14,362][89187] Fps is (10 sec: 31948.7, 60 sec: 31948.7, 300 sec: 31948.7). Total num frames: 196608. Throughput: 0: 8986.4. Samples: 47676. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:23:14,363][89187] Avg episode reward: [(0, '4.447')]
[2025-08-26 15:23:14,363][89752] Saving new best policy, reward=4.447!
[2025-08-26 15:23:14,557][89774] Updated weights for policy 0, policy_version 50 (0.0008)
[2025-08-26 15:23:15,877][89774] Updated weights for policy 0, policy_version 60 (0.0006)
[2025-08-26 15:23:17,164][89774] Updated weights for policy 0, policy_version 70 (0.0007)
[2025-08-26 15:23:18,461][89774] Updated weights for policy 0, policy_version 80 (0.0006)
[2025-08-26 15:23:19,362][89187] Fps is (10 sec: 31539.1, 60 sec: 31539.1, 300 sec: 31539.1). Total num frames: 352256. Throughput: 0: 6858.8. Samples: 71332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-08-26 15:23:19,363][89187] Avg episode reward: [(0, '4.396')]
[2025-08-26 15:23:19,792][89774] Updated weights for policy 0, policy_version 90 (0.0006)
[2025-08-26 15:23:21,062][89774] Updated weights for policy 0, policy_version 100 (0.0007)
[2025-08-26 15:23:21,700][89187] Heartbeat connected on Batcher_0
[2025-08-26 15:23:21,703][89187] Heartbeat connected on LearnerWorker_p0
[2025-08-26 15:23:21,709][89187] Heartbeat connected on InferenceWorker_p0-w0
[2025-08-26 15:23:21,716][89187] Heartbeat connected on RolloutWorker_w0
[2025-08-26 15:23:21,720][89187] Heartbeat connected on RolloutWorker_w2
[2025-08-26 15:23:21,725][89187] Heartbeat connected on RolloutWorker_w3
[2025-08-26 15:23:21,729][89187] Heartbeat connected on RolloutWorker_w1
[2025-08-26 15:23:21,730][89187] Heartbeat connected on RolloutWorker_w4
[2025-08-26 15:23:21,733][89187] Heartbeat connected on RolloutWorker_w5
[2025-08-26 15:23:21,735][89187] Heartbeat connected on RolloutWorker_w6
[2025-08-26 15:23:21,737][89187] Heartbeat connected on RolloutWorker_w7
[2025-08-26 15:23:22,443][89774] Updated weights for policy 0, policy_version 110 (0.0008)
[2025-08-26 15:23:23,770][89774] Updated weights for policy 0, policy_version 120 (0.0006)
[2025-08-26 15:23:24,362][89187] Fps is (10 sec: 31129.5, 60 sec: 31402.5, 300 sec: 31402.5). Total num frames: 507904. Throughput: 0: 7673.2. Samples: 117842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:23:24,363][89187] Avg episode reward: [(0, '4.591')]
[2025-08-26 15:23:24,364][89752] Saving new best policy, reward=4.591!
[2025-08-26 15:23:25,145][89774] Updated weights for policy 0, policy_version 130 (0.0006)
[2025-08-26 15:23:26,468][89774] Updated weights for policy 0, policy_version 140 (0.0007)
[2025-08-26 15:23:27,859][89774] Updated weights for policy 0, policy_version 150 (0.0008)
[2025-08-26 15:23:29,362][89774] Updated weights for policy 0, policy_version 160 (0.0007)
[2025-08-26 15:23:29,362][89187] Fps is (10 sec: 30310.5, 60 sec: 30924.8, 300 sec: 30924.8). Total num frames: 655360. Throughput: 0: 7952.1. Samples: 161786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:23:29,363][89187] Avg episode reward: [(0, '4.584')]
[2025-08-26 15:23:30,650][89774] Updated weights for policy 0, policy_version 170 (0.0006)
[2025-08-26 15:23:31,927][89774] Updated weights for policy 0, policy_version 180 (0.0007)
[2025-08-26 15:23:33,222][89774] Updated weights for policy 0, policy_version 190 (0.0006)
[2025-08-26 15:23:34,362][89187] Fps is (10 sec: 30310.5, 60 sec: 30965.7, 300 sec: 30965.7). Total num frames: 811008. Throughput: 0: 7324.5. Samples: 185856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-08-26 15:23:34,363][89187] Avg episode reward: [(0, '5.248')]
[2025-08-26 15:23:34,364][89752] Saving new best policy, reward=5.248!
[2025-08-26 15:23:34,532][89774] Updated weights for policy 0, policy_version 200 (0.0008)
[2025-08-26 15:23:35,808][89774] Updated weights for policy 0, policy_version 210 (0.0007)
[2025-08-26 15:23:37,092][89774] Updated weights for policy 0, policy_version 220 (0.0008)
[2025-08-26 15:23:38,386][89774] Updated weights for policy 0, policy_version 230 (0.0007)
[2025-08-26 15:23:39,362][89187] Fps is (10 sec: 31539.1, 60 sec: 31129.6, 300 sec: 31129.6). Total num frames: 970752. Throughput: 0: 7687.1. Samples: 233356. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:23:39,363][89187] Avg episode reward: [(0, '4.654')]
[2025-08-26 15:23:39,654][89774] Updated weights for policy 0, policy_version 240 (0.0006)
[2025-08-26 15:23:40,914][89774] Updated weights for policy 0, policy_version 250 (0.0006)
[2025-08-26 15:23:42,248][89774] Updated weights for policy 0, policy_version 260 (0.0006)
[2025-08-26 15:23:43,546][89774] Updated weights for policy 0, policy_version 270 (0.0006)
[2025-08-26 15:23:44,362][89187] Fps is (10 sec: 31948.6, 60 sec: 31246.6, 300 sec: 31246.6). Total num frames: 1130496. Throughput: 0: 7947.4. Samples: 280902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:23:44,363][89187] Avg episode reward: [(0, '5.155')]
[2025-08-26 15:23:44,879][89774] Updated weights for policy 0, policy_version 280 (0.0008)
[2025-08-26 15:23:46,149][89774] Updated weights for policy 0, policy_version 290 (0.0006)
[2025-08-26 15:23:47,454][89774] Updated weights for policy 0, policy_version 300 (0.0006)
[2025-08-26 15:23:48,721][89774] Updated weights for policy 0, policy_version 310 (0.0007)
[2025-08-26 15:23:49,362][89187] Fps is (10 sec: 31539.3, 60 sec: 31232.0, 300 sec: 31232.0). Total num frames: 1286144. Throughput: 0: 7544.5. Samples: 304524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:23:49,362][89187] Avg episode reward: [(0, '5.855')]
[2025-08-26 15:23:49,366][89752] Saving new best policy, reward=5.855!
[2025-08-26 15:23:50,008][89774] Updated weights for policy 0, policy_version 320 (0.0006)
[2025-08-26 15:23:51,295][89774] Updated weights for policy 0, policy_version 330 (0.0007)
[2025-08-26 15:23:52,562][89774] Updated weights for policy 0, policy_version 340 (0.0007)
[2025-08-26 15:23:53,860][89774] Updated weights for policy 0, policy_version 350 (0.0007)
[2025-08-26 15:23:54,362][89187] Fps is (10 sec: 31948.9, 60 sec: 31402.6, 300 sec: 31402.6). Total num frames: 1449984. Throughput: 0: 7774.3. Samples: 352588. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:23:54,363][89187] Avg episode reward: [(0, '7.038')]
[2025-08-26 15:23:54,363][89752] Saving new best policy, reward=7.038!
[2025-08-26 15:23:55,134][89774] Updated weights for policy 0, policy_version 360 (0.0006)
[2025-08-26 15:23:56,435][89774] Updated weights for policy 0, policy_version 370 (0.0006)
[2025-08-26 15:23:57,688][89774] Updated weights for policy 0, policy_version 380 (0.0007)
[2025-08-26 15:23:58,959][89774] Updated weights for policy 0, policy_version 390 (0.0007)
[2025-08-26 15:23:59,362][89187] Fps is (10 sec: 32357.7, 60 sec: 31457.2, 300 sec: 31457.2). Total num frames: 1609728. Throughput: 0: 7844.1. Samples: 400660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-08-26 15:23:59,363][89187] Avg episode reward: [(0, '8.243')]
[2025-08-26 15:23:59,367][89752] Saving new best policy, reward=8.243!
[2025-08-26 15:24:00,258][89774] Updated weights for policy 0, policy_version 400 (0.0006)
[2025-08-26 15:24:01,500][89774] Updated weights for policy 0, policy_version 410 (0.0006)
[2025-08-26 15:24:02,731][89774] Updated weights for policy 0, policy_version 420 (0.0007)
[2025-08-26 15:24:04,002][89774] Updated weights for policy 0, policy_version 430 (0.0007)
[2025-08-26 15:24:04,362][89187] Fps is (10 sec: 31948.9, 60 sec: 31502.0, 300 sec: 31502.0). Total num frames: 1769472. Throughput: 0: 7857.0. Samples: 424898. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-08-26 15:24:04,363][89187] Avg episode reward: [(0, '10.982')]
[2025-08-26 15:24:04,373][89752] Saving new best policy, reward=10.982!
[2025-08-26 15:24:05,250][89774] Updated weights for policy 0, policy_version 440 (0.0006)
[2025-08-26 15:24:06,560][89774] Updated weights for policy 0, policy_version 450 (0.0007)
[2025-08-26 15:24:07,850][89774] Updated weights for policy 0, policy_version 460 (0.0006)
[2025-08-26 15:24:09,109][89774] Updated weights for policy 0, policy_version 470 (0.0005)
[2025-08-26 15:24:09,362][89187] Fps is (10 sec: 32359.0, 60 sec: 31607.5, 300 sec: 31607.5). Total num frames: 1933312. Throughput: 0: 7898.4. Samples: 473270. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:24:09,363][89187] Avg episode reward: [(0, '12.293')]
[2025-08-26 15:24:09,365][89752] Saving new best policy, reward=12.293!
[2025-08-26 15:24:10,359][89774] Updated weights for policy 0, policy_version 480 (0.0006)
[2025-08-26 15:24:11,622][89774] Updated weights for policy 0, policy_version 490 (0.0006)
[2025-08-26 15:24:12,848][89774] Updated weights for policy 0, policy_version 500 (0.0007)
[2025-08-26 15:24:14,133][89774] Updated weights for policy 0, policy_version 510 (0.0007)
[2025-08-26 15:24:14,362][89187] Fps is (10 sec: 32358.3, 60 sec: 31607.5, 300 sec: 31633.7). Total num frames: 2093056. Throughput: 0: 8007.5. Samples: 522124. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-08-26 15:24:14,363][89187] Avg episode reward: [(0, '14.502')]
[2025-08-26 15:24:14,364][89752] Saving new best policy, reward=14.502!
[2025-08-26 15:24:15,400][89774] Updated weights for policy 0, policy_version 520 (0.0006)
[2025-08-26 15:24:16,648][89774] Updated weights for policy 0, policy_version 530 (0.0006)
[2025-08-26 15:24:17,976][89774] Updated weights for policy 0, policy_version 540 (0.0007)
[2025-08-26 15:24:19,223][89774] Updated weights for policy 0, policy_version 550 (0.0006)
[2025-08-26 15:24:19,362][89187] Fps is (10 sec: 32358.3, 60 sec: 31744.0, 300 sec: 31714.7). Total num frames: 2256896. Throughput: 0: 8009.6. Samples: 546286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:24:19,363][89187] Avg episode reward: [(0, '15.657')]
[2025-08-26 15:24:19,365][89752] Saving new best policy, reward=15.657!
[2025-08-26 15:24:20,465][89774] Updated weights for policy 0, policy_version 560 (0.0007)
[2025-08-26 15:24:21,734][89774] Updated weights for policy 0, policy_version 570 (0.0007)
[2025-08-26 15:24:23,002][89774] Updated weights for policy 0, policy_version 580 (0.0006)
[2025-08-26 15:24:24,250][89774] Updated weights for policy 0, policy_version 590 (0.0006)
[2025-08-26 15:24:24,362][89187] Fps is (10 sec: 32358.3, 60 sec: 31812.3, 300 sec: 31730.3). Total num frames: 2416640. Throughput: 0: 8031.6. Samples: 594776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-08-26 15:24:24,363][89187] Avg episode reward: [(0, '17.068')]
[2025-08-26 15:24:24,363][89752] Saving new best policy, reward=17.068!
[2025-08-26 15:24:25,513][89774] Updated weights for policy 0, policy_version 600 (0.0006)
[2025-08-26 15:24:26,804][89774] Updated weights for policy 0, policy_version 610 (0.0006)
[2025-08-26 15:24:28,120][89774] Updated weights for policy 0, policy_version 620 (0.0006)
[2025-08-26 15:24:29,362][89187] Fps is (10 sec: 31948.7, 60 sec: 32017.0, 300 sec: 31744.0). Total num frames: 2576384. Throughput: 0: 8038.0. Samples: 642614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-08-26 15:24:29,363][89187] Avg episode reward: [(0, '15.875')]
[2025-08-26 15:24:29,420][89774] Updated weights for policy 0, policy_version 630 (0.0008)
[2025-08-26 15:24:30,674][89774] Updated weights for policy 0, policy_version 640 (0.0006)
[2025-08-26 15:24:31,961][89774] Updated weights for policy 0, policy_version 650 (0.0007)
[2025-08-26 15:24:33,210][89774] Updated weights for policy 0, policy_version 660 (0.0006)
[2025-08-26 15:24:34,362][89187] Fps is (10 sec: 32358.5, 60 sec: 32153.6, 300 sec: 31804.2). Total num frames: 2740224. Throughput: 0: 8052.8. Samples: 666902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:24:34,363][89187] Avg episode reward: [(0, '15.814')]
[2025-08-26 15:24:34,453][89774] Updated weights for policy 0, policy_version 670 (0.0006)
[2025-08-26 15:24:35,717][89774] Updated weights for policy 0, policy_version 680 (0.0006)
[2025-08-26 15:24:36,984][89774] Updated weights for policy 0, policy_version 690 (0.0006)
[2025-08-26 15:24:38,230][89774] Updated weights for policy 0, policy_version 700 (0.0006)
[2025-08-26 15:24:39,362][89187] Fps is (10 sec: 32358.0, 60 sec: 32153.5, 300 sec: 31812.2). Total num frames: 2899968. Throughput: 0: 8070.3. Samples: 715752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:24:39,363][89187] Avg episode reward: [(0, '20.151')]
[2025-08-26 15:24:39,366][89752] Saving new best policy, reward=20.151!
[2025-08-26 15:24:39,551][89774] Updated weights for policy 0, policy_version 710 (0.0007)
[2025-08-26 15:24:40,814][89774] Updated weights for policy 0, policy_version 720 (0.0007)
[2025-08-26 15:24:42,077][89774] Updated weights for policy 0, policy_version 730 (0.0006)
[2025-08-26 15:24:43,319][89774] Updated weights for policy 0, policy_version 740 (0.0007)
[2025-08-26 15:24:44,362][89187] Fps is (10 sec: 32358.5, 60 sec: 32221.9, 300 sec: 31862.6). Total num frames: 3063808. Throughput: 0: 8078.9. Samples: 764208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:24:44,363][89187] Avg episode reward: [(0, '19.523')]
[2025-08-26 15:24:44,587][89774] Updated weights for policy 0, policy_version 750 (0.0006)
[2025-08-26 15:24:45,831][89774] Updated weights for policy 0, policy_version 760 (0.0006)
[2025-08-26 15:24:47,093][89774] Updated weights for policy 0, policy_version 770 (0.0006)
[2025-08-26 15:24:48,361][89774] Updated weights for policy 0, policy_version 780 (0.0006)
[2025-08-26 15:24:49,362][89187] Fps is (10 sec: 32768.4, 60 sec: 32358.4, 300 sec: 31907.8). Total num frames: 3227648. Throughput: 0: 8083.1. Samples: 788638. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-08-26 15:24:49,363][89187] Avg episode reward: [(0, '20.350')]
[2025-08-26 15:24:49,366][89752] Saving new best policy, reward=20.350!
[2025-08-26 15:24:49,608][89774] Updated weights for policy 0, policy_version 790 (0.0006)
[2025-08-26 15:24:50,936][89774] Updated weights for policy 0, policy_version 800 (0.0006)
[2025-08-26 15:24:52,181][89774] Updated weights for policy 0, policy_version 810 (0.0007)
[2025-08-26 15:24:53,517][89774] Updated weights for policy 0, policy_version 820 (0.0006)
[2025-08-26 15:24:54,362][89187] Fps is (10 sec: 31948.8, 60 sec: 32221.9, 300 sec: 31870.8). Total num frames: 3383296. Throughput: 0: 8073.2. Samples: 836564. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:24:54,363][89187] Avg episode reward: [(0, '19.603')]
[2025-08-26 15:24:54,783][89774] Updated weights for policy 0, policy_version 830 (0.0008)
[2025-08-26 15:24:56,026][89774] Updated weights for policy 0, policy_version 840 (0.0006)
[2025-08-26 15:24:57,308][89774] Updated weights for policy 0, policy_version 850 (0.0007)
[2025-08-26 15:24:58,701][89774] Updated weights for policy 0, policy_version 860 (0.0006)
[2025-08-26 15:24:59,362][89187] Fps is (10 sec: 31539.2, 60 sec: 32221.9, 300 sec: 31874.3). Total num frames: 3543040. Throughput: 0: 8044.0. Samples: 884106. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-08-26 15:24:59,363][89187] Avg episode reward: [(0, '24.276')]
[2025-08-26 15:24:59,367][89752] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000865_3543040.pth...
[2025-08-26 15:24:59,416][89752] Saving new best policy, reward=24.276!
[2025-08-26 15:24:59,959][89774] Updated weights for policy 0, policy_version 870 (0.0006)
[2025-08-26 15:25:01,279][89774] Updated weights for policy 0, policy_version 880 (0.0008)
[2025-08-26 15:25:02,568][89774] Updated weights for policy 0, policy_version 890 (0.0006)
[2025-08-26 15:25:03,846][89774] Updated weights for policy 0, policy_version 900 (0.0006)
[2025-08-26 15:25:04,362][89187] Fps is (10 sec: 31129.4, 60 sec: 32085.3, 300 sec: 31806.3). Total num frames: 3694592. Throughput: 0: 8031.7. Samples: 907714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:25:04,363][89187] Avg episode reward: [(0, '20.412')]
[2025-08-26 15:25:06,416][89774] Updated weights for policy 0, policy_version 910 (0.0009)
[2025-08-26 15:25:08,489][89774] Updated weights for policy 0, policy_version 920 (0.0008)
[2025-08-26 15:25:09,362][89187] Fps is (10 sec: 24985.6, 60 sec: 30993.0, 300 sec: 31300.3). Total num frames: 3792896. Throughput: 0: 7663.3. Samples: 939626. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:25:09,363][89187] Avg episode reward: [(0, '19.739')]
[2025-08-26 15:25:09,946][89774] Updated weights for policy 0, policy_version 930 (0.0008)
[2025-08-26 15:25:11,516][89774] Updated weights for policy 0, policy_version 940 (0.0006)
[2025-08-26 15:25:13,022][89774] Updated weights for policy 0, policy_version 950 (0.0008)
[2025-08-26 15:25:14,362][89187] Fps is (10 sec: 22937.6, 60 sec: 30515.2, 300 sec: 31096.8). Total num frames: 3923968. Throughput: 0: 7504.4. Samples: 980314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-08-26 15:25:14,363][89187] Avg episode reward: [(0, '23.487')]
[2025-08-26 15:25:14,513][89774] Updated weights for policy 0, policy_version 960 (0.0007)
[2025-08-26 15:25:16,041][89774] Updated weights for policy 0, policy_version 970 (0.0007)
[2025-08-26 15:25:17,182][89752] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-08-26 15:25:17,184][89752] Stopping Batcher_0...
[2025-08-26 15:25:17,185][89752] Loop batcher_evt_loop terminating...
[2025-08-26 15:25:17,182][89187] Component Batcher_0 stopped!
[2025-08-26 15:25:17,199][89774] Weights refcount: 2 0
[2025-08-26 15:25:17,201][89774] Stopping InferenceWorker_p0-w0...
[2025-08-26 15:25:17,201][89774] Loop inference_proc0-0_evt_loop terminating...
[2025-08-26 15:25:17,201][89187] Component InferenceWorker_p0-w0 stopped!
[2025-08-26 15:25:17,211][89772] Stopping RolloutWorker_w7...
[2025-08-26 15:25:17,212][89772] Loop rollout_proc7_evt_loop terminating...
[2025-08-26 15:25:17,212][89187] Component RolloutWorker_w7 stopped!
[2025-08-26 15:25:17,220][89773] Stopping RolloutWorker_w6...
[2025-08-26 15:25:17,221][89767] Stopping RolloutWorker_w0...
[2025-08-26 15:25:17,220][89187] Component RolloutWorker_w6 stopped!
[2025-08-26 15:25:17,221][89773] Loop rollout_proc6_evt_loop terminating...
[2025-08-26 15:25:17,221][89767] Loop rollout_proc0_evt_loop terminating...
[2025-08-26 15:25:17,221][89187] Component RolloutWorker_w0 stopped!
[2025-08-26 15:25:17,227][89770] Stopping RolloutWorker_w4...
[2025-08-26 15:25:17,228][89770] Loop rollout_proc4_evt_loop terminating...
[2025-08-26 15:25:17,227][89187] Component RolloutWorker_w4 stopped!
[2025-08-26 15:25:17,235][89768] Stopping RolloutWorker_w2...
[2025-08-26 15:25:17,235][89187] Component RolloutWorker_w2 stopped!
[2025-08-26 15:25:17,235][89768] Loop rollout_proc2_evt_loop terminating...
[2025-08-26 15:25:17,235][89752] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-08-26 15:25:17,237][89771] Stopping RolloutWorker_w5...
[2025-08-26 15:25:17,237][89771] Loop rollout_proc5_evt_loop terminating...
[2025-08-26 15:25:17,237][89187] Component RolloutWorker_w5 stopped!
[2025-08-26 15:25:17,248][89766] Stopping RolloutWorker_w1...
[2025-08-26 15:25:17,248][89766] Loop rollout_proc1_evt_loop terminating...
[2025-08-26 15:25:17,248][89187] Component RolloutWorker_w1 stopped!
[2025-08-26 15:25:17,250][89769] Stopping RolloutWorker_w3...
[2025-08-26 15:25:17,251][89769] Loop rollout_proc3_evt_loop terminating...
[2025-08-26 15:25:17,254][89187] Component RolloutWorker_w3 stopped!
[2025-08-26 15:25:17,325][89752] Stopping LearnerWorker_p0...
[2025-08-26 15:25:17,325][89752] Loop learner_proc0_evt_loop terminating...
[2025-08-26 15:25:17,325][89187] Component LearnerWorker_p0 stopped!
[2025-08-26 15:25:17,326][89187] Waiting for process learner_proc0 to stop...
[2025-08-26 15:25:18,162][89187] Waiting for process inference_proc0-0 to join...
[2025-08-26 15:25:18,164][89187] Waiting for process rollout_proc0 to join...
[2025-08-26 15:25:18,165][89187] Waiting for process rollout_proc1 to join...
[2025-08-26 15:25:18,165][89187] Waiting for process rollout_proc2 to join...
[2025-08-26 15:25:18,166][89187] Waiting for process rollout_proc3 to join...
[2025-08-26 15:25:18,167][89187] Waiting for process rollout_proc4 to join...
[2025-08-26 15:25:18,167][89187] Waiting for process rollout_proc5 to join...
[2025-08-26 15:25:18,168][89187] Waiting for process rollout_proc6 to join...
[2025-08-26 15:25:18,168][89187] Waiting for process rollout_proc7 to join...
|
[2025-08-26 15:25:18,169][89187] Batcher 0 profile tree view: |
|
batching: 6.4040, releasing_batches: 0.0148 |
|
[2025-08-26 15:25:18,169][89187] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 3.8458 |
|
update_model: 2.0387 |
|
weight_update: 0.0007 |
|
one_step: 0.0011 |
|
handle_policy_step: 118.4660 |
|
deserialize: 5.4686, stack: 0.7081, obs_to_device_normalize: 25.7784, forward: 56.2610, send_messages: 8.1610 |
|
prepare_outputs: 16.4081 |
|
to_cpu: 10.2349 |
|
[2025-08-26 15:25:18,170][89187] Learner 0 profile tree view: |
|
misc: 0.0033, prepare_batch: 4.1677 |
|
train: 10.1227 |
|
epoch_init: 0.0031, minibatch_init: 0.0032, losses_postprocess: 0.1358, kl_divergence: 0.1723, after_optimizer: 1.4787 |
|
calculate_losses: 3.9020 |
|
losses_init: 0.0017, forward_head: 0.3282, bptt_initial: 2.0125, tail: 0.3131, advantages_returns: 0.0783, losses: 0.5483 |
|
bptt: 0.5381 |
|
bptt_forward_core: 0.5099 |
|
update: 4.2198 |
|
clip: 0.4218 |
|
[2025-08-26 15:25:18,170][89187] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0856, enqueue_policy_requests: 4.2614, env_step: 61.2950, overhead: 5.1159, complete_rollouts: 0.2467 |
|
save_policy_outputs: 4.9281 |
|
split_output_tensors: 2.4055 |
|
[2025-08-26 15:25:18,170][89187] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0830, enqueue_policy_requests: 4.2969, env_step: 61.4537, overhead: 5.0304, complete_rollouts: 0.3091 |
|
save_policy_outputs: 4.8909 |
|
split_output_tensors: 2.3940 |
|
[2025-08-26 15:25:18,171][89187] Loop Runner_EvtLoop terminating... |
|
[2025-08-26 15:25:18,171][89187] Runner profile tree view: |
|
main_loop: 136.4337 |
|
[2025-08-26 15:25:18,172][89187] Collected {0: 4005888}, FPS: 29361.4 |
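The final throughput figure is just total collected frames divided by the runner's `main_loop` time reported two lines up. A quick sanity check of the numbers in this log (pure arithmetic, no Sample Factory required):

```python
# Values taken from the log above: 4,005,888 frames collected,
# Runner main_loop: 136.4337 seconds.
frames = 4_005_888
main_loop_seconds = 136.4337

fps = frames / main_loop_seconds
print(round(fps, 1))  # agrees with the reported FPS of 29361.4
```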
|
[2025-08-26 15:25:18,258][89187] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:25:18,259][89187] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:25:18,259][89187] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:25:18,259][89187] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:25:18,260][89187] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:25:18,260][89187] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:25:18,260][89187] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:25:18,261][89187] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:25:18,261][89187] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:25:18,261][89187] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:25:18,262][89187] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:25:18,263][89187] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:25:18,263][89187] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:25:18,263][89187] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:25:18,264][89187] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:25:18,279][89187] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:25:18,280][89187] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:25:18,281][89187] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:25:18,287][89187] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:25:18,339][89187] Conv encoder output size: 512 |
|
[2025-08-26 15:25:18,339][89187] Policy head output size: 512 |
|
[2025-08-26 15:25:18,436][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:18,438][89187] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:25:18,440][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:18,440][89187] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:25:18,441][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:18,442][89187] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:25:46,325][89187] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:25:46,326][89187] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:25:46,326][89187] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:25:46,327][89187] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:25:46,327][89187] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:25:46,327][89187] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:25:46,328][89187] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:25:46,328][89187] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:25:46,329][89187] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:25:46,329][89187] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:25:46,329][89187] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:25:46,330][89187] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:25:46,330][89187] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:25:46,330][89187] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:25:46,331][89187] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:25:46,346][89187] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:25:46,347][89187] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:25:46,352][89187] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:25:46,379][89187] Conv encoder output size: 512 |
|
[2025-08-26 15:25:46,379][89187] Policy head output size: 512 |
|
[2025-08-26 15:25:46,391][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:46,392][89187] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:25:46,392][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:46,393][89187] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:25:46,394][89187] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:25:46,394][89187] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:38:04,937][104163] Saving configuration to /home/ubuntu/train_dir/default_experiment/config.json... |
|
[2025-08-26 15:38:04,938][104163] Rollout worker 0 uses device cpu |
|
[2025-08-26 15:38:04,939][104163] Rollout worker 1 uses device cpu |
|
[2025-08-26 15:38:04,939][104163] Rollout worker 2 uses device cpu |
|
[2025-08-26 15:38:04,940][104163] Rollout worker 3 uses device cpu |
|
[2025-08-26 15:38:04,940][104163] Rollout worker 4 uses device cpu |
|
[2025-08-26 15:38:04,940][104163] Rollout worker 5 uses device cpu |
|
[2025-08-26 15:38:04,941][104163] Rollout worker 6 uses device cpu |
|
[2025-08-26 15:38:04,941][104163] Rollout worker 7 uses device cpu |
|
[2025-08-26 15:38:04,980][104163] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:38:04,981][104163] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-26 15:38:05,010][104163] Starting all processes... |
|
[2025-08-26 15:38:05,010][104163] Starting process learner_proc0 |
|
[2025-08-26 15:38:05,060][104163] Starting all processes... |
|
[2025-08-26 15:38:05,063][104163] Starting process inference_proc0-0 |
|
[2025-08-26 15:38:05,064][104163] Starting process rollout_proc0 |
|
[2025-08-26 15:38:05,064][104163] Starting process rollout_proc1 |
|
[2025-08-26 15:38:05,064][104163] Starting process rollout_proc2 |
|
[2025-08-26 15:38:05,065][104163] Starting process rollout_proc3 |
|
[2025-08-26 15:38:05,066][104163] Starting process rollout_proc4 |
|
[2025-08-26 15:38:05,066][104163] Starting process rollout_proc5 |
|
[2025-08-26 15:38:05,069][104163] Starting process rollout_proc6 |
|
[2025-08-26 15:38:05,073][104163] Starting process rollout_proc7 |
|
[2025-08-26 15:38:07,205][104382] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:38:07,205][104382] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-08-26 15:38:07,210][104383] Worker 2 uses CPU cores [2] |
|
[2025-08-26 15:38:07,210][104368] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:38:07,210][104368] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-08-26 15:38:07,218][104382] Num visible devices: 1 |
|
[2025-08-26 15:38:07,223][104368] Num visible devices: 1 |
|
[2025-08-26 15:38:07,224][104368] Starting seed is not provided |
|
[2025-08-26 15:38:07,224][104368] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:38:07,224][104368] Initializing actor-critic model on device cuda:0 |
|
[2025-08-26 15:38:07,224][104368] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:38:07,225][104368] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:38:07,235][104368] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:38:07,236][104387] Worker 5 uses CPU cores [5] |
|
[2025-08-26 15:38:07,254][104384] Worker 3 uses CPU cores [3] |
|
[2025-08-26 15:38:07,278][104385] Worker 1 uses CPU cores [1] |
|
[2025-08-26 15:38:07,291][104395] Worker 6 uses CPU cores [6] |
|
[2025-08-26 15:38:07,302][104386] Worker 4 uses CPU cores [4] |
|
[2025-08-26 15:38:07,312][104368] Conv encoder output size: 512 |
|
[2025-08-26 15:38:07,312][104368] Policy head output size: 512 |
|
[2025-08-26 15:38:07,313][104381] Worker 0 uses CPU cores [0] |
|
[2025-08-26 15:38:07,320][104368] Created Actor Critic model with architecture: |
|
[2025-08-26 15:38:07,320][104368] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
|
[2025-08-26 15:38:07,368][104396] Worker 7 uses CPU cores [7] |
|
[2025-08-26 15:38:07,388][104368] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-08-26 15:38:07,937][104368] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:38:07,939][104368] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:38:07,939][104368] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:38:07,939][104368] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:38:07,940][104368] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:38:07,940][104368] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:38:07,940][104368] Did not load from checkpoint, starting from scratch! |
|
[2025-08-26 15:38:07,940][104368] Initialized policy 0 weights for model version 0 |
|
[2025-08-26 15:38:07,942][104368] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:38:07,942][104368] LearnerWorker_p0 finished initialization! |
|
[2025-08-26 15:38:07,994][104382] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:38:07,995][104382] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:38:08,003][104382] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:38:08,054][104382] Conv encoder output size: 512 |
|
[2025-08-26 15:38:08,055][104382] Policy head output size: 512 |
|
[2025-08-26 15:38:08,078][104163] Inference worker 0-0 is ready! |
|
[2025-08-26 15:38:08,079][104163] All inference workers are ready! Signal rollout workers to start! |
|
[2025-08-26 15:38:08,088][104396] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104395] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104384] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104387] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104386] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104383] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104385] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,088][104381] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:38:08,233][104383] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,234][104385] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,289][104396] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,289][104386] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,289][104387] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,291][104395] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,390][104383] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,422][104396] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,435][104386] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,469][104384] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,469][104385] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,558][104387] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,575][104396] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,616][104384] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,633][104385] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,681][104386] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,725][104383] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,728][104396] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:08,791][104387] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,816][104381] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:38:08,849][104385] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:08,928][104395] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,941][104383] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:08,945][104384] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:08,975][104381] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:38:08,975][104387] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:09,038][104386] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:09,113][104384] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:09,144][104395] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:09,303][104381] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:38:09,329][104395] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:09,502][104381] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:38:09,670][104368] Signal inference workers to stop experience collection... |
|
[2025-08-26 15:38:09,672][104382] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-08-26 15:38:10,387][104368] Signal inference workers to resume experience collection... |
|
[2025-08-26 15:38:10,388][104382] InferenceWorker_p0-w0: resuming experience collection |
|
[2025-08-26 15:38:11,482][104382] Updated weights for policy 0, policy_version 10 (0.0044) |
|
[2025-08-26 15:38:12,634][104163] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 77824. Throughput: 0: nan. Samples: 7656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-08-26 15:38:12,635][104163] Avg episode reward: [(0, '4.254')] |
|
[2025-08-26 15:38:12,762][104382] Updated weights for policy 0, policy_version 20 (0.0007) |
|
[2025-08-26 15:38:14,040][104382] Updated weights for policy 0, policy_version 30 (0.0007) |
|
[2025-08-26 15:38:15,334][104382] Updated weights for policy 0, policy_version 40 (0.0006) |
|
[2025-08-26 15:38:16,644][104382] Updated weights for policy 0, policy_version 50 (0.0006) |
|
[2025-08-26 15:38:17,634][104163] Fps is (10 sec: 31129.6, 60 sec: 31129.6, 300 sec: 31129.6). Total num frames: 233472. Throughput: 0: 9558.8. Samples: 55450. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:38:17,635][104163] Avg episode reward: [(0, '4.372')] |
|
[2025-08-26 15:38:17,646][104368] Saving new best policy, reward=4.372! |
|
[2025-08-26 15:38:17,912][104382] Updated weights for policy 0, policy_version 60 (0.0006) |
|
[2025-08-26 15:38:19,186][104382] Updated weights for policy 0, policy_version 70 (0.0006) |
|
[2025-08-26 15:38:20,454][104382] Updated weights for policy 0, policy_version 80 (0.0006) |
|
[2025-08-26 15:38:21,690][104382] Updated weights for policy 0, policy_version 90 (0.0006) |
|
[2025-08-26 15:38:22,634][104163] Fps is (10 sec: 31948.7, 60 sec: 31948.7, 300 sec: 31948.7). Total num frames: 397312. Throughput: 0: 7190.0. Samples: 79556. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-08-26 15:38:22,635][104163] Avg episode reward: [(0, '4.791')] |
|
[2025-08-26 15:38:22,638][104368] Saving new best policy, reward=4.791! |
|
[2025-08-26 15:38:23,018][104382] Updated weights for policy 0, policy_version 100 (0.0006) |
|
[2025-08-26 15:38:24,266][104382] Updated weights for policy 0, policy_version 110 (0.0007) |
|
[2025-08-26 15:38:24,972][104163] Heartbeat connected on Batcher_0 |
|
[2025-08-26 15:38:24,976][104163] Heartbeat connected on LearnerWorker_p0 |
|
[2025-08-26 15:38:24,982][104163] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-08-26 15:38:24,985][104163] Heartbeat connected on RolloutWorker_w0 |
|
[2025-08-26 15:38:24,993][104163] Heartbeat connected on RolloutWorker_w2 |
|
[2025-08-26 15:38:24,997][104163] Heartbeat connected on RolloutWorker_w1 |
|
[2025-08-26 15:38:24,999][104163] Heartbeat connected on RolloutWorker_w3 |
|
[2025-08-26 15:38:25,000][104163] Heartbeat connected on RolloutWorker_w4 |
|
[2025-08-26 15:38:25,003][104163] Heartbeat connected on RolloutWorker_w5 |
|
[2025-08-26 15:38:25,007][104163] Heartbeat connected on RolloutWorker_w6 |
|
[2025-08-26 15:38:25,009][104163] Heartbeat connected on RolloutWorker_w7 |
|
[2025-08-26 15:38:25,574][104382] Updated weights for policy 0, policy_version 120 (0.0006) |
|
[2025-08-26 15:38:26,840][104382] Updated weights for policy 0, policy_version 130 (0.0008) |
|
[2025-08-26 15:38:27,634][104163] Fps is (10 sec: 32358.5, 60 sec: 31948.9, 300 sec: 31948.9). Total num frames: 557056. Throughput: 0: 7992.1. Samples: 127538. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:38:27,635][104163] Avg episode reward: [(0, '4.507')] |
|
[2025-08-26 15:38:28,117][104382] Updated weights for policy 0, policy_version 140 (0.0008) |
|
[2025-08-26 15:38:29,377][104382] Updated weights for policy 0, policy_version 150 (0.0006) |
|
[2025-08-26 15:38:30,652][104382] Updated weights for policy 0, policy_version 160 (0.0006) |
|
[2025-08-26 15:38:31,912][104382] Updated weights for policy 0, policy_version 170 (0.0006) |
|
[2025-08-26 15:38:32,634][104163] Fps is (10 sec: 31948.9, 60 sec: 31948.8, 300 sec: 31948.8). Total num frames: 716800. Throughput: 0: 8420.7. Samples: 176070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:38:32,635][104163] Avg episode reward: [(0, '4.768')] |
|
[2025-08-26 15:38:33,192][104382] Updated weights for policy 0, policy_version 180 (0.0007) |
|
[2025-08-26 15:38:34,427][104382] Updated weights for policy 0, policy_version 190 (0.0006) |
|
[2025-08-26 15:38:35,749][104382] Updated weights for policy 0, policy_version 200 (0.0009) |
|
[2025-08-26 15:38:37,057][104382] Updated weights for policy 0, policy_version 210 (0.0006) |
|
[2025-08-26 15:38:37,634][104163] Fps is (10 sec: 31948.6, 60 sec: 31948.8, 300 sec: 31948.8). Total num frames: 876544. Throughput: 0: 7704.1. Samples: 200258. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:38:37,635][104163] Avg episode reward: [(0, '5.803')] |
|
[2025-08-26 15:38:37,636][104368] Saving new best policy, reward=5.803! |
|
[2025-08-26 15:38:38,333][104382] Updated weights for policy 0, policy_version 220 (0.0007) |
|
[2025-08-26 15:38:39,601][104382] Updated weights for policy 0, policy_version 230 (0.0006) |
|
[2025-08-26 15:38:40,881][104382] Updated weights for policy 0, policy_version 240 (0.0007) |
|
[2025-08-26 15:38:42,136][104382] Updated weights for policy 0, policy_version 250 (0.0008) |
|
[2025-08-26 15:38:42,634][104163] Fps is (10 sec: 31948.9, 60 sec: 31948.8, 300 sec: 31948.8). Total num frames: 1036288. Throughput: 0: 8015.1. Samples: 248110. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-26 15:38:42,635][104163] Avg episode reward: [(0, '7.211')] |
|
[2025-08-26 15:38:42,654][104368] Saving new best policy, reward=7.211! |
|
[2025-08-26 15:38:43,412][104382] Updated weights for policy 0, policy_version 260 (0.0006) |
|
[2025-08-26 15:38:44,649][104382] Updated weights for policy 0, policy_version 270 (0.0007) |
|
[2025-08-26 15:38:45,934][104382] Updated weights for policy 0, policy_version 280 (0.0006) |
|
[2025-08-26 15:38:47,213][104382] Updated weights for policy 0, policy_version 290 (0.0007) |
|
[2025-08-26 15:38:47,634][104163] Fps is (10 sec: 32358.5, 60 sec: 32065.8, 300 sec: 32065.8). Total num frames: 1200128. Throughput: 0: 8253.6. Samples: 296532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-26 15:38:47,636][104163] Avg episode reward: [(0, '8.603')] |
|
[2025-08-26 15:38:47,637][104368] Saving new best policy, reward=8.603! |
|
[2025-08-26 15:38:48,516][104382] Updated weights for policy 0, policy_version 300 (0.0006) |
|
[2025-08-26 15:38:49,760][104382] Updated weights for policy 0, policy_version 310 (0.0007) |
|
[2025-08-26 15:38:51,053][104382] Updated weights for policy 0, policy_version 320 (0.0006) |
|
[2025-08-26 15:38:52,322][104382] Updated weights for policy 0, policy_version 330 (0.0006) |
|
[2025-08-26 15:38:52,634][104163] Fps is (10 sec: 32358.1, 60 sec: 32051.1, 300 sec: 32051.1). Total num frames: 1359872. Throughput: 0: 7820.9. Samples: 320492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:38:52,636][104163] Avg episode reward: [(0, '10.093')] |
|
[2025-08-26 15:38:52,638][104368] Saving new best policy, reward=10.093! |
|
[2025-08-26 15:38:53,601][104382] Updated weights for policy 0, policy_version 340 (0.0007) |
|
[2025-08-26 15:38:54,865][104382] Updated weights for policy 0, policy_version 350 (0.0007) |
|
[2025-08-26 15:38:56,138][104382] Updated weights for policy 0, policy_version 360 (0.0007) |
|
[2025-08-26 15:38:57,392][104382] Updated weights for policy 0, policy_version 370 (0.0006) |
|
[2025-08-26 15:38:57,634][104163] Fps is (10 sec: 31948.9, 60 sec: 32039.9, 300 sec: 32039.9). Total num frames: 1519616. Throughput: 0: 8028.1. Samples: 368922. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:38:57,635][104163] Avg episode reward: [(0, '12.715')] |
|
[2025-08-26 15:38:57,645][104368] Saving new best policy, reward=12.715! |
|
[2025-08-26 15:38:58,691][104382] Updated weights for policy 0, policy_version 380 (0.0006) |
|
[2025-08-26 15:38:59,966][104382] Updated weights for policy 0, policy_version 390 (0.0007) |
|
[2025-08-26 15:39:01,246][104382] Updated weights for policy 0, policy_version 400 (0.0006) |
|
[2025-08-26 15:39:02,501][104382] Updated weights for policy 0, policy_version 410 (0.0006) |
|
[2025-08-26 15:39:02,634][104163] Fps is (10 sec: 32358.7, 60 sec: 32112.6, 300 sec: 32112.6). Total num frames: 1683456. Throughput: 0: 8035.7. Samples: 417058. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:02,635][104163] Avg episode reward: [(0, '14.037')] |
|
[2025-08-26 15:39:02,638][104368] Saving new best policy, reward=14.037! |
|
[2025-08-26 15:39:03,776][104382] Updated weights for policy 0, policy_version 420 (0.0006) |
|
[2025-08-26 15:39:05,041][104382] Updated weights for policy 0, policy_version 430 (0.0006) |
|
[2025-08-26 15:39:06,293][104382] Updated weights for policy 0, policy_version 440 (0.0006) |
|
[2025-08-26 15:39:07,574][104382] Updated weights for policy 0, policy_version 450 (0.0007) |
|
[2025-08-26 15:39:07,634][104163] Fps is (10 sec: 32358.0, 60 sec: 32097.7, 300 sec: 32097.7). Total num frames: 1843200. Throughput: 0: 8039.5. Samples: 441334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:07,635][104163] Avg episode reward: [(0, '15.014')] |
|
[2025-08-26 15:39:07,636][104368] Saving new best policy, reward=15.014! |
|
[2025-08-26 15:39:08,859][104382] Updated weights for policy 0, policy_version 460 (0.0006) |
|
[2025-08-26 15:39:10,173][104382] Updated weights for policy 0, policy_version 470 (0.0006) |
|
[2025-08-26 15:39:11,439][104382] Updated weights for policy 0, policy_version 480 (0.0007) |
|
[2025-08-26 15:39:12,634][104163] Fps is (10 sec: 31948.6, 60 sec: 32085.3, 300 sec: 32085.3). Total num frames: 2002944. Throughput: 0: 8039.2. Samples: 489302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:12,636][104163] Avg episode reward: [(0, '17.896')] |
|
[2025-08-26 15:39:12,639][104368] Saving new best policy, reward=17.896! |
|
[2025-08-26 15:39:12,720][104382] Updated weights for policy 0, policy_version 490 (0.0006) |
|
[2025-08-26 15:39:13,967][104382] Updated weights for policy 0, policy_version 500 (0.0008) |
|
[2025-08-26 15:39:15,242][104382] Updated weights for policy 0, policy_version 510 (0.0006) |
|
[2025-08-26 15:39:16,497][104382] Updated weights for policy 0, policy_version 520 (0.0006) |
|
[2025-08-26 15:39:17,634][104163] Fps is (10 sec: 31949.1, 60 sec: 32153.6, 300 sec: 32074.8). Total num frames: 2162688. Throughput: 0: 8038.4. Samples: 537796. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:39:17,635][104163] Avg episode reward: [(0, '18.098')] |
|
[2025-08-26 15:39:17,636][104368] Saving new best policy, reward=18.098! |
|
[2025-08-26 15:39:17,778][104382] Updated weights for policy 0, policy_version 530 (0.0007) |
|
[2025-08-26 15:39:19,064][104382] Updated weights for policy 0, policy_version 540 (0.0007) |
|
[2025-08-26 15:39:20,351][104382] Updated weights for policy 0, policy_version 550 (0.0006) |
|
[2025-08-26 15:39:21,645][104382] Updated weights for policy 0, policy_version 560 (0.0007) |
|
[2025-08-26 15:39:22,634][104163] Fps is (10 sec: 31948.9, 60 sec: 32085.4, 300 sec: 32065.8). Total num frames: 2322432. Throughput: 0: 8032.7. Samples: 561728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-26 15:39:22,635][104163] Avg episode reward: [(0, '17.572')] |
|
[2025-08-26 15:39:22,900][104382] Updated weights for policy 0, policy_version 570 (0.0006) |
|
[2025-08-26 15:39:24,175][104382] Updated weights for policy 0, policy_version 580 (0.0007) |
|
[2025-08-26 15:39:25,438][104382] Updated weights for policy 0, policy_version 590 (0.0007) |
|
[2025-08-26 15:39:26,719][104382] Updated weights for policy 0, policy_version 600 (0.0006) |
|
[2025-08-26 15:39:27,634][104163] Fps is (10 sec: 32358.3, 60 sec: 32153.6, 300 sec: 32112.6). Total num frames: 2486272. Throughput: 0: 8038.7. Samples: 609850. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:39:27,635][104163] Avg episode reward: [(0, '18.465')] |
|
[2025-08-26 15:39:27,636][104368] Saving new best policy, reward=18.465! |
|
[2025-08-26 15:39:28,011][104382] Updated weights for policy 0, policy_version 610 (0.0007) |
|
[2025-08-26 15:39:29,306][104382] Updated weights for policy 0, policy_version 620 (0.0007) |
|
[2025-08-26 15:39:30,598][104382] Updated weights for policy 0, policy_version 630 (0.0006) |
|
[2025-08-26 15:39:31,889][104382] Updated weights for policy 0, policy_version 640 (0.0006) |
|
[2025-08-26 15:39:32,634][104163] Fps is (10 sec: 31948.8, 60 sec: 32085.3, 300 sec: 32051.2). Total num frames: 2641920. Throughput: 0: 8021.4. Samples: 657494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-26 15:39:32,635][104163] Avg episode reward: [(0, '20.794')] |
|
[2025-08-26 15:39:32,638][104368] Saving new best policy, reward=20.794! |
|
[2025-08-26 15:39:33,174][104382] Updated weights for policy 0, policy_version 650 (0.0008) |
|
[2025-08-26 15:39:34,441][104382] Updated weights for policy 0, policy_version 660 (0.0006) |
|
[2025-08-26 15:39:35,688][104382] Updated weights for policy 0, policy_version 670 (0.0006) |
|
[2025-08-26 15:39:36,989][104382] Updated weights for policy 0, policy_version 680 (0.0006) |
|
[2025-08-26 15:39:37,634][104163] Fps is (10 sec: 31948.5, 60 sec: 32153.6, 300 sec: 32093.3). Total num frames: 2805760. Throughput: 0: 8024.4. Samples: 681592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:37,635][104163] Avg episode reward: [(0, '24.101')] |
|
[2025-08-26 15:39:37,636][104368] Saving new best policy, reward=24.101! |
|
[2025-08-26 15:39:38,265][104382] Updated weights for policy 0, policy_version 690 (0.0006) |
|
[2025-08-26 15:39:39,534][104382] Updated weights for policy 0, policy_version 700 (0.0007) |
|
[2025-08-26 15:39:40,801][104382] Updated weights for policy 0, policy_version 710 (0.0007) |
|
[2025-08-26 15:39:42,100][104382] Updated weights for policy 0, policy_version 720 (0.0006) |
|
[2025-08-26 15:39:42,634][104163] Fps is (10 sec: 32358.4, 60 sec: 32153.6, 300 sec: 32085.3). Total num frames: 2965504. Throughput: 0: 8017.8. Samples: 729722. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:39:42,635][104163] Avg episode reward: [(0, '19.267')] |
|
[2025-08-26 15:39:43,393][104382] Updated weights for policy 0, policy_version 730 (0.0006) |
|
[2025-08-26 15:39:44,674][104382] Updated weights for policy 0, policy_version 740 (0.0006) |
|
[2025-08-26 15:39:45,937][104382] Updated weights for policy 0, policy_version 750 (0.0008) |
|
[2025-08-26 15:39:47,222][104382] Updated weights for policy 0, policy_version 760 (0.0006) |
|
[2025-08-26 15:39:47,634][104163] Fps is (10 sec: 31949.1, 60 sec: 32085.3, 300 sec: 32078.1). Total num frames: 3125248. Throughput: 0: 8016.0. Samples: 777778. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-08-26 15:39:47,635][104163] Avg episode reward: [(0, '18.555')] |
|
[2025-08-26 15:39:48,492][104382] Updated weights for policy 0, policy_version 770 (0.0007) |
|
[2025-08-26 15:39:49,757][104382] Updated weights for policy 0, policy_version 780 (0.0007) |
|
[2025-08-26 15:39:51,032][104382] Updated weights for policy 0, policy_version 790 (0.0008) |
|
[2025-08-26 15:39:52,333][104382] Updated weights for policy 0, policy_version 800 (0.0007) |
|
[2025-08-26 15:39:52,634][104163] Fps is (10 sec: 31948.7, 60 sec: 32085.4, 300 sec: 32071.7). Total num frames: 3284992. Throughput: 0: 8011.7. Samples: 801858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:52,635][104163] Avg episode reward: [(0, '22.071')] |
|
[2025-08-26 15:39:53,626][104382] Updated weights for policy 0, policy_version 810 (0.0006) |
|
[2025-08-26 15:39:54,893][104382] Updated weights for policy 0, policy_version 820 (0.0008) |
|
[2025-08-26 15:39:56,197][104382] Updated weights for policy 0, policy_version 830 (0.0006) |
|
[2025-08-26 15:39:57,522][104382] Updated weights for policy 0, policy_version 840 (0.0007) |
|
[2025-08-26 15:39:57,634][104163] Fps is (10 sec: 31539.1, 60 sec: 32017.0, 300 sec: 32026.8). Total num frames: 3440640. Throughput: 0: 8011.5. Samples: 849818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:39:57,635][104163] Avg episode reward: [(0, '22.008')] |
|
[2025-08-26 15:39:58,764][104382] Updated weights for policy 0, policy_version 850 (0.0007) |
|
[2025-08-26 15:40:00,025][104382] Updated weights for policy 0, policy_version 860 (0.0007) |
|
[2025-08-26 15:40:01,297][104382] Updated weights for policy 0, policy_version 870 (0.0006) |
|
[2025-08-26 15:40:02,541][104382] Updated weights for policy 0, policy_version 880 (0.0006) |
|
[2025-08-26 15:40:02,634][104163] Fps is (10 sec: 31948.7, 60 sec: 32017.0, 300 sec: 32060.5). Total num frames: 3604480. Throughput: 0: 8003.2. Samples: 897940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-08-26 15:40:02,635][104163] Avg episode reward: [(0, '21.884')] |
|
[2025-08-26 15:40:02,639][104368] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth... |
|
[2025-08-26 15:40:02,690][104368] Removing /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000865_3543040.pth |
|
[2025-08-26 15:40:03,850][104382] Updated weights for policy 0, policy_version 890 (0.0006) |
|
[2025-08-26 15:40:05,133][104382] Updated weights for policy 0, policy_version 900 (0.0006) |
|
[2025-08-26 15:40:06,397][104382] Updated weights for policy 0, policy_version 910 (0.0006) |
|
[2025-08-26 15:40:07,634][104163] Fps is (10 sec: 32358.6, 60 sec: 32017.1, 300 sec: 32055.7). Total num frames: 3764224. Throughput: 0: 8002.6. Samples: 921846. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-08-26 15:40:07,635][104163] Avg episode reward: [(0, '22.428')] |
|
[2025-08-26 15:40:07,656][104382] Updated weights for policy 0, policy_version 920 (0.0007) |
|
[2025-08-26 15:40:08,909][104382] Updated weights for policy 0, policy_version 930 (0.0006) |
|
[2025-08-26 15:40:10,187][104382] Updated weights for policy 0, policy_version 940 (0.0006) |
|
[2025-08-26 15:40:11,440][104382] Updated weights for policy 0, policy_version 950 (0.0006) |
|
[2025-08-26 15:40:12,634][104163] Fps is (10 sec: 32358.4, 60 sec: 32085.3, 300 sec: 32085.3). Total num frames: 3928064. Throughput: 0: 8014.8. Samples: 970518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-08-26 15:40:12,635][104163] Avg episode reward: [(0, '21.595')] |
|
[2025-08-26 15:40:12,723][104382] Updated weights for policy 0, policy_version 960 (0.0006) |
|
[2025-08-26 15:40:13,972][104382] Updated weights for policy 0, policy_version 970 (0.0006) |
|
[2025-08-26 15:40:14,967][104368] Stopping Batcher_0... |
|
[2025-08-26 15:40:14,968][104368] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:14,968][104368] Loop batcher_evt_loop terminating... |
|
[2025-08-26 15:40:14,972][104163] Component Batcher_0 stopped! |
|
[2025-08-26 15:40:14,991][104395] Stopping RolloutWorker_w6... |
|
[2025-08-26 15:40:14,991][104382] Weights refcount: 2 0 |
|
[2025-08-26 15:40:14,992][104395] Loop rollout_proc6_evt_loop terminating... |
|
[2025-08-26 15:40:14,992][104382] Stopping InferenceWorker_p0-w0... |
|
[2025-08-26 15:40:14,993][104382] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-26 15:40:14,993][104384] Stopping RolloutWorker_w3... |
|
[2025-08-26 15:40:14,993][104384] Loop rollout_proc3_evt_loop terminating... |
|
[2025-08-26 15:40:14,993][104383] Stopping RolloutWorker_w2... |
|
[2025-08-26 15:40:14,993][104383] Loop rollout_proc2_evt_loop terminating... |
|
[2025-08-26 15:40:14,995][104381] Stopping RolloutWorker_w0... |
|
[2025-08-26 15:40:14,995][104381] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-26 15:40:14,997][104396] Stopping RolloutWorker_w7... |
|
[2025-08-26 15:40:14,997][104396] Loop rollout_proc7_evt_loop terminating... |
|
[2025-08-26 15:40:14,995][104163] Component RolloutWorker_w6 stopped! |
|
[2025-08-26 15:40:14,999][104163] Component InferenceWorker_p0-w0 stopped! |
|
[2025-08-26 15:40:15,000][104163] Component RolloutWorker_w3 stopped! |
|
[2025-08-26 15:40:15,000][104163] Component RolloutWorker_w2 stopped! |
|
[2025-08-26 15:40:15,001][104386] Stopping RolloutWorker_w4... |
|
[2025-08-26 15:40:15,001][104386] Loop rollout_proc4_evt_loop terminating... |
|
[2025-08-26 15:40:15,001][104163] Component RolloutWorker_w0 stopped! |
|
[2025-08-26 15:40:15,001][104163] Component RolloutWorker_w7 stopped! |
|
[2025-08-26 15:40:15,002][104163] Component RolloutWorker_w4 stopped! |
|
[2025-08-26 15:40:15,030][104387] Stopping RolloutWorker_w5... |
|
[2025-08-26 15:40:15,030][104387] Loop rollout_proc5_evt_loop terminating... |
|
[2025-08-26 15:40:15,030][104163] Component RolloutWorker_w5 stopped! |
|
[2025-08-26 15:40:15,048][104368] Saving new best policy, reward=24.907! |
|
[2025-08-26 15:40:15,112][104385] Stopping RolloutWorker_w1... |
|
[2025-08-26 15:40:15,113][104385] Loop rollout_proc1_evt_loop terminating... |
|
[2025-08-26 15:40:15,112][104163] Component RolloutWorker_w1 stopped! |
|
[2025-08-26 15:40:15,109][104368] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:15,182][104368] Stopping LearnerWorker_p0... |
|
[2025-08-26 15:40:15,182][104368] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-26 15:40:15,182][104163] Component LearnerWorker_p0 stopped! |
|
[2025-08-26 15:40:15,184][104163] Waiting for process learner_proc0 to stop... |
|
[2025-08-26 15:40:15,946][104163] Waiting for process inference_proc0-0 to join... |
|
[2025-08-26 15:40:15,947][104163] Waiting for process rollout_proc0 to join... |
|
[2025-08-26 15:40:15,948][104163] Waiting for process rollout_proc1 to join... |
|
[2025-08-26 15:40:15,949][104163] Waiting for process rollout_proc2 to join... |
|
[2025-08-26 15:40:15,950][104163] Waiting for process rollout_proc3 to join... |
|
[2025-08-26 15:40:15,950][104163] Waiting for process rollout_proc4 to join... |
|
[2025-08-26 15:40:15,951][104163] Waiting for process rollout_proc5 to join... |
|
[2025-08-26 15:40:15,951][104163] Waiting for process rollout_proc6 to join... |
|
[2025-08-26 15:40:15,952][104163] Waiting for process rollout_proc7 to join... |
|
[2025-08-26 15:40:15,952][104163] Batcher 0 profile tree view: |
|
batching: 6.5070, releasing_batches: 0.0151 |
|
[2025-08-26 15:40:15,953][104163] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 2.8934 |
|
update_model: 1.9419 |
|
weight_update: 0.0006 |
|
one_step: 0.0012 |
|
handle_policy_step: 114.8844 |
|
deserialize: 5.4740, stack: 0.6782, obs_to_device_normalize: 25.2069, forward: 54.2009, send_messages: 8.0795 |
|
prepare_outputs: 15.7530 |
|
to_cpu: 9.7958 |
|
[2025-08-26 15:40:15,953][104163] Learner 0 profile tree view: |
|
misc: 0.0034, prepare_batch: 3.9240 |
|
train: 9.6528 |
|
epoch_init: 0.0031, minibatch_init: 0.0033, losses_postprocess: 0.1340, kl_divergence: 0.1665, after_optimizer: 1.3548 |
|
calculate_losses: 3.8251 |
|
losses_init: 0.0018, forward_head: 0.3181, bptt_initial: 1.9405, tail: 0.3149, advantages_returns: 0.0797, losses: 0.5454 |
|
bptt: 0.5430 |
|
bptt_forward_core: 0.5146 |
|
update: 3.9640 |
|
clip: 0.4258 |
|
[2025-08-26 15:40:15,954][104163] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0902, enqueue_policy_requests: 4.0846, env_step: 57.9990, overhead: 4.8994, complete_rollouts: 0.2964 |
|
save_policy_outputs: 4.7429 |
|
split_output_tensors: 2.3279 |
|
[2025-08-26 15:40:15,954][104163] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.1038, enqueue_policy_requests: 4.1550, env_step: 59.5275, overhead: 4.9723, complete_rollouts: 0.2782 |
|
save_policy_outputs: 4.8431 |
|
split_output_tensors: 2.3471 |
|
[2025-08-26 15:40:15,955][104163] Loop Runner_EvtLoop terminating... |
|
[2025-08-26 15:40:15,956][104163] Runner profile tree view: |
|
main_loop: 130.9461 |
|
[2025-08-26 15:40:15,956][104163] Collected {0: 4005888}, FPS: 30591.9 |
|
[2025-08-26 15:40:16,035][104163] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:40:16,035][104163] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:40:16,036][104163] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:40:16,036][104163] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:40:16,037][104163] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:40:16,037][104163] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:40:16,038][104163] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:40:16,039][104163] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:40:16,039][104163] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:40:16,039][104163] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:40:16,040][104163] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:40:16,040][104163] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:40:16,041][104163] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:40:16,041][104163] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:40:16,041][104163] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:40:16,048][104163] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:40:16,049][104163] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:40:16,049][104163] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:40:16,056][104163] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:40:16,114][104163] Conv encoder output size: 512 |
|
[2025-08-26 15:40:16,116][104163] Policy head output size: 512 |
|
[2025-08-26 15:40:16,204][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:16,206][104163] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:40:16,207][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:16,208][104163] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:40:16,208][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:16,209][104163] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
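The repeated failure above is the PyTorch 2.6 `weights_only=True` default in `torch.load` rejecting the `numpy.core.multiarray.scalar` global pickled inside the checkpoint. A minimal sketch of the two fixes the error message describes, assuming the checkpoint file is trusted (the path and dict key below are illustrative, not taken from this run):

```python
import numpy as np
import torch

# Reproduce the situation: a numpy scalar inside the saved dict is what
# trips the weights-only unpickler under PyTorch >= 2.6.
path = "demo_checkpoint.pth"
torch.save({"reward": np.float64(24.907)}, path)

# Option 1 (trusted source only): opt out of the weights-only check.
# This is what the error message calls "arbitrary code execution" risk,
# so do it only for files you produced yourself, like this training run's.
ckpt = torch.load(path, map_location="cpu", weights_only=False)
print(ckpt["reward"])  # 24.907

# Option 2 (PyTorch >= 2.4): allowlist just the offending global and keep
# weights_only=True, as the error message itself suggests:
# torch.serialization.add_safe_globals([np.core.multiarray.scalar])
# ckpt = torch.load(path, map_location="cpu")
```

For Sample Factory specifically, the cleaner long-term fix is Option 2 applied once at startup, since it keeps the safe-loading default for everything else.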
|
[2025-08-26 15:40:21,806][104163] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:40:21,807][104163] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:40:21,807][104163] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:40:21,808][104163] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:40:21,808][104163] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:40:21,809][104163] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:40:21,809][104163] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:40:21,810][104163] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:40:21,810][104163] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:40:21,810][104163] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:40:21,811][104163] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:40:21,812][104163] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:40:21,812][104163] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:40:21,812][104163] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:40:21,813][104163] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:40:21,819][104163] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:40:21,820][104163] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:40:21,825][104163] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:40:21,848][104163] Conv encoder output size: 512 |
|
[2025-08-26 15:40:21,849][104163] Policy head output size: 512 |
|
[2025-08-26 15:40:21,856][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:21,857][104163] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:40:21,858][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:21,858][104163] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:40:21,859][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:40:21,859][104163] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:42:27,844][104163] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:42:27,845][104163] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:42:27,845][104163] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:42:27,846][104163] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:42:27,846][104163] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:42:27,847][104163] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:42:27,847][104163] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:42:27,848][104163] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:42:27,848][104163] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:42:27,849][104163] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:42:27,849][104163] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:42:27,849][104163] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:42:27,850][104163] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:42:27,850][104163] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:42:27,851][104163] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:42:27,857][104163] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:42:27,857][104163] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:42:27,864][104163] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:42:27,886][104163] Conv encoder output size: 512 |
|
[2025-08-26 15:42:27,887][104163] Policy head output size: 512 |
|
[2025-08-26 15:42:27,900][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:42:27,901][104163] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:42:27,902][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:42:27,903][104163] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:42:27,903][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:42:27,904][104163] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:45:09,707][104163] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:45:09,708][104163] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:45:09,708][104163] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:45:09,709][104163] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:45:09,709][104163] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:45:09,710][104163] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:45:09,710][104163] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:45:09,710][104163] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:45:09,711][104163] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:45:09,711][104163] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:45:09,712][104163] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:45:09,712][104163] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:45:09,713][104163] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:45:09,713][104163] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:45:09,714][104163] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:45:09,718][104163] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:45:09,719][104163] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:45:09,725][104163] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:45:09,751][104163] Conv encoder output size: 512 |
|
[2025-08-26 15:45:09,752][104163] Policy head output size: 512 |
|
[2025-08-26 15:45:09,763][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:45:09,764][104163] Could not load from checkpoint, attempt 0 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
(storage_offset,) = struct.unpack("<q", f.read(8)) |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:45:09,765][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:45:09,766][104163] Could not load from checkpoint, attempt 1 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
(storage_offset,) = struct.unpack("<q", f.read(8)) |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:45:09,767][104163] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:45:09,768][104163] Could not load from checkpoint, attempt 2 |
|
Traceback (most recent call last): |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint |
|
checkpoint_dict = torch.load(latest_checkpoint, map_location=device) |
|
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/serialization.py", line 1529, in load |
|
(storage_offset,) = struct.unpack("<q", f.read(8)) |
|
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. |
|
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. |
|
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message. |
|
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function. |
|
|
|
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html. |
|
[2025-08-26 15:45:26,919][120654] Saving configuration to /home/ubuntu/train_dir/default_experiment/config.json... |
|
[2025-08-26 15:45:26,932][120654] Rollout worker 0 uses device cpu |
|
[2025-08-26 15:45:26,934][120654] Rollout worker 1 uses device cpu |
|
[2025-08-26 15:45:26,935][120654] Rollout worker 2 uses device cpu |
|
[2025-08-26 15:45:26,935][120654] Rollout worker 3 uses device cpu |
|
[2025-08-26 15:45:26,935][120654] Rollout worker 4 uses device cpu |
|
[2025-08-26 15:45:26,936][120654] Rollout worker 5 uses device cpu |
|
[2025-08-26 15:45:26,936][120654] Rollout worker 6 uses device cpu |
|
[2025-08-26 15:45:26,937][120654] Rollout worker 7 uses device cpu |
|
[2025-08-26 15:45:26,980][120654] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:45:26,981][120654] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-08-26 15:45:27,010][120654] Starting all processes... |
|
[2025-08-26 15:45:27,011][120654] Starting process learner_proc0 |
|
[2025-08-26 15:45:27,060][120654] Starting all processes... |
|
[2025-08-26 15:45:27,064][120654] Starting process inference_proc0-0 |
|
[2025-08-26 15:45:27,064][120654] Starting process rollout_proc0 |
|
[2025-08-26 15:45:27,065][120654] Starting process rollout_proc1 |
|
[2025-08-26 15:45:27,065][120654] Starting process rollout_proc2 |
|
[2025-08-26 15:45:27,065][120654] Starting process rollout_proc3 |
|
[2025-08-26 15:45:27,065][120654] Starting process rollout_proc4 |
|
[2025-08-26 15:45:27,066][120654] Starting process rollout_proc5 |
|
[2025-08-26 15:45:27,066][120654] Starting process rollout_proc6 |
|
[2025-08-26 15:45:27,066][120654] Starting process rollout_proc7 |
|
[2025-08-26 15:45:29,194][120865] Worker 3 uses CPU cores [3] |
|
[2025-08-26 15:45:29,230][120866] Worker 4 uses CPU cores [4] |
|
[2025-08-26 15:45:29,254][120867] Worker 5 uses CPU cores [5] |
|
[2025-08-26 15:45:29,323][120848] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:45:29,323][120848] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-08-26 15:45:29,336][120848] Num visible devices: 1 |
|
[2025-08-26 15:45:29,342][120876] Worker 6 uses CPU cores [6] |
|
[2025-08-26 15:45:29,383][120848] Starting seed is not provided |
|
[2025-08-26 15:45:29,383][120848] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:45:29,383][120848] Initializing actor-critic model on device cuda:0 |
|
[2025-08-26 15:45:29,383][120848] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:45:29,384][120848] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:45:29,391][120848] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:45:29,398][120863] Worker 1 uses CPU cores [1] |
|
[2025-08-26 15:45:29,414][120864] Worker 2 uses CPU cores [2] |
|
[2025-08-26 15:45:29,437][120862] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:45:29,437][120862] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-08-26 15:45:29,445][120848] Conv encoder output size: 512 |
|
[2025-08-26 15:45:29,445][120848] Policy head output size: 512 |
|
[2025-08-26 15:45:29,451][120862] Num visible devices: 1 |
|
[2025-08-26 15:45:29,454][120848] Created Actor Critic model with architecture: |
|
[2025-08-26 15:45:29,454][120848] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
|
[2025-08-26 15:45:29,511][120861] Worker 0 uses CPU cores [0] |
|
[2025-08-26 15:45:29,526][120848] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-08-26 15:45:29,589][120868] Worker 7 uses CPU cores [7] |
|
[2025-08-26 15:45:30,058][120848] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2025-08-26 15:45:30,075][120848] Loading model from checkpoint |
|
[2025-08-26 15:45:30,076][120848] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
|
[2025-08-26 15:45:30,076][120848] Initialized policy 0 weights for model version 978 |
|
[2025-08-26 15:45:30,077][120848] LearnerWorker_p0 finished initialization! |
|
[2025-08-26 15:45:30,077][120848] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-08-26 15:45:30,145][120862] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:45:30,146][120862] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:45:30,152][120862] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:45:30,206][120862] Conv encoder output size: 512 |
|
[2025-08-26 15:45:30,206][120862] Policy head output size: 512 |
|
[2025-08-26 15:45:30,229][120654] Inference worker 0-0 is ready! |
|
[2025-08-26 15:45:30,230][120654] All inference workers are ready! Signal rollout workers to start! |
|
[2025-08-26 15:45:30,240][120867] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120866] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120861] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120864] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120863] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120865] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,240][120876] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,241][120868] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:30,393][120863] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,445][120866] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,445][120867] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,446][120864] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,446][120865] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,641][120865] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,641][120864] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,681][120863] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,697][120866] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,706][120867] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,718][120868] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:30,809][120865] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:30,858][120868] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:30,864][120866] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:30,893][120863] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:30,910][120861] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:31,003][120865] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,019][120866] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,064][120864] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:31,111][120863] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,111][120861] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:31,231][120868] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:31,272][120864] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,327][120867] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:31,333][120861] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:31,533][120861] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,534][120867] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,557][120868] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:31,618][120876] Decorrelating experience for 0 frames... |
|
[2025-08-26 15:45:31,803][120876] Decorrelating experience for 32 frames... |
|
[2025-08-26 15:45:31,926][120848] Signal inference workers to stop experience collection... |
|
[2025-08-26 15:45:31,928][120862] InferenceWorker_p0-w0: stopping experience collection |
|
[2025-08-26 15:45:32,000][120876] Decorrelating experience for 64 frames... |
|
[2025-08-26 15:45:32,155][120876] Decorrelating experience for 96 frames... |
|
[2025-08-26 15:45:32,696][120848] Signal inference workers to resume experience collection... |
|
[2025-08-26 15:45:32,697][120848] Stopping Batcher_0... |
|
[2025-08-26 15:45:32,697][120848] Loop batcher_evt_loop terminating... |
|
[2025-08-26 15:45:32,704][120654] Component Batcher_0 stopped! |
|
[2025-08-26 15:45:32,710][120862] Weights refcount: 2 0 |
|
[2025-08-26 15:45:32,714][120862] Stopping InferenceWorker_p0-w0... |
|
[2025-08-26 15:45:32,714][120862] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-08-26 15:45:32,714][120654] Component InferenceWorker_p0-w0 stopped! |
|
[2025-08-26 15:45:32,722][120861] Stopping RolloutWorker_w0... |
|
[2025-08-26 15:45:32,723][120861] Loop rollout_proc0_evt_loop terminating... |
|
[2025-08-26 15:45:32,722][120654] Component RolloutWorker_w0 stopped! |
|
[2025-08-26 15:45:32,724][120876] Stopping RolloutWorker_w6... |
|
[2025-08-26 15:45:32,725][120876] Loop rollout_proc6_evt_loop terminating... |
|
[2025-08-26 15:45:32,727][120654] Component RolloutWorker_w6 stopped! |
|
[2025-08-26 15:45:32,727][120868] Stopping RolloutWorker_w7... |
|
[2025-08-26 15:45:32,727][120868] Loop rollout_proc7_evt_loop terminating... |
|
[2025-08-26 15:45:32,727][120654] Component RolloutWorker_w7 stopped! |
|
[2025-08-26 15:45:32,729][120863] Stopping RolloutWorker_w1... |
|
[2025-08-26 15:45:32,729][120863] Loop rollout_proc1_evt_loop terminating... |
|
[2025-08-26 15:45:32,729][120654] Component RolloutWorker_w1 stopped! |
|
[2025-08-26 15:45:32,729][120864] Stopping RolloutWorker_w2... |
|
[2025-08-26 15:45:32,729][120864] Loop rollout_proc2_evt_loop terminating... |
|
[2025-08-26 15:45:32,729][120654] Component RolloutWorker_w2 stopped! |
|
[2025-08-26 15:45:32,738][120865] Stopping RolloutWorker_w3... |
|
[2025-08-26 15:45:32,738][120865] Loop rollout_proc3_evt_loop terminating... |
|
[2025-08-26 15:45:32,738][120866] Stopping RolloutWorker_w4... |
|
[2025-08-26 15:45:32,738][120654] Component RolloutWorker_w3 stopped! |
|
[2025-08-26 15:45:32,738][120866] Loop rollout_proc4_evt_loop terminating... |
|
[2025-08-26 15:45:32,739][120654] Component RolloutWorker_w4 stopped! |
|
[2025-08-26 15:45:32,839][120867] Stopping RolloutWorker_w5... |
|
[2025-08-26 15:45:32,839][120867] Loop rollout_proc5_evt_loop terminating... |
|
[2025-08-26 15:45:32,839][120654] Component RolloutWorker_w5 stopped! |
|
[2025-08-26 15:45:32,861][120848] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-08-26 15:45:32,917][120848] Removing /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth |
|
[2025-08-26 15:45:32,929][120848] Saving /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
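Saving the newest checkpoint and then removing checkpoint_000000880_3604480.pth is a keep-last-N retention policy: filenames encode the training-step counter, and all but the newest N are deleted. A hedged sketch of the selection logic (pure filename parsing; the helper name is made up for illustration):

```python
import re

def checkpoints_to_remove(names, keep_last=2):
    """Given filenames like checkpoint_000000880_3604480.pth, return the
    ones an N-newest retention policy would delete (oldest first).
    Illustrative sketch, not Sample Factory's actual cleanup code."""
    pattern = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")
    parsed = []
    for name in names:
        m = pattern.match(name)
        if m:
            # sort key: the zero-padded training-step counter in the name
            parsed.append((int(m.group(1)), name))
    parsed.sort()
    old = parsed[:-keep_last] if keep_last else parsed
    return [name for _, name in old]
```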
[2025-08-26 15:45:32,996][120848] Stopping LearnerWorker_p0... |
|
[2025-08-26 15:45:32,997][120848] Loop learner_proc0_evt_loop terminating... |
|
[2025-08-26 15:45:32,996][120654] Component LearnerWorker_p0 stopped! |
|
[2025-08-26 15:45:32,998][120654] Waiting for process learner_proc0 to stop... |
|
[2025-08-26 15:45:33,607][120654] Waiting for process inference_proc0-0 to join... |
|
[2025-08-26 15:45:33,608][120654] Waiting for process rollout_proc0 to join... |
|
[2025-08-26 15:45:33,609][120654] Waiting for process rollout_proc1 to join... |
|
[2025-08-26 15:45:33,609][120654] Waiting for process rollout_proc2 to join... |
|
[2025-08-26 15:45:33,610][120654] Waiting for process rollout_proc3 to join... |
|
[2025-08-26 15:45:33,610][120654] Waiting for process rollout_proc4 to join... |
|
[2025-08-26 15:45:33,611][120654] Waiting for process rollout_proc5 to join... |
|
[2025-08-26 15:45:33,612][120654] Waiting for process rollout_proc6 to join... |
|
[2025-08-26 15:45:33,612][120654] Waiting for process rollout_proc7 to join... |
|
[2025-08-26 15:45:33,613][120654] Batcher 0 profile tree view: |
|
batching: 0.0169, releasing_batches: 0.0003 |
|
[2025-08-26 15:45:33,613][120654] InferenceWorker_p0-w0 profile tree view: |
|
update_model: 0.0036 |
|
wait_policy: 0.0000 |
|
wait_policy_total: 0.8228 |
|
one_step: 0.0139 |
|
handle_policy_step: 0.8544 |
|
deserialize: 0.0249, stack: 0.0021, obs_to_device_normalize: 0.1393, forward: 0.5971, send_messages: 0.0208 |
|
prepare_outputs: 0.0467 |
|
to_cpu: 0.0282 |
|
[2025-08-26 15:45:33,614][120654] Learner 0 profile tree view: |
|
misc: 0.0000, prepare_batch: 0.5212 |
|
train: 0.5603 |
|
epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0002, kl_divergence: 0.0058, after_optimizer: 0.0317 |
|
calculate_losses: 0.2745 |
|
losses_init: 0.0000, forward_head: 0.0395, bptt_initial: 0.1919, tail: 0.0256, advantages_returns: 0.0005, losses: 0.0154 |
|
bptt: 0.0013 |
|
bptt_forward_core: 0.0013 |
|
update: 0.2475 |
|
clip: 0.0241 |
|
[2025-08-26 15:45:33,614][120654] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0129, env_step: 0.1365, overhead: 0.0116, complete_rollouts: 0.0005 |
|
save_policy_outputs: 0.0114 |
|
split_output_tensors: 0.0057 |
|
[2025-08-26 15:45:33,615][120654] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0108, env_step: 0.1146, overhead: 0.0092, complete_rollouts: 0.0003 |
|
save_policy_outputs: 0.0090 |
|
split_output_tensors: 0.0044 |
|
[2025-08-26 15:45:33,615][120654] Loop Runner_EvtLoop terminating... |
|
[2025-08-26 15:45:33,616][120654] Runner profile tree view: |
|
main_loop: 6.6058 |
|
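The profile tree views above aggregate per-section wall time into a nested hierarchy (e.g. `train` containing `calculate_losses` containing `bptt`). A minimal hierarchical timer that produces a similar view (an illustrative sketch, not Sample Factory's actual Timing class):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class TreeTimer:
    """Accumulate wall time per nested section and render a profile tree."""
    def __init__(self):
        self.totals = defaultdict(float)
        self._stack = []

    @contextmanager
    def timeit(self, name):
        path = "/".join(self._stack + [name])
        self._stack.append(name)
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[path] += time.perf_counter() - start
            self._stack.pop()

    def tree_view(self):
        lines = []
        for path in sorted(self.totals):
            depth = path.count("/")  # nesting level = number of separators
            lines.append(f"{'  ' * depth}{path.rsplit('/', 1)[-1]}: {self.totals[path]:.4f}")
        return "\n".join(lines)
```

Usage mirrors the log: wrap `train`, and inside it `calculate_losses`, then print `tree_view()` at shutdown.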
[2025-08-26 15:45:33,616][120654] Collected {0: 4014080}, FPS: 1240.1 |
|
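The summary line reports frames collected per policy and overall throughput. A sketch of how such a line could be produced, under the assumption that FPS is frames collected this run divided by main-loop seconds (the exact accounting in the real runner may differ):

```python
def runner_summary(frames_by_policy, main_loop_s):
    """Format a 'Collected {...}, FPS: x' style summary line.
    Assumes FPS = total frames / main-loop wall time (an assumption)."""
    total = sum(frames_by_policy.values())
    return f"Collected {frames_by_policy}, FPS: {total / main_loop_s:.1f}"
```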
[2025-08-26 15:45:33,696][120654] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:45:33,697][120654] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:45:33,698][120654] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:45:33,698][120654] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:45:33,699][120654] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:45:33,699][120654] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:45:33,700][120654] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:45:33,700][120654] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:45:33,701][120654] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-08-26 15:45:33,701][120654] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-08-26 15:45:33,701][120654] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:45:33,702][120654] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:45:33,702][120654] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:45:33,703][120654] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:45:33,703][120654] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
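The block above loads the saved config.json and layers command-line values on top, logging whether each key overrides an existing entry or adds a brand-new one. A small sketch of that merge behavior (the function name is illustrative, not Sample Factory's API):

```python
def merge_cli_overrides(saved_cfg, cli_args):
    """Merge CLI args into a saved config dict, reporting for each key
    whether it overrides an existing value or adds a new argument."""
    cfg = dict(saved_cfg)
    notes = []
    for key, value in cli_args.items():
        if key in cfg:
            notes.append(f"Overriding arg '{key}' with value {value!r} passed from command line")
        else:
            notes.append(f"Adding new argument '{key}'={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg, notes
```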
[2025-08-26 15:45:33,709][120654] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-08-26 15:45:33,710][120654] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:45:33,711][120654] RunningMeanStd input shape: (1,) |
|
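The two RunningMeanStd lines set up normalizers for observations of shape (3, 72, 128) and returns of shape (1,). A scalar sketch of the usual running mean/variance update in its parallel-batch form (simplified relative to the real array-shaped implementation):

```python
class RunningMeanStd:
    """Track a running mean and variance over batches of scalars using the
    parallel (Chan et al.) combination formula. Scalar sketch only."""
    def __init__(self, eps=1e-4):
        self.mean = 0.0
        self.var = 1.0
        self.count = eps  # tiny prior count avoids division by zero

    def update(self, batch):
        n = len(batch)
        batch_mean = sum(batch) / n
        batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
        delta = batch_mean - self.mean
        tot = self.count + n
        # combine the two (mean, var, count) summaries
        self.mean += delta * n / tot
        m2 = self.var * self.count + batch_var * n + delta ** 2 * self.count * n / tot
        self.var = m2 / tot
        self.count = tot

    def normalize(self, x):
        return (x - self.mean) / (self.var ** 0.5 + 1e-8)
```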
[2025-08-26 15:45:33,719][120654] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:45:33,777][120654] Conv encoder output size: 512 |
|
[2025-08-26 15:45:33,778][120654] Policy head output size: 512 |
|
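The encoder maps (3, 72, 128) observations to a 512-dim feature vector. A helper for sizing such conv stacks with the standard output-size formula floor((n + 2p - k) / s) + 1; the example kernel/stride layers in the test are assumptions for illustration, not the actual architecture:

```python
def conv_out(n, kernel, stride, padding=0):
    """Output length of one conv dimension: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

def conv_stack_out(hw, layers):
    """Apply conv_out to both spatial dims for each (kernel, stride) layer."""
    h, w = hw
    for kernel, stride in layers:
        h = conv_out(h, kernel, stride)
        w = conv_out(w, kernel, stride)
    return h, w
```

The flattened conv output is then projected by a linear layer to the reported 512-dim policy-head input.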
[2025-08-26 15:45:33,867][120654] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-08-26 15:45:34,393][120654] Num frames 100... |
|
[2025-08-26 15:45:34,459][120654] Num frames 200... |
|
[2025-08-26 15:45:34,524][120654] Num frames 300... |
|
[2025-08-26 15:45:34,588][120654] Num frames 400... |
|
[2025-08-26 15:45:34,654][120654] Num frames 500... |
|
[2025-08-26 15:45:34,718][120654] Num frames 600... |
|
[2025-08-26 15:45:34,781][120654] Num frames 700... |
|
[2025-08-26 15:45:34,852][120654] Num frames 800... |
|
[2025-08-26 15:45:34,919][120654] Num frames 900... |
|
[2025-08-26 15:45:34,974][120654] Avg episode rewards: #0: 21.010, true rewards: #0: 9.010 |
|
[2025-08-26 15:45:34,974][120654] Avg episode reward: 21.010, avg true_objective: 9.010 |
|
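Each "Avg episode rewards" line is consistent with a running mean over the episodes completed so far (here 21.010 after the first episode, updated again after each subsequent one). A minimal tracker sketch (names are illustrative):

```python
class EpisodeRewardTracker:
    """Running average of per-episode rewards, in the spirit of the
    'Avg episode rewards' lines in the evaluation log."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add_episode(self, reward):
        self.total += reward
        self.count += 1

    @property
    def avg(self):
        return self.total / self.count
```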
[2025-08-26 15:45:35,038][120654] Num frames 1000... |
|
[2025-08-26 15:45:35,104][120654] Num frames 1100... |
|
[2025-08-26 15:45:35,165][120654] Num frames 1200... |
|
[2025-08-26 15:45:35,232][120654] Num frames 1300... |
|
[2025-08-26 15:45:35,302][120654] Num frames 1400... |
|
[2025-08-26 15:45:35,370][120654] Num frames 1500... |
|
[2025-08-26 15:45:35,435][120654] Num frames 1600... |
|
[2025-08-26 15:45:35,499][120654] Num frames 1700... |
|
[2025-08-26 15:45:35,566][120654] Num frames 1800... |
|
[2025-08-26 15:45:35,634][120654] Num frames 1900... |
|
[2025-08-26 15:45:35,699][120654] Num frames 2000... |
|
[2025-08-26 15:45:35,764][120654] Num frames 2100... |
|
[2025-08-26 15:45:35,849][120654] Avg episode rewards: #0: 26.245, true rewards: #0: 10.745 |
|
[2025-08-26 15:45:35,849][120654] Avg episode reward: 26.245, avg true_objective: 10.745 |
|
[2025-08-26 15:45:35,884][120654] Num frames 2200... |
|
[2025-08-26 15:45:35,952][120654] Num frames 2300... |
|
[2025-08-26 15:45:36,018][120654] Num frames 2400... |
|
[2025-08-26 15:45:36,083][120654] Num frames 2500... |
|
[2025-08-26 15:45:36,147][120654] Num frames 2600... |
|
[2025-08-26 15:45:36,220][120654] Avg episode rewards: #0: 20.097, true rewards: #0: 8.763 |
|
[2025-08-26 15:45:36,221][120654] Avg episode reward: 20.097, avg true_objective: 8.763 |
|
[2025-08-26 15:45:36,265][120654] Num frames 2700... |
|
[2025-08-26 15:45:36,331][120654] Num frames 2800... |
|
[2025-08-26 15:45:36,392][120654] Num frames 2900... |
|
[2025-08-26 15:45:36,458][120654] Num frames 3000... |
|
[2025-08-26 15:45:36,526][120654] Num frames 3100... |
|
[2025-08-26 15:45:36,593][120654] Num frames 3200... |
|
[2025-08-26 15:45:36,658][120654] Num frames 3300... |
|
[2025-08-26 15:45:36,727][120654] Num frames 3400... |
|
[2025-08-26 15:45:36,791][120654] Num frames 3500... |
|
[2025-08-26 15:45:36,859][120654] Num frames 3600... |
|
[2025-08-26 15:45:36,922][120654] Num frames 3700... |
|
[2025-08-26 15:45:36,990][120654] Num frames 3800... |
|
[2025-08-26 15:45:37,055][120654] Num frames 3900... |
|
[2025-08-26 15:45:37,157][120654] Avg episode rewards: #0: 23.933, true rewards: #0: 9.932 |
|
[2025-08-26 15:45:37,157][120654] Avg episode reward: 23.933, avg true_objective: 9.932 |
|
[2025-08-26 15:45:37,175][120654] Num frames 4000... |
|
[2025-08-26 15:45:37,239][120654] Num frames 4100... |
|
[2025-08-26 15:45:37,301][120654] Num frames 4200... |
|
[2025-08-26 15:45:37,361][120654] Num frames 4300... |
|
[2025-08-26 15:45:37,427][120654] Num frames 4400... |
|
[2025-08-26 15:45:37,491][120654] Num frames 4500... |
|
[2025-08-26 15:45:37,554][120654] Num frames 4600... |
|
[2025-08-26 15:45:37,620][120654] Num frames 4700... |
|
[2025-08-26 15:45:37,689][120654] Num frames 4800... |
|
[2025-08-26 15:45:37,758][120654] Num frames 4900... |
|
[2025-08-26 15:45:37,823][120654] Num frames 5000... |
|
[2025-08-26 15:45:37,887][120654] Num frames 5100... |
|
[2025-08-26 15:45:37,954][120654] Num frames 5200... |
|
[2025-08-26 15:45:38,041][120654] Avg episode rewards: #0: 25.706, true rewards: #0: 10.506 |
|
[2025-08-26 15:45:38,042][120654] Avg episode reward: 25.706, avg true_objective: 10.506 |
|
[2025-08-26 15:45:38,076][120654] Num frames 5300... |
|
[2025-08-26 15:45:38,140][120654] Num frames 5400... |
|
[2025-08-26 15:45:38,202][120654] Num frames 5500... |
|
[2025-08-26 15:45:38,264][120654] Num frames 5600... |
|
[2025-08-26 15:45:38,329][120654] Num frames 5700... |
|
[2025-08-26 15:45:38,393][120654] Num frames 5800... |
|
[2025-08-26 15:45:38,456][120654] Num frames 5900... |
|
[2025-08-26 15:45:38,517][120654] Num frames 6000... |
|
[2025-08-26 15:45:38,578][120654] Num frames 6100... |
|
[2025-08-26 15:45:38,641][120654] Num frames 6200... |
|
[2025-08-26 15:45:38,726][120654] Avg episode rewards: #0: 25.242, true rewards: #0: 10.408 |
|
[2025-08-26 15:45:38,726][120654] Avg episode reward: 25.242, avg true_objective: 10.408 |
|
[2025-08-26 15:45:38,762][120654] Num frames 6300... |
|
[2025-08-26 15:45:38,828][120654] Num frames 6400... |
|
[2025-08-26 15:45:38,892][120654] Num frames 6500... |
|
[2025-08-26 15:45:38,956][120654] Num frames 6600... |
|
[2025-08-26 15:45:39,018][120654] Num frames 6700... |
|
[2025-08-26 15:45:39,083][120654] Num frames 6800... |
|
[2025-08-26 15:45:39,150][120654] Num frames 6900... |
|
[2025-08-26 15:45:39,216][120654] Num frames 7000... |
|
[2025-08-26 15:45:39,282][120654] Num frames 7100... |
|
[2025-08-26 15:45:39,341][120654] Avg episode rewards: #0: 24.442, true rewards: #0: 10.156 |
|
[2025-08-26 15:45:39,342][120654] Avg episode reward: 24.442, avg true_objective: 10.156 |
|
[2025-08-26 15:45:39,400][120654] Num frames 7200... |
|
[2025-08-26 15:45:39,464][120654] Num frames 7300... |
|
[2025-08-26 15:45:39,526][120654] Num frames 7400... |
|
[2025-08-26 15:45:39,591][120654] Num frames 7500... |
|
[2025-08-26 15:45:39,654][120654] Num frames 7600... |
|
[2025-08-26 15:45:39,716][120654] Num frames 7700... |
|
[2025-08-26 15:45:39,780][120654] Num frames 7800... |
|
[2025-08-26 15:45:39,844][120654] Num frames 7900... |
|
[2025-08-26 15:45:39,908][120654] Num frames 8000... |
|
[2025-08-26 15:45:39,977][120654] Num frames 8100... |
|
[2025-08-26 15:45:40,042][120654] Num frames 8200... |
|
[2025-08-26 15:45:40,107][120654] Num frames 8300... |
|
[2025-08-26 15:45:40,171][120654] Num frames 8400... |
|
[2025-08-26 15:45:40,237][120654] Num frames 8500... |
|
[2025-08-26 15:45:40,340][120654] Avg episode rewards: #0: 25.849, true rewards: #0: 10.724 |
|
[2025-08-26 15:45:40,340][120654] Avg episode reward: 25.849, avg true_objective: 10.724 |
|
[2025-08-26 15:45:40,355][120654] Num frames 8600... |
|
[2025-08-26 15:45:40,415][120654] Num frames 8700... |
|
[2025-08-26 15:45:40,476][120654] Num frames 8800... |
|
[2025-08-26 15:45:40,541][120654] Num frames 8900... |
|
[2025-08-26 15:45:40,607][120654] Num frames 9000... |
|
[2025-08-26 15:45:40,672][120654] Num frames 9100... |
|
[2025-08-26 15:45:40,738][120654] Num frames 9200... |
|
[2025-08-26 15:45:40,803][120654] Num frames 9300... |
|
[2025-08-26 15:45:40,869][120654] Num frames 9400... |
|
[2025-08-26 15:45:40,936][120654] Num frames 9500... |
|
[2025-08-26 15:45:40,999][120654] Num frames 9600... |
|
[2025-08-26 15:45:41,067][120654] Num frames 9700... |
|
[2025-08-26 15:45:41,134][120654] Num frames 9800... |
|
[2025-08-26 15:45:41,200][120654] Num frames 9900... |
|
[2025-08-26 15:45:41,268][120654] Num frames 10000... |
|
[2025-08-26 15:45:41,333][120654] Num frames 10100... |
|
[2025-08-26 15:45:41,398][120654] Num frames 10200... |
|
[2025-08-26 15:45:41,461][120654] Num frames 10300... |
|
[2025-08-26 15:45:41,526][120654] Num frames 10400... |
|
[2025-08-26 15:45:41,592][120654] Avg episode rewards: #0: 28.021, true rewards: #0: 11.577 |
|
[2025-08-26 15:45:41,593][120654] Avg episode reward: 28.021, avg true_objective: 11.577 |
|
[2025-08-26 15:45:41,645][120654] Num frames 10500... |
|
[2025-08-26 15:45:41,710][120654] Num frames 10600... |
|
[2025-08-26 15:45:41,772][120654] Num frames 10700... |
|
[2025-08-26 15:45:41,836][120654] Num frames 10800... |
|
[2025-08-26 15:45:41,901][120654] Num frames 10900... |
|
[2025-08-26 15:45:41,963][120654] Num frames 11000... |
|
[2025-08-26 15:45:42,024][120654] Num frames 11100... |
|
[2025-08-26 15:45:42,088][120654] Num frames 11200... |
|
[2025-08-26 15:45:42,154][120654] Num frames 11300... |
|
[2025-08-26 15:45:42,236][120654] Avg episode rewards: #0: 27.244, true rewards: #0: 11.344 |
|
[2025-08-26 15:45:42,237][120654] Avg episode reward: 27.244, avg true_objective: 11.344 |
|
[2025-08-26 15:45:58,295][120654] Replay video saved to /home/ubuntu/train_dir/default_experiment/replay.mp4! |
|
[2025-08-26 15:47:13,203][120654] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:47:13,204][120654] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:47:13,204][120654] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:47:13,205][120654] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:47:13,206][120654] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:47:13,206][120654] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:47:13,207][120654] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-26 15:47:13,207][120654] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:47:13,208][120654] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-26 15:47:13,208][120654] Adding new argument 'hf_repository'='igzi/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-26 15:47:13,209][120654] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:47:13,209][120654] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:47:13,209][120654] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:47:13,210][120654] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:47:13,210][120654] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:47:13,217][120654] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:47:13,218][120654] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:47:13,225][120654] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:47:13,247][120654] Conv encoder output size: 512 |
|
[2025-08-26 15:47:13,248][120654] Policy head output size: 512 |
|
[2025-08-26 15:47:13,260][120654] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-08-26 15:47:13,517][120654] Num frames 100... |
|
[2025-08-26 15:47:13,582][120654] Num frames 200... |
|
[2025-08-26 15:47:13,642][120654] Num frames 300... |
|
[2025-08-26 15:47:13,707][120654] Num frames 400... |
|
[2025-08-26 15:47:13,776][120654] Num frames 500... |
|
[2025-08-26 15:47:13,841][120654] Num frames 600... |
|
[2025-08-26 15:47:13,909][120654] Num frames 700... |
|
[2025-08-26 15:47:13,972][120654] Num frames 800... |
|
[2025-08-26 15:47:14,035][120654] Num frames 900... |
|
[2025-08-26 15:47:14,104][120654] Avg episode rewards: #0: 19.250, true rewards: #0: 9.250 |
|
[2025-08-26 15:47:14,105][120654] Avg episode reward: 19.250, avg true_objective: 9.250 |
|
[2025-08-26 15:47:14,155][120654] Num frames 1000... |
|
[2025-08-26 15:47:14,218][120654] Num frames 1100... |
|
[2025-08-26 15:47:14,284][120654] Num frames 1200... |
|
[2025-08-26 15:47:14,344][120654] Num frames 1300... |
|
[2025-08-26 15:47:14,408][120654] Num frames 1400... |
|
[2025-08-26 15:47:14,470][120654] Num frames 1500... |
|
[2025-08-26 15:47:14,534][120654] Num frames 1600... |
|
[2025-08-26 15:47:14,598][120654] Num frames 1700... |
|
[2025-08-26 15:47:14,660][120654] Num frames 1800... |
|
[2025-08-26 15:47:14,725][120654] Num frames 1900... |
|
[2025-08-26 15:47:14,790][120654] Num frames 2000... |
|
[2025-08-26 15:47:14,853][120654] Num frames 2100... |
|
[2025-08-26 15:47:14,918][120654] Num frames 2200... |
|
[2025-08-26 15:47:14,985][120654] Num frames 2300... |
|
[2025-08-26 15:47:15,047][120654] Num frames 2400... |
|
[2025-08-26 15:47:15,121][120654] Avg episode rewards: #0: 28.175, true rewards: #0: 12.175 |
|
[2025-08-26 15:47:15,122][120654] Avg episode reward: 28.175, avg true_objective: 12.175 |
|
[2025-08-26 15:47:15,164][120654] Num frames 2500... |
|
[2025-08-26 15:47:15,229][120654] Num frames 2600... |
|
[2025-08-26 15:47:15,293][120654] Num frames 2700... |
|
[2025-08-26 15:47:15,362][120654] Num frames 2800... |
|
[2025-08-26 15:47:15,427][120654] Num frames 2900... |
|
[2025-08-26 15:47:15,490][120654] Num frames 3000... |
|
[2025-08-26 15:47:15,556][120654] Num frames 3100... |
|
[2025-08-26 15:47:15,623][120654] Num frames 3200... |
|
[2025-08-26 15:47:15,687][120654] Num frames 3300... |
|
[2025-08-26 15:47:15,755][120654] Num frames 3400... |
|
[2025-08-26 15:47:15,826][120654] Num frames 3500... |
|
[2025-08-26 15:47:15,891][120654] Num frames 3600... |
|
[2025-08-26 15:47:15,955][120654] Num frames 3700... |
|
[2025-08-26 15:47:16,021][120654] Avg episode rewards: #0: 27.737, true rewards: #0: 12.403 |
|
[2025-08-26 15:47:16,022][120654] Avg episode reward: 27.737, avg true_objective: 12.403 |
|
[2025-08-26 15:47:16,076][120654] Num frames 3800... |
|
[2025-08-26 15:47:16,144][120654] Num frames 3900... |
|
[2025-08-26 15:47:16,208][120654] Num frames 4000... |
|
[2025-08-26 15:47:16,277][120654] Num frames 4100... |
|
[2025-08-26 15:47:16,345][120654] Num frames 4200... |
|
[2025-08-26 15:47:16,407][120654] Num frames 4300... |
|
[2025-08-26 15:47:16,470][120654] Num frames 4400... |
|
[2025-08-26 15:47:16,537][120654] Num frames 4500... |
|
[2025-08-26 15:47:16,646][120654] Avg episode rewards: #0: 24.963, true rewards: #0: 11.462 |
|
[2025-08-26 15:47:16,647][120654] Avg episode reward: 24.963, avg true_objective: 11.462 |
|
[2025-08-26 15:47:16,658][120654] Num frames 4600... |
|
[2025-08-26 15:47:16,723][120654] Num frames 4700... |
|
[2025-08-26 15:47:16,793][120654] Num frames 4800... |
|
[2025-08-26 15:47:16,865][120654] Num frames 4900... |
|
[2025-08-26 15:47:16,932][120654] Num frames 5000... |
|
[2025-08-26 15:47:17,000][120654] Num frames 5100... |
|
[2025-08-26 15:47:17,065][120654] Num frames 5200... |
|
[2025-08-26 15:47:17,129][120654] Num frames 5300... |
|
[2025-08-26 15:47:17,194][120654] Num frames 5400... |
|
[2025-08-26 15:47:17,257][120654] Num frames 5500... |
|
[2025-08-26 15:47:17,317][120654] Num frames 5600... |
|
[2025-08-26 15:47:17,384][120654] Num frames 5700... |
|
[2025-08-26 15:47:17,450][120654] Num frames 5800... |
|
[2025-08-26 15:47:17,515][120654] Num frames 5900... |
|
[2025-08-26 15:47:17,582][120654] Num frames 6000... |
|
[2025-08-26 15:47:17,644][120654] Num frames 6100... |
|
[2025-08-26 15:47:17,705][120654] Num frames 6200... |
|
[2025-08-26 15:47:17,772][120654] Num frames 6300... |
|
[2025-08-26 15:47:17,841][120654] Num frames 6400... |
|
[2025-08-26 15:47:17,910][120654] Num frames 6500... |
|
[2025-08-26 15:47:17,965][120654] Avg episode rewards: #0: 29.004, true rewards: #0: 13.004 |
|
[2025-08-26 15:47:17,965][120654] Avg episode reward: 29.004, avg true_objective: 13.004 |
|
[2025-08-26 15:47:18,031][120654] Num frames 6600... |
|
[2025-08-26 15:47:18,094][120654] Num frames 6700... |
|
[2025-08-26 15:47:18,161][120654] Num frames 6800... |
|
[2025-08-26 15:47:18,224][120654] Num frames 6900... |
|
[2025-08-26 15:47:18,289][120654] Num frames 7000... |
|
[2025-08-26 15:47:18,353][120654] Num frames 7100... |
|
[2025-08-26 15:47:18,461][120654] Avg episode rewards: #0: 25.978, true rewards: #0: 11.978 |
|
[2025-08-26 15:47:18,461][120654] Avg episode reward: 25.978, avg true_objective: 11.978 |
|
[2025-08-26 15:47:18,470][120654] Num frames 7200... |
|
[2025-08-26 15:47:18,537][120654] Num frames 7300... |
|
[2025-08-26 15:47:18,600][120654] Num frames 7400... |
|
[2025-08-26 15:47:18,666][120654] Num frames 7500... |
|
[2025-08-26 15:47:18,732][120654] Num frames 7600... |
|
[2025-08-26 15:47:18,797][120654] Num frames 7700... |
|
[2025-08-26 15:47:18,864][120654] Num frames 7800... |
|
[2025-08-26 15:47:18,932][120654] Num frames 7900... |
|
[2025-08-26 15:47:19,011][120654] Num frames 8000... |
|
[2025-08-26 15:47:19,079][120654] Num frames 8100... |
|
[2025-08-26 15:47:19,142][120654] Num frames 8200... |
|
[2025-08-26 15:47:19,206][120654] Num frames 8300... |
|
[2025-08-26 15:47:19,270][120654] Num frames 8400... |
|
[2025-08-26 15:47:19,335][120654] Num frames 8500... |
|
[2025-08-26 15:47:19,402][120654] Num frames 8600... |
|
[2025-08-26 15:47:19,499][120654] Avg episode rewards: #0: 27.669, true rewards: #0: 12.383 |
|
[2025-08-26 15:47:19,500][120654] Avg episode reward: 27.669, avg true_objective: 12.383 |
|
[2025-08-26 15:47:19,521][120654] Num frames 8700... |
|
[2025-08-26 15:47:19,585][120654] Num frames 8800... |
|
[2025-08-26 15:47:19,648][120654] Num frames 8900... |
|
[2025-08-26 15:47:19,714][120654] Num frames 9000... |
|
[2025-08-26 15:47:19,777][120654] Num frames 9100... |
|
[2025-08-26 15:47:19,844][120654] Num frames 9200... |
|
[2025-08-26 15:47:19,909][120654] Num frames 9300... |
|
[2025-08-26 15:47:19,994][120654] Avg episode rewards: #0: 26.059, true rewards: #0: 11.684 |
|
[2025-08-26 15:47:19,995][120654] Avg episode reward: 26.059, avg true_objective: 11.684 |
|
[2025-08-26 15:47:20,030][120654] Num frames 9400... |
|
[2025-08-26 15:47:20,095][120654] Num frames 9500... |
|
[2025-08-26 15:47:20,161][120654] Num frames 9600... |
|
[2025-08-26 15:47:20,224][120654] Num frames 9700... |
|
[2025-08-26 15:47:20,290][120654] Num frames 9800... |
|
[2025-08-26 15:47:20,360][120654] Num frames 9900... |
|
[2025-08-26 15:47:20,424][120654] Num frames 10000... |
|
[2025-08-26 15:47:20,490][120654] Num frames 10100... |
|
[2025-08-26 15:47:20,555][120654] Num frames 10200... |
|
[2025-08-26 15:47:20,620][120654] Num frames 10300... |
|
[2025-08-26 15:47:20,687][120654] Num frames 10400... |
|
[2025-08-26 15:47:20,753][120654] Num frames 10500... |
|
[2025-08-26 15:47:20,820][120654] Num frames 10600... |
|
[2025-08-26 15:47:20,886][120654] Num frames 10700... |
|
[2025-08-26 15:47:20,954][120654] Num frames 10800... |
|
[2025-08-26 15:47:21,020][120654] Num frames 10900... |
|
[2025-08-26 15:47:21,088][120654] Num frames 11000... |
|
[2025-08-26 15:47:21,160][120654] Num frames 11100... |
|
[2025-08-26 15:47:21,228][120654] Num frames 11200... |
|
[2025-08-26 15:47:21,297][120654] Num frames 11300... |
|
[2025-08-26 15:47:21,365][120654] Num frames 11400... |
|
[2025-08-26 15:47:21,436][120654] Avg episode rewards: #0: 29.697, true rewards: #0: 12.697 |
|
[2025-08-26 15:47:21,437][120654] Avg episode reward: 29.697, avg true_objective: 12.697 |
|
[2025-08-26 15:47:21,484][120654] Num frames 11500... |
|
[2025-08-26 15:47:21,546][120654] Num frames 11600... |
|
[2025-08-26 15:47:21,610][120654] Num frames 11700... |
|
[2025-08-26 15:47:21,675][120654] Num frames 11800... |
|
[2025-08-26 15:47:21,736][120654] Num frames 11900... |
|
[2025-08-26 15:47:21,798][120654] Num frames 12000... |
|
[2025-08-26 15:47:21,860][120654] Num frames 12100... |
|
[2025-08-26 15:47:21,929][120654] Num frames 12200... |
|
[2025-08-26 15:47:22,021][120654] Avg episode rewards: #0: 28.559, true rewards: #0: 12.259 |
|
[2025-08-26 15:47:22,022][120654] Avg episode reward: 28.559, avg true_objective: 12.259 |
|
[2025-08-26 15:47:39,373][120654] Replay video saved to /home/ubuntu/train_dir/default_experiment/replay.mp4! |
|
[2025-08-26 15:47:42,691][120654] The model has been pushed to https://huggingface.co/igzi/rl_course_vizdoom_health_gathering_supreme |
|
[2025-08-26 15:47:42,703][120654] Loading existing experiment configuration from /home/ubuntu/train_dir/default_experiment/config.json |
|
[2025-08-26 15:47:42,703][120654] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-08-26 15:47:42,703][120654] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-08-26 15:47:42,704][120654] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-08-26 15:47:42,704][120654] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-08-26 15:47:42,704][120654] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-08-26 15:47:42,705][120654] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-08-26 15:47:42,705][120654] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-08-26 15:47:42,705][120654] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-08-26 15:47:42,706][120654] Adding new argument 'hf_repository'='igzi/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-08-26 15:47:42,706][120654] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-08-26 15:47:42,707][120654] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-08-26 15:47:42,707][120654] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-08-26 15:47:42,708][120654] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-08-26 15:47:42,709][120654] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-08-26 15:47:42,711][120654] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-08-26 15:47:42,712][120654] RunningMeanStd input shape: (1,) |
|
[2025-08-26 15:47:42,718][120654] ConvEncoder: input_channels=3 |
|
[2025-08-26 15:47:42,747][120654] Conv encoder output size: 512 |
|
[2025-08-26 15:47:42,748][120654] Policy head output size: 512 |
|
[2025-08-26 15:47:42,756][120654] Loading state from checkpoint /home/ubuntu/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2025-08-26 15:47:43,000][120654] Num frames 100... |
|
[2025-08-26 15:47:43,061][120654] Num frames 200... |
|
[2025-08-26 15:47:43,124][120654] Num frames 300... |
|
[2025-08-26 15:47:43,189][120654] Num frames 400... |
|
[2025-08-26 15:47:43,291][120654] Avg episode rewards: #0: 7.800, true rewards: #0: 4.800 |
|
[2025-08-26 15:47:43,292][120654] Avg episode reward: 7.800, avg true_objective: 4.800 |
|
[2025-08-26 15:47:43,306][120654] Num frames 500... |
|
[2025-08-26 15:47:43,371][120654] Num frames 600... |
|
[2025-08-26 15:47:43,438][120654] Num frames 700... |
|
[2025-08-26 15:47:43,505][120654] Num frames 800... |
|
[2025-08-26 15:47:43,567][120654] Num frames 900... |
|
[2025-08-26 15:47:43,632][120654] Num frames 1000... |
|
[2025-08-26 15:47:43,697][120654] Num frames 1100... |
|
[2025-08-26 15:47:43,760][120654] Num frames 1200... |
|
[2025-08-26 15:47:43,821][120654] Num frames 1300... |
|
[2025-08-26 15:47:43,882][120654] Num frames 1400... |
|
[2025-08-26 15:47:43,948][120654] Num frames 1500... |
|
[2025-08-26 15:47:44,014][120654] Num frames 1600... |
|
[2025-08-26 15:47:44,078][120654] Num frames 1700... |
|
[2025-08-26 15:47:44,141][120654] Num frames 1800... |
|
[2025-08-26 15:47:44,203][120654] Num frames 1900... |
|
[2025-08-26 15:47:44,268][120654] Num frames 2000... |
|
[2025-08-26 15:47:44,338][120654] Avg episode rewards: #0: 23.140, true rewards: #0: 10.140 |
|
[2025-08-26 15:47:44,339][120654] Avg episode reward: 23.140, avg true_objective: 10.140
[2025-08-26 15:47:44,382][120654] Num frames 2100...
[2025-08-26 15:47:44,444][120654] Num frames 2200...
[2025-08-26 15:47:44,505][120654] Num frames 2300...
[2025-08-26 15:47:44,565][120654] Num frames 2400...
[2025-08-26 15:47:44,625][120654] Num frames 2500...
[2025-08-26 15:47:44,690][120654] Num frames 2600...
[2025-08-26 15:47:44,754][120654] Num frames 2700...
[2025-08-26 15:47:44,815][120654] Num frames 2800...
[2025-08-26 15:47:44,877][120654] Num frames 2900...
[2025-08-26 15:47:44,940][120654] Num frames 3000...
[2025-08-26 15:47:44,999][120654] Num frames 3100...
[2025-08-26 15:47:45,064][120654] Num frames 3200...
[2025-08-26 15:47:45,127][120654] Num frames 3300...
[2025-08-26 15:47:45,191][120654] Num frames 3400...
[2025-08-26 15:47:45,255][120654] Num frames 3500...
[2025-08-26 15:47:45,306][120654] Avg episode rewards: #0: 26.333, true rewards: #0: 11.667
[2025-08-26 15:47:45,307][120654] Avg episode reward: 26.333, avg true_objective: 11.667
[2025-08-26 15:47:45,372][120654] Num frames 3600...
[2025-08-26 15:47:45,440][120654] Num frames 3700...
[2025-08-26 15:47:45,512][120654] Num frames 3800...
[2025-08-26 15:47:45,575][120654] Num frames 3900...
[2025-08-26 15:47:45,638][120654] Num frames 4000...
[2025-08-26 15:47:45,699][120654] Num frames 4100...
[2025-08-26 15:47:45,758][120654] Num frames 4200...
[2025-08-26 15:47:45,819][120654] Num frames 4300...
[2025-08-26 15:47:45,883][120654] Num frames 4400...
[2025-08-26 15:47:45,951][120654] Num frames 4500...
[2025-08-26 15:47:46,017][120654] Num frames 4600...
[2025-08-26 15:47:46,080][120654] Num frames 4700...
[2025-08-26 15:47:46,145][120654] Num frames 4800...
[2025-08-26 15:47:46,209][120654] Num frames 4900...
[2025-08-26 15:47:46,287][120654] Avg episode rewards: #0: 27.850, true rewards: #0: 12.350
[2025-08-26 15:47:46,288][120654] Avg episode reward: 27.850, avg true_objective: 12.350
[2025-08-26 15:47:46,328][120654] Num frames 5000...
[2025-08-26 15:47:46,391][120654] Num frames 5100...
[2025-08-26 15:47:46,455][120654] Num frames 5200...
[2025-08-26 15:47:46,519][120654] Num frames 5300...
[2025-08-26 15:47:46,580][120654] Num frames 5400...
[2025-08-26 15:47:46,646][120654] Num frames 5500...
[2025-08-26 15:47:46,710][120654] Num frames 5600...
[2025-08-26 15:47:46,791][120654] Avg episode rewards: #0: 25.088, true rewards: #0: 11.288
[2025-08-26 15:47:46,792][120654] Avg episode reward: 25.088, avg true_objective: 11.288
[2025-08-26 15:47:46,828][120654] Num frames 5700...
[2025-08-26 15:47:46,893][120654] Num frames 5800...
[2025-08-26 15:47:46,957][120654] Num frames 5900...
[2025-08-26 15:47:47,021][120654] Num frames 6000...
[2025-08-26 15:47:47,086][120654] Num frames 6100...
[2025-08-26 15:47:47,164][120654] Avg episode rewards: #0: 22.072, true rewards: #0: 10.238
[2025-08-26 15:47:47,165][120654] Avg episode reward: 22.072, avg true_objective: 10.238
[2025-08-26 15:47:47,201][120654] Num frames 6200...
[2025-08-26 15:47:47,262][120654] Num frames 6300...
[2025-08-26 15:47:47,320][120654] Num frames 6400...
[2025-08-26 15:47:47,384][120654] Num frames 6500...
[2025-08-26 15:47:47,443][120654] Num frames 6600...
[2025-08-26 15:47:47,547][120654] Avg episode rewards: #0: 19.981, true rewards: #0: 9.553
[2025-08-26 15:47:47,548][120654] Avg episode reward: 19.981, avg true_objective: 9.553
[2025-08-26 15:47:47,557][120654] Num frames 6700...
[2025-08-26 15:47:47,619][120654] Num frames 6800...
[2025-08-26 15:47:47,679][120654] Num frames 6900...
[2025-08-26 15:47:47,741][120654] Num frames 7000...
[2025-08-26 15:47:47,807][120654] Num frames 7100...
[2025-08-26 15:47:47,870][120654] Num frames 7200...
[2025-08-26 15:47:47,933][120654] Num frames 7300...
[2025-08-26 15:47:47,996][120654] Num frames 7400...
[2025-08-26 15:47:48,062][120654] Num frames 7500...
[2025-08-26 15:47:48,131][120654] Num frames 7600...
[2025-08-26 15:47:48,199][120654] Num frames 7700...
[2025-08-26 15:47:48,263][120654] Num frames 7800...
[2025-08-26 15:47:48,328][120654] Num frames 7900...
[2025-08-26 15:47:48,395][120654] Num frames 8000...
[2025-08-26 15:47:48,457][120654] Num frames 8100...
[2025-08-26 15:47:48,520][120654] Num frames 8200...
[2025-08-26 15:47:48,576][120654] Avg episode rewards: #0: 22.256, true rewards: #0: 10.256
[2025-08-26 15:47:48,577][120654] Avg episode reward: 22.256, avg true_objective: 10.256
[2025-08-26 15:47:48,638][120654] Num frames 8300...
[2025-08-26 15:47:48,705][120654] Num frames 8400...
[2025-08-26 15:47:48,765][120654] Num frames 8500...
[2025-08-26 15:47:48,831][120654] Num frames 8600...
[2025-08-26 15:47:48,897][120654] Num frames 8700...
[2025-08-26 15:47:48,965][120654] Num frames 8800...
[2025-08-26 15:47:49,033][120654] Num frames 8900...
[2025-08-26 15:47:49,098][120654] Num frames 9000...
[2025-08-26 15:47:49,159][120654] Num frames 9100...
[2025-08-26 15:47:49,213][120654] Avg episode rewards: #0: 21.668, true rewards: #0: 10.112
[2025-08-26 15:47:49,213][120654] Avg episode reward: 21.668, avg true_objective: 10.112
[2025-08-26 15:47:49,272][120654] Num frames 9200...
[2025-08-26 15:47:49,331][120654] Num frames 9300...
[2025-08-26 15:47:49,389][120654] Num frames 9400...
[2025-08-26 15:47:49,452][120654] Num frames 9500...
[2025-08-26 15:47:49,518][120654] Num frames 9600...
[2025-08-26 15:47:49,584][120654] Num frames 9700...
[2025-08-26 15:47:49,649][120654] Num frames 9800...
[2025-08-26 15:47:49,715][120654] Num frames 9900...
[2025-08-26 15:47:49,780][120654] Num frames 10000...
[2025-08-26 15:47:49,872][120654] Avg episode rewards: #0: 21.661, true rewards: #0: 10.061
[2025-08-26 15:47:49,873][120654] Avg episode reward: 21.661, avg true_objective: 10.061
[2025-08-26 15:48:04,257][120654] Replay video saved to /home/ubuntu/train_dir/default_experiment/replay.mp4!